Hashing It Out

Episode 6

Hashing It Out #6: Stuart Popejoy

ABOUT THIS EPISODE

This week's episode features Stuart Popejoy, founder of Kadena, creator of the Pact smart contract language, and engineer/designer of the Chainweb blockchain protocol. We go in-depth on design decisions for Pact, how language simplicity impacts the security and adoption of second-generation blockchains, how formal verification can be used to improve certainty in smart contracts, and the over-emphasis of Turing completeness for a very domain-specific language such as smart contracts. We also go into the design behind Chainweb, an unboundedly scalable proof-of-work blockchain architecture, which is expected to reach a whopping 10,000 tx/sec (864,000,000 tx/day) in its first year of release.

Welcome to Hashing It Out, a podcast where we talk to the tech innovators behind blockchain infrastructure and decentralized networks. We dive into the weeds to get at why and how people build this technology and the problems they face along the way. Come listen and learn from the best in the business so you can join their ranks.

All right, episode six of Hashing It Out. As always, I'm here with Colin, say what's up. What's up, guys? And today we are here with Stuart from Kadena. Can you give us a quick introduction: how you got into this space, what the project is, and what you're trying to do?

Sure, yeah. I'm Stuart Popejoy. I founded Kadena in 2016 with Will Martino. Going into that, we were working at a big bank, JPMorgan. I had previously built a lot of trading and exchange systems for JPMorgan and others, and Will had worked at the SEC doing quantitative analytics. While we were there we were leading a blockchain group that was helping JPMorgan both make strategic investments and do R&D to solve problems at the bank with blockchain, largely private blockchain. Kadena launched based on some technology we developed and open-sourced there: a high-performance private blockchain, as well as a smart contract language we developed called Pact, which was really a response to a deep understanding of Ethereum, the EVM and Solidity, trying to make something that we felt would make it easier and safer to write smart contracts. Then, about a year ago, we decided we wanted to make an investment in public blockchain technology by developing Chainweb, both to work in a proof-of-work system and to provide a scalability solution that actually uses proof of work. Chainweb is a parallel-chain proof-of-work architecture that allows for up to ten thousand transactions per second, and we announced it at Stanford. That kind of brings us to today: we've started building Chainweb, we've had the private blockchain in production for over a year, we've got some big clients, and yeah, it's been fun.

So I have this viewpoint of the current landscape, an analogy I've built out that might not hold up anymore, but I want to explain it because I think it sets the stage for the rest of this conversation. It looks at the history not of the Internet this time, but of computation itself, how it evolved and what problems we faced along the way. As CPUs got better and better and we decreased the size of transistors, a CPU was able to do more and more work, but we got to a point where that no longer scaled, so instead of computing on a single core that just got faster and faster, we moved to computing on multiple cores and spread the workload across multiple CPUs at the same time. We had to find a way of distributing work. This gave rise to two concepts of parallel computing. One is the embarrassingly parallel model, which is basically what we refer to as MapReduce these days.
The other is a fully parallelized model where the individual workers have to know at least some context about the other workers, whereas in embarrassingly parallel computation the workers are completely isolated and don't have to care what the other workers are doing, and the way you design tasks based on the communication between these processes is very, very different. The way I see blockchain is that we just got to the point of no longer doing all the work on a single CPU and are trying to figure out how to spread that work across multiple CPUs, a.k.a. blockchains in this context, and what I'm seeing is a lot of people trying to force full parallelization techniques into an embarrassingly parallel model, which inherently can't work. So, for one, do you think that's a good way to think about the problems we're currently facing, and if so, what can we do about it?

Yeah, so the terms...

...I'd use for those: the embarrassingly parallel case is parallelism, and concurrency describes problems that require knowledge of the other processes running. Parallelism is, in a way, the easier problem to solve. If you have a background in scaling distributed systems, and my background is in trading and exchange systems, one of the things you try to do from an engineering point of view is take the dumbest possible model you can and make as few assumptions as you can. It's surprising where assumptions pop up, and one of them is in this idea of trying to take a concurrent problem and make it parallel, which we see in blockchain when people start talking about DAGs and things like Hashgraph. They're trying to do what you might call partitioning the key space; when you're talking about web sharding, that's the way you'd express it. While that might work really well for a shopping cart or something like that, a lesson I learned in trading systems is that you can come up with these incredibly smart ways to partition your load, but then Apple releases their earnings report and all of a sudden everybody is trading Apple, so all of your fancy mechanisms to use all your cores or all your machines come to naught if you've assumed that nothing impacts anything else, because now everything is impacting the same thing, and you're back to the same old load-balancing techniques of the past.

And then the other side of it is looking at the history of consensus itself. I think there's a weird tendency to underestimate the magnitude of Satoshi's achievement with Bitcoin, in the sense that the idea that you could have a partition-tolerant, open system that's basically constantly under attack, that's not protected by firewalls, and have it achieve anything at all is a first in the history of computing. All of the work that goes into Byzantine fault tolerance and consensus makes huge assumptions about a closed environment and things like that. Bitcoin, as you mentioned, is like a single CPU, and Ethereum follows that with a more fully fledged CPU metaphor. What Chainweb tries to do is offer the lowest-level parallelization primitive possible. Chainweb is based on the notion that if you can build a single chain where each block's hash incorporates the hash of the previous block, you can take that to where a hash can also incorporate blocks from other chains that are running alongside it, and that serves two purposes. The first purpose is to have a notion of multiple chains and be able to keep them all on one kind of mega fork, or mega branch if you will, and that's possible because you can basically find yourself in the history.
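To make that braiding concrete before picking the thread back up, here is a rough sketch of the idea in Python. The structures and field names are illustrative assumptions for this discussion, not Chainweb's actual header format:

```python
# Toy "braided" header: each block names its own parent *and* the latest
# hashes of some adjacent chains. Illustrative only, not Chainweb's format.
import hashlib
from dataclasses import dataclass, field

def h(*parts: str) -> str:
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

@dataclass
class Header:
    chain_id: int
    height: int
    parent: str                                     # previous block on this chain
    adjacents: dict = field(default_factory=dict)   # peer chain_id -> peer block hash
    def block_hash(self) -> str:
        adj = ",".join(f"{c}:{p}" for c, p in sorted(self.adjacents.items()))
        return h(str(self.chain_id), str(self.height), self.parent, adj)

def referenced_by(history, target_hash):
    """Can a block 'find itself' in another chain's history, either as a
    parent or as one of the adjacent-chain references?"""
    return any(target_hash == hd.parent or target_hash in hd.adjacents.values()
               for hd in history)

# Chain 1 mines a block; chain 0's next block references it in its header.
b1 = Header(chain_id=1, height=10, parent="prev-hash-on-chain-1")
b0 = Header(chain_id=0, height=10, parent="prev-hash-on-chain-0",
            adjacents={1: b1.block_hash()})
assert referenced_by([b0], b1.block_hash())   # both chains share one history
```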
If you sample someone else's Merkle root, but they sampled you, or through some route sampled you in the past, you can find yourself in their hash, and that way everybody can ensure that they're staying on the same fork. What that ends up providing, though, is an SPV oracle. SPV, simple payment verification, is the technology that allows things like Electrum and other light desktop wallets to not have to run a full Bitcoin node. The problem they have to solve, and it gets discussed in the Satoshi paper, is that you basically have to make a probabilistic guess about what the longest chain is at any time, and the way you might do that is look at, say, six big miners, watch what they're doing, load their header streams, and in effect compute your own consensus. Chainweb ends up offering its own, in the sense that you just look at your own headers, and anybody who wants to prove a transaction that happened on another chain basically provides you with a proof that...

...allows you to stitch together the hops from the chain they were on to your chain (really you're going the other way, from your chain to their chain) and then do the normal Merkle proof to find where their transaction is. As a result, you have the ability to do a burn/create, where you burn a coin on one chain and then create the corresponding coin on another chain, and that's the kind of primitive that allows us to have a single currency over tens, hundreds or thousands of chains. It's also the same mechanism that allows us to run these chains more or less independently alongside each other and get a linear increase in throughput, up to ten thousand: if you have a thousand chains and they're each doing ten transactions per second, which is very pessimistic, we feel we can do a lot better than that, but just to be honest and say we're doing ten transactions per second per chain, we can get ten thousand transactions per second by linking a thousand chains together. The interesting thing about it is that it's a very low-level coordination primitive, and it's also available to smart contracts. But it's not a magic bullet for working out computation schemes. It does allow you, for instance, to load-balance a smart contract: you could deploy it to, say, five or ten chains, whatever you think your load is going to be, and it would even allow you to respond to a congestion event by hot-deploying to new chains. But insofar as your smart contracts need to be aware of the ones running on the other chains, it's not a magic bullet; it's just possible, because similar to using the receipts of the coin transactions to do a burn/create, you could use a similar token-sharing strategy to share state across smart contracts. The only other point I want to make is that that's only if you want to stick to the completely trustless, low-level mechanism, which, if you're super paranoid, is the one that gives you a basically fully trustless and proven workflow, but it might be kind of slow every time you have to move state across chains. Clearly there are tons of opportunities for trusted services, more typical oracle services, to plug in there and make it possible to do near-instantaneous transactions, where somebody asserts for you what the state of your smart contract is on another chain and makes it available to you in a fast or even completely straight-through-processing kind of way. So there's a bunch of stuff you can build on top of it to allow for very fast interaction between smart contracts if you're willing to give up a little bit of trustlessness. And then finally, of course, it's a level-one scaling solution, and by level one I mean that the base layer itself is where the scaling is achieved; that doesn't rule out the use of a Lightning Network or a similar level-two scaling solution as well.
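As a rough illustration of the burn/create flow just described (a sketch under assumptions: the destination chain is taken to already trust the source chain's Merkle root via the braided headers, and the receipt encoding and helper names here are hypothetical, not Kadena's actual SPV or coin-contract format):

```python
# Burn on chain A, prove inclusion of the burn receipt, then create on chain B.
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [H(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])                  # duplicate odd node
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    proof, level, i = [], [H(l) for l in leaves], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = i ^ 1
        proof.append((level[sib], sib < i))          # (sibling hash, sibling-is-left?)
        level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf, proof, root):
    acc = H(leaf)
    for sib, sib_is_left in proof:
        acc = H(sib + acc) if sib_is_left else H(acc + sib)
    return acc == root

# Chain A burns a coin; the receipt lands in that block's Merkle tree.
receipts = [b"burn:alice:10.0:dest-chain=7", b"other-tx-1", b"other-tx-2"]
root_on_chain_a = merkle_root(receipts)
proof = merkle_proof(receipts, 0)

# Chain B already "knows" root_on_chain_a through the braided headers, so a
# valid inclusion proof is all it needs before creating the matching coin.
assert verify(receipts[0], proof, root_on_chain_a)
```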
Look, what I just described might in fact be orchestrated by hash timelocks or some of the things we see being used in the Lightning Network, depending on what kind of trust model you want to use. So again, the idea is that the infrastructure shouldn't try to answer every single question; the infrastructure should basically make it possible to parallelize, and not make grandiose promises. That's my big problem with the DAG-based approach: it really presupposes that the problem you're trying to solve is going to decompose into something that can be solved by partitioning the key space, and there are any number of problems that don't decompose that way.

Right. It's interesting to me, though, that a lot of the solutions I see really want to retain state across, say, shards; they want to be able to reference state across the multiple parts of the chain, the throughput channel, whatever you want to call the thing value actually moves through in the transactions. It sounds like you've completely abandoned that approach, you don't even care about it, and I was wondering, is it even possible in your current system? Because it sounds to me like you could have chains for state and chains for value and chains for just high-throughput transactions, and they could all play together to build a larger decentralized application.

Yeah, absolutely, you really can do that. There are a few things that we need to keep uniform across the chains. I was having a discussion with somebody who brought up the possibility of having different chains use different hashing algorithms as a way to avoid...

...centralization. But one thing we need to be able to reason about over the whole network is the difficulty parameters, because, within some window of blocks being a little off from each other, Chainweb runs at essentially a single block height at any time. That implies we need to be able to reason about the hashing difficulty across chains. So some things do need to be uniform, but really not that many. One interesting thing: Chainweb's design has a fairly distinguished pedigree, in the sense that Adam Back and others proposed, I want to say in 2014, a "beta coin" on the mailing list, and someone else proposed a thing called "block rope", and these were strictly two-chain proposals using basically the same mechanism, largely as an approach to security, but also, in the beta coin case, as a way to stage things: basically to have a hot beta of the next Bitcoin release so they could find bugs, so they wanted to be able to burn coins and create them on the other chain. So it really was the same design, but not looked at as a scalability solution; it was looked at as either a staging and flexibility solution or, in the case of block rope, which came after Mt. Gox, a security solution. They wanted two chains because two chains are harder to attack than one: with the normal fifty-one percent attack, or whatever you want to call it, where you try to rewrite the history of however many blocks you need to cause some fraudulent transaction to occur, in Chainweb you'd have to do that for every chain that gets referenced by the chain you're trying to attack, so the increase in security is intuitive and dramatic. That was the block rope proposal. So that's the pedigree, and in fact they talked about it in the original sidechains paper; one of the things they were trying to avoid was a belief that this would make the network harder to do community hard forks on. But that's not how we came to it. The way we came to it was based on features in our smart contract language. We liked Pact and we wanted to put Pact on a public chain, and at that time everybody was talking about governance. Pact has always had governance at the smart contract level, in the sense that any smart contract you deploy in the Pact system necessarily has to have a public-key-based governance regime that allows you to upgrade the contract, fix an exploit, fix data, anything like that. With all the bugs that come out on Ethereum, where they basically have to do a hard fork to fix them, this was something we felt would be a huge improvement all by itself. But Pact also has a primitive mechanism for orchestrating multi-step transactions, and the concept that led to Chainweb was: well, what if we could do SPV inside of a smart contract?
That's kind of how we ended up with it. It was a state-sharing thing: okay, we have this nice trustless way to put together a multi-step transaction thanks to Pact; if we gave Pact the ability, made it easy for developers, to do an SPV proof inside of Pact, what would result? The first thing to pop out was an SPV-based way to do an exchange with Bitcoin or Litecoin or something like that, although the atomic swap, the hash timelock contract, is probably a faster and simpler way to do that. But once we thought about what it would mean to do the same thing against our own chain, and the fact that you'd need to incorporate the headers, that's actually where it came from. This emerged as a solution for using SPV to coordinate two chains sharing state, and then it ballooned out from there.

So yeah, you have the ability to add many, many chains to this, but it has one genesis. Looking through your BPASE talk, it looks like you said ten thousand transactions per second with a thousand chains. Is that, by the way, a hard cap? And I'd like to take a step back and ask a...

...question about the number of chains and how they're interconnected, because I think there's automatically a number of naysayers who say you can't do that, and I think that has to do with the fact that, as you said, you have a bunch of chains running parallel to each other and you then have to help them communicate with each other by passing messages across them, which you do through references to block headers. Now, if you do that naively, you end up with an exponential scaling problem, because all chains can't connect to all other chains. From what I've gleaned from the talks, you're using graph theory as a method to optimize the communication among chains, which gives you somewhat of a tradeoff in the time it takes to move an asset from one chain to another, but it also minimizes the amount of computation, which I'm not sure gets rid of the exponential scaling but at least makes it tractable. Can you talk about that a bit before we move on to how Pact does this?

Well, the only thing I'd note, before I answer the question, is that you used the term exponential, and that's a degraded case, right? There's no question you could come up with a super degraded case in which every single transaction in the system has to reference a transaction on another chain, but it's pretty absurd to worry about that because there are so many solutions to that problem. We have our primitive solution, of course, which is that you do this kind of stitching together of the chains, and depending on how much confirmation you need before you do that, you might have to wait a few blocks, so the inter-chain communication can be quite slow. One thing I want to add is that there are so many things to talk about with Chainweb, and one of them is that, since two chains are stronger than one, a thousand chains are absurdly stronger than one, especially with the way the propagation works, so once we can build a robust simulation that gives us hard numbers about the probability of an attack, we're going to be able to take the difficulty down considerably. And then, lastly, the graph theory part is basically the notion of a diameter. The diameter of a graph is a kind of longest shortest path. That means we can conceive of a network configuration with a thousand chains where every chain is directly connected to ten other chains, but you can get to any other chain in the network in three hops; that's the diameter. It's the same concept as six degrees of separation from Kevin Bacon, for anybody who didn't quite catch that. And here the word degree gets flipped relative to that colloquialism: degree is the number of immediate neighbors you have, the number of edges to an immediate neighbor, and diameter is what "six degrees of separation" really means; it's really six edges of separation.
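As a toy illustration of the degree/diameter idea (the graph used here, the Petersen graph, is just a small well-known configuration: ten chains, each referencing three neighbours, any chain reachable in at most two hops; a real deployment would presumably use a larger solved configuration like the degree-ten, diameter-three one mentioned above):

```python
# Degree vs. diameter on a known graph, checked by brute-force BFS.
from collections import deque

def petersen():
    adj = {v: set() for v in range(10)}
    for i in range(5):
        adj[i].add((i + 1) % 5); adj[(i + 1) % 5].add(i)                  # outer cycle
        adj[i].add(i + 5);       adj[i + 5].add(i)                        # spokes
        adj[5 + i].add(5 + (i + 2) % 5); adj[5 + (i + 2) % 5].add(5 + i)  # inner star
    return adj

def diameter(adj):
    worst = 0
    for src in adj:                       # BFS from every chain
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        worst = max(worst, max(dist.values()))
    return worst

g = petersen()
print({v: len(n) for v, n in g.items()})  # every chain has degree 3
print(diameter(g))                        # 2: any chain is at most 2 hops away
```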
So yeah, and this is a problem with known configurations. It's a hard problem to solve a priori, but there are all these configurations that have already been solved, so there's a bunch that have, say, four versus three hops to reach the entire network. But again, as I said before, there's nothing about Chainweb that precludes using level-two solutions to improve performance. You can do whatever you want in terms of using sidechains. You can still see Chainweb as this kind of massive mainnet; the big difference is that it's a mainnet that is parallel, so it's actually far more flexible. I think the implications of Chainweb for designing level-two solutions are basically completely unexplored. Someone like Zaki would probably be the person to start looking into that first, in the sense that if you look at Chainweb as a mainnet, the size and power of the network that level-two solutions could take off from is kind of absurd. One thing I question in a lot of the discussions in blockchain is this tendency to talk about scaling solutions as though the implications of...

...a scaling solution are understood. If you've built large systems that scale, which I have, I can tell you: you need to roll that thing out into production before you're going to understand it, and that's all there is to it. So for me, what I like about Chainweb is that I know what the issues are going to be. What are we not changing? We're not changing proof of work. We know that each of the chains is going to be partition tolerant and is going to have all the features we like about Bitcoin and Ethereum, and we're not going to lose any of that. All we're doing is taking a property the system easily accepts, basically the blockchain itself, Stuart Haber's design of incorporating hashes in a Merkle structure, and leveraging the fact that it can easily incorporate another Merkle structure to give us a low-level form of parallelization that you can then build a huge number of things on top of, the same way people are doing with Lightning or anything else.

Can we... I feel like the next almost automatic argument someone would come up with against what you're doing, or at least question in terms of explaining it, is: how does mining in this scenario work? Is it something anyone can jump in on and off of, like Bitcoin, or maybe not Bitcoin, but that idea of permissionless participation in becoming a miner? Or is this something that's going to be basically relegated to a few powerhouses of people who can mine? It almost feels like mining pools are really going to be essential for this kind of thing to take off.

That's not really the way we look at it. I can see why people jump to that, and that was one of the first questions at the BPASE talk. I want to get back to your earlier question about the size of the network, but let's stick with this thousand number, because that's a kind of intimidatingly large number. The idea being that you're actually going to try to successfully mine a thousand chains: we're not talking about fifty GPUs in your garage, to give an upper bound on individual efforts, we're talking about a data center somewhere, something like that. Therefore there's this tendency to say, oh well, Chainweb is all about centralization, because now you need to be a powerhouse to have a view of the entire network. The points to make there are, first, that Chainweb is a hundred percent permissionless, just like Bitcoin, in the sense that you can mine any individual one of these chains; they operate the exact same way, so there's no reason you can't jump in on one, two, four, whatever. In fact, if you're somebody who's written a smart contract and you load-balance it to, say, ten or twelve chains, it's probably in your interest to run full nodes, or even mine, on those chains. But the interesting thing is that Chainweb introduces an interesting variable into the mining optimization.
When miners are trying to figure out how to mine most profitably, the only real reference point we have is miners who mine both Bitcoin Cash and Bitcoin: the situation we've seen where nobody's mining Bitcoin Cash and the difficulty falls, so all the miners who do both switch over to Bitcoin Cash, make a bunch of money mining like crazy at the low difficulty, and then the difficulty pops back up and they all jump back onto Bitcoin. Obviously there have been mitigations for this, but it's an interesting lesson that applies to Chainweb, which is that the big bad miner who's mining all the chains is still going to be interested, at any point in time, in mining the chain that is furthest behind. As I said before, there's this notion of a single block height: you're going to want to be observing all the chains and finding the ones that have had the least work done on them, or where you have the impression the least work has been done, because your mining dollar will be better spent there, as opposed to "you just won this chain, so you're going to stay on that chain and try to get the next block."

Right, and I have a question about that, because that point in particular is striking to me. If you're monitoring all the chains, how can you guarantee that they're all going to have a similar hash difficulty, or hash power given to them, so that...

...they all end up with the same sort of block times? It seems like it's important that the difficulty remains the same across all of them, and in turn the block times are also intended to remain fairly consistent, say ten-second block times on all chains.

It's the model that needs to be consistent, not necessarily the difficulty moment to moment. You need to have the same computation, but the other thing, and this is the reason I said there's a window, is that at some point everybody has to be caught up for the network to make progress. The network cannot leave a chain behind. So again, I was talking about it from a mining-optimization point of view, but there's another point of view, which is that you're also going to do that just because you want to make money on this network: as a miner you're going to go where the work is needed so that you can help there. It's an interesting thing: the mining reward incentivizes a kind of smooth function for spreading your mining work across the chains, and as that moves forward, the difficulty necessarily needs to stay within range. But it doesn't have to be in lockstep. You can obviously have a chain that's moving ahead, depending on the tolerance the network has for other chains getting behind the chain that's moving ahead, and in theory, depending on how often the hashing difficulty readjusts, you could see slight differences, but eventually the network is going to stop making progress if some chains are left behind.

And how does that mechanism work, then? How do you guarantee that the whole network will make progress if some chains are left behind?

I mean, it's no different from Bitcoin if you think about it. The weirdest thing about Bitcoin is that it could stop making progress tomorrow, because all the miners could just say forget it; there's nothing forcing them, it's all incentivization-based. Likewise with Chainweb: the incentive is not only to keep doing proof of work, but also to efficiently spread your proof of work over all the chains, because if you don't, eventually you're not going to be able to make money anymore.

Got you. And how does that not open an attack vector? You see this now: certain miners carry a significant amount of weight. They could all get together and go, okay, we see this application, let's say Facebook is on these chains; we know that if we stop mining on those chains, then Facebook's productivity will go down significantly. Does that not seem like a concern, or am I misunderstanding?

But then they're shooting themselves in the foot, right? Because that chain will quickly slow down, and they will even halt the entire network if they hold up a single chain. So it's actually a fairness mechanism,
in the sense that you can't attack a single chain, because if you do, you're attacking the entire network. And by attack, of course, here we mean a liveness attack. The reason miners can't radically ignore certain chains is that if they do, eventually the other chains are not going to be able to make progress, and that limit is generally the diameter, by the way. When we talk about degree and diameter: at that first degree, you need to incorporate your peers' last blocks so that you can make your next block, but they need to incorporate their peers too, and so, depending on the diameter, you basically have maybe diameter-many blocks before you cannot make a new block. So it actually forces a high degree of coordination on the chains. And the last point I want to make is that the coordination is one of the most interesting things about Chainweb, because it actually creates a role for large miners and small miners to work together. If you have a mining pool that is focused on, say, five chains, that's something a big miner can take into account, the work you're doing there, when optimizing how they're going to mine the entire network. And that's something I think is very new.
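A toy model of that liveness argument, assuming a simple ring of ten chains where a chain may only extend once its neighbours have produced the blocks it must reference (illustrative only, not Chainweb's actual rules):

```python
# If miners refuse to work on one chain, its neighbours stall first and the
# stall spreads; no chain can get far ahead of the ignored one.
N = 10                                                 # chains in a simple cycle
neighbours = {c: {(c - 1) % N, (c + 1) % N} for c in range(N)}
height = {c: 0 for c in range(N)}
ignored = {0}                                          # chain the cartel ignores

for _ in range(100):
    progressed = False
    for c in range(N):
        if c in ignored:
            continue
        # chain c may mine its next block only if every neighbour has already
        # reached c's current height (it must reference their latest blocks)
        if all(height[n] >= height[c] for n in neighbours[c]):
            height[c] += 1
            progressed = True
    if not progressed:
        break                                          # the whole network halts

print(height)   # heights stay roughly bounded by distance from the ignored chain
```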

There are obviously reasons to want to oppose big miners, but it's a commonly held belief, and certainly one I hold, that it's a war of attrition: that hashing is not very centralization-resistant beyond some point, depending on GPUs versus ASICs and all these kinds of things you can discuss. But Chainweb is interesting at least in that, as those kinds of differences emerge, there's a productive way for large and small miners to coordinate.

So, can I re-explain that to make sure I have it? You're saying, because everything is connected, let's just say a network with a bunch of chains where everything must be at least degree three, I think you said.

Maybe; I was saying ten and three. I don't know if I have the exact numbers right, but either way, let's say you have degree ten, diameter three, a thousand chains.

Okay, so degree ten: you need to include headers from the ten chains that are your nearest neighbors in order to produce the block on your current chain. So if somebody is all the way on the other end of the network, not connected to you within three, say nine hops away...

Well, if we did degree ten, the diameter would be three; the diameter is at most three.

Okay, so let's just say it is. That makes sense. So they would be all the way on the other end of the graph, and they're attacking...

That would be really slow, because remember, you can switch over and mine those chains yourself, right?

Right, but I was saying, let's say somebody is attempting an attack over there. You would notice that and go, okay, now I can't produce what I need in order to continue my work, so I'm going to hop over there. Furthermore, it would just disincentivize them, because whatever they're withholding over there wouldn't gain them any benefit; it's also withholding from the whole network within, let's say, four blocks.

Yeah. All right, so there's another aspect of this. I want to look at an attack from the opposite perspective, since that was block withholding. It seems as though you've leveled the incentives properly so that withholding a block doesn't make sense: people can just hop on that chain and mine that block if they're incentivized to have the information from that particular chain. The opposite of that is, if you have an outside actor with a lot of mining power producing blocks on a single chain, would that not also cause issues with the coordination of everything else? Because if it's, say, fifty blocks ahead, you can't include any of those things.

You can't, and they can't produce new blocks until the others catch up. Right, yeah, it all balances out. It's actually pretty solid.

Yeah, that's neat.

And you know what's the craziest thing about designing the system? The craziest thing about it is that most of these things emerge from the design. It wasn't like we were sitting in our lab cackling maniacally.
It was like, we just came up with this primitive; we just started with SPV. We wanted to have two chains talk to each other with smart contracts and SPV. In fact, here's where this came from, the way it came out of the smart contract language: one of the biggest warts on the EVM is the fact that paying money itself is not in the smart contract language per se, but is part of the messaging system that contracts use to call each other. The only way you pay money in the EVM is by doing CALL, or the related opcodes, and attaching an amount of ether to that. And now there's an effort, I don't know where it's at right now, to actually make an ERC-20 interface for ether itself so that you could use the same interface. But we wanted to solve that problem at the root by having a smart contract be the mechanism by which the coin is managed in the system. Even in Bitcoin that's not true: in Bitcoin, the trustless part is the ownership of the outputs. Obviously, when you run a Bitcoin transaction you're running the script to guard those outputs and release them into the transaction, but the mechanism by which they're released into the middle of the transaction, where the new outputs get generated, is hard-coded as part of the protocol. We wanted to have an entirely smart-contract-based coin, for any number of reasons. Originally we thought we might want to have governance, but that's an extremely fraught question, so the coin in Chainweb is going to be autonomous. In other words, while we can have governance on a smart contract, you can also make an Ethereum-style smart contract simply by making it non-upgradeable, that is, there are no public keys that can upgrade it. So the coin contract is only going to be changeable by hard fork, just to avoid the...

...problem of becoming a money transmitter if you have control over a cryptocurrency-issuing regime. So the governance isn't the big deal there; it's more that Pact has a smaller surface area, all the things that make Pact safe to write smart contracts in, including our upcoming formal verification, which I shouldn't forget to talk about. In the next week we're going to release a first cut of a formal verification solution for Pact, where we're able to compile Pact directly to SMT-LIB 2 and offer a property-checking language that melds in with the Pact code, to allow normal, non-PhD application developers to use a formal verification system to prove things.

Oh, that's awesome.

That benefits the coin contract, because now we can load the coin contract up with all sorts of proofs and say: look, our coin contract has a very small surface area anyway, but beyond that we've gone and loaded it up with every proof we could think of. Of course, you can't ever say that something will never have a bug, but we'll do the very best job we can. At that point we got into this thing of, okay, now SPV should be something that's native to the smart contract language, and that, weirdly, is how we ended up at Chainweb: we wanted to be able to do that.

I think the way we can maybe start moving into that conversation is to first explain the concept of smart contracting languages and how they interact with the base layer. For most people, the intuition here is programming in Solidity and its compilation into bytecode for the Ethereum Virtual Machine, the EVM, and what you're doing is not that. So I think it would help to explain how a human writes a contract and how that then interacts with the base layer, how Pact and Chainweb fit together.

I might go the other way, if that's all right, and start with Bitcoin for context. Bitcoin is a user-programmable system, but with a very unique model; I don't really know any other system quite like it, and it's what I was talking about before: you have the ability to write these tiny little scripts that will allow you to release an output in a transaction to be reallocated to somebody else, or back to yourself as change, or whatever. Bitcoin had this bytecode, and the reason for the bytecode was that, while you can certainly write some fairly complex ownership schemes, the very model of computing means that, unless you're doing colored-coin hacks and trying to use timestamping to represent things that don't really resemble a UTXO, as long as we're talking about UTXOs it's a very special-purpose language. The bytecode is kind of right-sized for the problem at hand, and you can directly read Bitcoin bytecode. It's not that easy, but you can do it; you've got to get used to it, but then you can look at pay-to-pubkey-hash or any of these things, you can look at the hash timelock contracts.
You can read the bytecode and understand what's going on, and because it's not Turing complete, it will always terminate. So if you look at it that way, the bytecode is not a low-level machine language; it is a DSL, a scripting DSL that runs on Bitcoin. It's in bytecode probably just because they wanted it to be small, and because there are certain things about stack machines: one is that they have point-free code, in the sense that there are no variables, and that's actually a safety thing. It limits the amount of computation you can do, and that was wisely, I think, seen as a way to reduce the surface area of Bitcoin scripts, which, if you look at their history, have had a lot of opcodes disabled to make them even safer; as reduced as the language was, they got rid of things like string concatenation to make it safer still. So when we move to something like Ethereum, I feel like that is a radical departure from Bitcoin, and it doesn't seem like it, because it looks like they just added a few things: they added JUMP, they added CALL, they added storage, they dropped the UTXO.

So there are a lot of differences there, clearly.

Well, but think about it: the UTXO is not in Bitcoin Script. The UTXO set is strictly external to Bitcoin Script. Bitcoin Script is strictly just "I've got this signature, I've got this public key", and it's basically issuing a boolean, pass or fail. It's like a firewall guarding entry to the UTXOs.

Yeah, it's basically a custody system.
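To show what a tiny pass/fail custody DSL looks like in practice, here is a toy stack machine running a pay-to-pubkey-hash style script in Python. The signature check is stubbed out and the hash is a simplified stand-in for RIPEMD160(SHA256(x)), so this is a sketch of the execution model, not real Bitcoin consensus code:

```python
import hashlib

def hash160(b: bytes) -> bytes:
    # simplified stand-in for Bitcoin's RIPEMD160(SHA256(x))
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()[:20]

def check_sig(sig: bytes, pubkey: bytes) -> bool:
    return sig == b"valid-sig-for:" + pubkey          # stub, not real ECDSA

def run(script, stack):
    for op in script:
        if op == "OP_DUP":
            stack.append(stack[-1])
        elif op == "OP_HASH160":
            stack.append(hash160(stack.pop()))
        elif op == "OP_EQUALVERIFY":
            if stack.pop() != stack.pop():
                return False                          # custody check failed
        elif op == "OP_CHECKSIG":
            pub, sig = stack.pop(), stack.pop()
            stack.append(check_sig(sig, pub))
        else:
            stack.append(op)                          # push literal data
    return bool(stack and stack[-1])                  # single boolean: may I spend?

pubkey = b"alice-pubkey"
locking_script   = ["OP_DUP", "OP_HASH160", hash160(pubkey),
                    "OP_EQUALVERIFY", "OP_CHECKSIG"]
unlocking_script = [b"valid-sig-for:" + pubkey, pubkey]
print(run(unlocking_script + locking_script, []))     # True: the output is released
```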

So if you look at Bitcoin the way I think you should properly look at it, which is basically a DSL for custody, then the EVM has nothing to do with it. The EVM superficially looks like it because it's a bytecode and they just added JUMP and CALL, but now you have completely changed the entire purpose and reason for existence of the bytecode itself. And my argument is that that was just the wrong move; sorry, but I don't think that was the right way to go. Obviously I wrote Pact, so I think there's a different way to do it, and the way I wanted to go was to stick with this idea of scripting and DSLs. Insofar as you can read Bitcoin bytecode, which I think you can, it's not exactly fun; I think it's fun, but for a beginner it's certainly going to be a little intimidating, let's put it that way. I think even Ivy, Chain's language, started off as something that could be compiled to Bitcoin bytecode, which again I think misses the point. Pact is more the idea of: let's take a step back, consider what we're trying to do with smart contracts, and try to write the smallest possible language. It needs to be a real scripting language, in the sense that it needs to have code, not just bytecodes; it needs to have variables, abstraction, functions. But let's keep it Turing incomplete. Pact is basically an attempt to ask: what are the kinds of computations we're going to be doing in this smart contract, money-transacting area, what do we need to get that done, and how can we eliminate everything else? In that sense I see Pact as much closer to that Bitcoin lineage, and the other thing about Pact, of course, is that it has primitives for doing public-key authorization, which is also how it derives directly from the Bitcoin heritage. The EVM, on the other hand, says: stack machines are great, let's make a stack-based little CPU model and chuck it on there, without really thinking about what it means to have a general-purpose CPU running on a blockchain with a very low-level storage model and all these kinds of things. There's no real reason for that to be the compute model on a blockchain, and in fact it has huge problems, as we all know, the biggest being that by making it very clear that this was a compile target and not something you're supposed to directly understand, it led directly to the idea of writing a surface language like Solidity that deliberately and purposefully compiles to code no human could ever really read. And not only that, it introduced the problems of compilation, and compilation is a very fraught field; there's a lot to think about between a low-level CPU model and a high-level language, if you look at what we've done in every other field of computing.
In fact, even calling it a virtual machine is a bit of an abuse, because if you look at the history of virtual machines, you look at the JVM, you look at BEAM from Erlang, you look at the LLVM, you look at anything else that's called a VM, they don't make naive assumptions like "oh, we just have this tiny little computer running here." They try to make those things as high-level as they can get away with; in the case of the JVM, very high-level. The JVM has a lot of knowledge about dispatch, about the names of the functions being called, about the names of variables. Java bytecode is almost readable as Java, and that's deliberate, because the point of the bytecode in something like the JVM is not to be low-level; it's to erase a lot of the things that get in the way of optimization when you're dealing with source. It's what we call lowering: something that is more amenable to optimization, hot optimization inside the JVM and things like that. So the EVM is not really a VM; it's a machine, a model of a CPU, and as a result, the nicest thing I can say about it is that it represents a bold, kind of unhinged experiment: what would happen if we put a machine on a blockchain? And I think the answer is pretty clear, which is that safety is going to be a huge problem, and that is going to take years, and you're basically turning back the clock, because safety was one of the reasons we started doing VMs in the first place. People love...

...to hate the JVM, but the fact is that the JVM is a safer computing environment than coding on metal; that's all there is to it.

Well, before you get too deep into that, I really feel like the reason they made the EVM the way it is is different than, say, the JVM or the LLVM. The purpose of the EVM was to get a small bytecode, or something that would take a small bytecode with a small, limited instruction set, and actually publish it to the blockchain and let the blockchain be the distribution mechanism in an assuredly correct way: you know that what the person posted is actually posted, it's actually in the blockchain. Which actually is one of my questions about Pact, because I haven't had a whole lot of time to get deep into it: where is this code? How do I retrieve the Pact code? How do I know where it exists, and how do I verify that the code I pulled down is the same code as what was published, what is expected?

Oh, the great news is you just go right onto the blockchain and query it and you'll see the code. It's much nicer than something like Ethereum, where, sure, you can go to a contract and look at a bunch of gobbledygook bytecode that there's no way you're going to be able to reason about. You're going to have to go to their GitHub, look at the Solidity they compiled, and say, oh, I see, I'm supposed to do this RPC call and provide an ABI or something like that.

Yeah. And of course you can use an ABI explorer and then you'll get a little more information because of the Solidity ABI, but remember, you don't have to use the Solidity ABI. In fact, that ABI is one of the many things holding back the platform, because there's a whole other social thing that happens with compiling, which is that you get locked into a backward-compatibility regime before you've even started, and the ABI is one of the worst parts of it. Whereas Pact is really straightforward: you just hop onto the blockchain and make a local call. In terms of the compute model north of the smart contract layer, Ethereum and Kadena are the same, in the sense that it's deterministic; you're not able to do things like call out to the network, you still need things like oracles, all that stuff is the same. Likewise, you can do something akin to exec, which pushes a transaction through blockchain consensus and then executes it, or you can do something like local, which just looks at your local node, can't make any changes to state, and is basically a query.

Right. But I think this is getting away from what I was actually asking: it's a small bytecode, and that's why they did it that way, so that everybody could have a copy without increasing the size of the blockchain tremendously.

Well, hold on. A small bytecode does not mean a small program, correct?
Correct, but if you're posting the whole Pact program and it's, say, two thousand lines of human-readable Pact...

Well, the first thing to remember is that code gzips beautifully. I would not want to make any naive pronouncements about the size of compiled Solidity code versus Solidity source in terms of compression. And I just don't think that's really the reason; I don't think it's for compression, let's put it that way. There are other ways to compress: you could compress Solidity source with gzip and you'd end up with a very nice, tiny little thing on chain. I don't think that's why. I think that might be why Bitcoin used a bytecode, but in any case: one of the first things I did, back when I was at the bank, was write and open-source an Ethereum interpreter in Haskell. It's one of only three out there; it's called Masala. Now it's like a Frontier-era EVM, but honestly they haven't changed all that much about it. And one of the things I got into was coding directly in EVM. I was like, oh, this is pretty fun; I've worked in other languages like Joy, which is an interesting stack language, and there's Forth, there are stack languages out there, of which the EVM is one. So I thought it was kind of interesting. But the problem is that nobody is doing that. Nobody is coding in straight EVM, because the whole push of Ethereum has been: use Solidity, use this object-oriented model (I could go on for hours about why I think object-oriented is the wrong approach), use this model and don't worry about the bytecode and don't worry about the ABI. Just write Hello World; it looks like JavaScript, it acts like JavaScript, write that kind of stuff, it'll compile to whatever, don't worry about that, and you...

So the point being, whatever the intentions might have been, what has resulted is that the EVM is primarily a compile target, and the one thing that buys you is the ability to say, oh, we could go and write another surface language. Of course, if you're going to write another surface language, you're in the position of needing to respect all of that: if you want to interoperate with any other contracts on there, you're still going to have to put an ABI shim on it. So the point is that when you're talking about something like a VM, small is not necessarily good; in fact, if you think about it, big would be better. If you had a bunch of really high-level opcodes that could slice and dice, that actually had types, where you actually store a string, God forbid, and it's a string, not a bunch of uint256 words, that would actually shrink the size of the code much more, because either you're targeting people writing in raw bytecode or you're going to give them these feature-rich opcodes. And in fact that's kind of the ideal of Pact, too: make the language itself as small as possible, but provide a very rich standard library. Not absurdly rich, but provide the types you need and all the library functions you need, with things like a type for multisig keysets, because we just don't want you to have to worry about that. If you want to do multisig, you can do it, and you can do it at once; in fact, the only way to do single-sig in Pact is to write a keyset that only has one key. And that was another thing that's crazy about Ethereum: the way they baked single-signature into the API, in the sense that multisig smart contracts in Ethereum have to run multiple transactions if they want to leverage Ethereum's trustless signature mechanism, or they have to build it in themselves and spend all the gas to do the verification. Ethereum is hard-coded to a single signature per transaction if you use the external mechanism, whereas Pact is natively a multisig container. And it's not just multisig, it's multi-curve: with Pact you could actually have a secp256k1 signature and an Ed25519 signature and a signature on yet another curve all on the same transaction, and you could do multisig computations with that. That's really nice, but there's an aspect of this: I would actually argue that the reason Ethereum has been so successful is because they made it so easy for people to build. The reality of the situation, regardless of the design decisions they made, is that there's a large developer pool because people can get things done and deployed. Absolutely. Solidity being an easy language, insofar as Solidity is easy, is what it's all about. I don't disagree with that. One of the major aspects of blockchains in general is the community aspect; they set that from the get-go with Ethereum, or Bitcoin for that matter. Synergy, whatever the word I'm missing here is, synergy is the key part of this.
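Going back to the multisig keysets described a moment ago, here is a minimal sketch with hypothetical keyset and key names. The key list and the predicate (keys-2, the built-in "any two must sign" rule) ride along as data in the deploying transaction rather than living in the contract code; a keyset with a single key is how single-sig is expressed.

```pact
;; Transaction data (JSON) supplied alongside the deploy, defining a
;; 2-of-3 keyset -- key names here are hypothetical:
;;   { "payments-admin": { "keys": ["alice-key", "bob-key", "carol-key"],
;;                         "pred": "keys-2" } }

(define-keyset 'payments-admin (read-keyset "payments-admin"))

(module payments 'payments-admin
  "Toy module guarded by a multisig keyset."
  (defun sensitive-op ()
    ;; Fails unless the transaction carries signatures satisfying the
    ;; keyset predicate (any 2 of the 3 keys above).
    (enforce-keyset 'payments-admin)
    "ok"))
```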
And applications are further benefited by other people building applications, which requires people to actually participate in that pool. So the value of a token, or the scarcity of that blockchain, becomes larger because people are contributing to it, and so on. Now, you may be able to offer a lot of these things, but how do you get people to actually move from things they're already accustomed to and learn something new to contribute to your blockchain ecosystem? Well, by coming on your podcast. No, I mean, we've got our work cut out for us as far as that stuff goes. And look, Ethereum is first to market, so of course, as a competitor, we're going to be shooting arrows at it all the time. But the fact that we can talk about smart contracts without having to spend ten minutes explaining what we're talking about is entirely because of Ethereum, yeah, and Bitcoin before it. So, like we were talking about computational models before: to me, one of the reasons I'm so excited about this space in general is that if you have a background in distributed systems, you've faced all these problems before, and you generally had really crappy solutions. I grew up with Fortran and MPI, that was my PhD, so I've faced some communication issues in my day, just saying. You had this all across the spectrum. Take just hot-loaded code itself:

the idea that you want a multi-tenant user environment for safe code, one that has a pretty good idea of what you're going to do with it. The BEAM, Erlang's VM, is probably the most mature environment that most closely resembles this, although it's necessarily a more complicated environment. For whatever reason we aren't all programming in Erlang, and, I don't know, Erlang and Elixir sound great, but the point is that I've encountered these problems. I've written little DSLs for trading, for traders to write their own algos in a script they could understand, so that they could deploy things out to production. At a bank it can take up to six months for code to reach production, so a user script that can be hot-loaded into production was tremendously valuable there, but again, it's fraught; there's a bunch of issues you have to deal with. Just having something that's partition tolerant, having a platform to run on, a public platform that is resilient to attack where you don't have to worry about that, this is like cloud 2.0 in a big way. And then having money on it: the fact that you don't have to have a shopping cart, the fact that you can directly transact, these are all incredibly wonderful things that we couldn't even talk about ten years ago. So I just think this is the coolest space to be in by a long shot for all things compute related, if you care about distributed computing, because Ethereum, with all of its problems, and I'm obviously of the opinion that it has a lot of problems, is still one of the most exciting things to happen in computation in general, insofar as anything has worked. And let's not underestimate what it means when things work. You have things like ICOs, you have things like CryptoKitties, you have these applications that are actually running in this environment and proving their utility, and now you've got new challenges, and the biggest challenge is scalability. If Ethereum could get to their next scaling target tomorrow, this would be kind of a different conversation, because one of the reasons the problems with Ethereum's safety loom so large is that right next to them is the problem of scalability, which isn't Ethereum's fault per se; Ethereum is just a single-chain proof-of-work blockchain. But we've got to solve that problem for smart contracts to take off, which is why we always talk about Chainweb at the same time we talk about Pact. Let's say Pact is just clearly better than the EVM and we're going to have a cakewalk, which we won't, for all the reasons you say, but let's say for some reason we did, I'm the most popular guy in the universe and everyone's going to do what I say: without scalability, no business is going to put their stuff on a blockchain, because in the end scalability is as dangerous as an exploit, in the sense that you're going to put your major business on it.
Say your business was CryptoKitties, say your business was an ICO, and of course those are businesses, or something at least, and because one of the other ones is more popular than yours, it's going to slow down your business and you're going to have all this congestion just because some application is succeeding. That's unacceptable. Yeah, from a business risk point of view, that is way too much risk. So we have to... sorry, go ahead. Let's talk about how you scale your network a bit then, because I did get the feeling, while listening to a particular talk on Kadena, that you can't really decrease the power of the network, you can't really lower the number of chains in it, which is fine; actually, I think in an ideal scenario that should never happen, to be frank with you. But how do you increase it without using a hard fork? That's right, you have to hard fork every single time you want to increase it, but remember, the idea is that this isn't something you need to do all that often. So our roadmap right now is testnet; I mean, testnet's a little different, because on testnet we can obviously fork all day long. Our first one is probably going to be something like ten or fifty chains, and then we'll hard fork to a hundred. For production, we might launch with a hundred, we might launch with five hundred. We probably won't launch with a thousand, just because that's going to be a big party for no one to show up to if there's not a lot of utilization. We want to right-size the chain for how much utilization it's going to have. But yes, the network configuration itself is a hard fork, and so the idea is that once we figure out what our scaling targets are...

The point is, we just have to have scaling targets and we have to have that conversation. It's not just "we've got as much as we need, we're good forever." Remember the question you asked before: is there an upper limit? There isn't. These graphs, these expander graphs or whatever you want to call them, degree-diameter graphs, get very big; there are configurations that have a hundred thousand chains, no problem, and that's around the limit of what's been catalogued. But do you have to increase by a certain amount, some geometric progression? The degree-diameter problem is hard; each of these graphs is essentially unique, so it's not like you get a smooth function. There's a table on Wikipedia, if you look up solutions to the degree-diameter problem, with all the ones that are known, and they get quite big. The problem is that once you get up to something like a hundred thousand chains, and especially given that we suspect the difficulty will be inversely proportional to how many chains you're on... I mean, we don't suspect that, we know the difficulty is inversely proportional to how many chains you're on; we don't know exactly what that function is, but we know that two chains can have a lower difficulty than one. Which is one of the most amazing things, if we turn the conversation quickly to carbon footprint and things like that: one of the great things about Chainweb is that for your hash dollar you're getting more throughput. One of the big problems with something like Bitcoin or Ethereum, and one of the reasons people don't like it when big miners are able to come in, is that it increases the arms race, it increases the energy consumption, and the worst thing about it is that after some point nobody really benefits anymore. You can make an argument that big miners are great because now you've got more people running full nodes, but I think with Bitcoin we're all good, we don't need more nodes, and the difficulty is absurd. In Chainweb you actually get this function where the more chains you run, the more throughput you get for the same hash power, and once you see utilization climbing you can hard fork and add more chains. But the last point I want to make is simply that once you get up to something like a hundred thousand chains at full utilization, we're going to have to start talking about bandwidth, because of the amount of data that's going to be moving around. Chainweb is actually a lightweight protocol in terms of cross-chain traffic: it's just the header streams that have to go between chains, not full transactions. But that's not what I'm talking about; I'm talking about the raw transactions themselves. If we're doing a couple hundred thousand or a million transactions per second on a public chain, whoa, the data consumption is going to be severe. What I worry about is the total hash rate: it stays at a certain level, and as you increase the number of chains across the entire network, if you distribute it equally, it always adds up to that total hash rate.
So as you increase the number of chains, the amount of work that's necessary on a single chain decreases. Yes, and what we don't know is the exact function. So you just find some equilibrium there, right? Well, we tried to find it: the companion Chainweb paper is a math paper, and it's basically an attempt to find a closed-form solution, and there isn't one. So one of the things we have to do before we get to testnet is build a Monte Carlo simulation in which we can actually simulate these things and start getting an idea of what that function is. I'd love to dig into that. I'm the wrong guy for that math; I'd like to have that conversation with the right people, please. That's also somewhere people have proposed other approaches, so it's a fair amount of work, and it's stuff we're basically just starting on now. But that's why we say a thousand chains, ten thousand transactions per second, because that's a lower bound. Bitcoin does around ten transactions per second; we can do that per chain, and that won't be any less efficient than Bitcoin. That's a thousand chains doing ten thousand transactions per second in total. There will be fewer miners at first, because it's a new network, so it doesn't represent a drastic increase; it's just Chainweb's raw parallelization that gets us up to ten thousand transactions per second. But the goal is definitely to be able to make some kind of statement that says, when we hard fork to a new configuration, if the network is twice as big, that doesn't mean we're using double the power. It doesn't mean we're using the exact same amount of power either; I don't know yet. So what's funny is, as you add more, let's just say, I guess, what would we call them, individual chains within the larger network, the braid, it's funny because you can actually leverage the network's computing power itself to sort of optimize for the next iteration.
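As rough back-of-envelope arithmetic behind those numbers, and only under the simplifying assumption that hash power spreads evenly across chains (the exact relationship is what the Monte Carlo work described above is meant to pin down):

```latex
% Per-chain hash rate if the total network hash rate H spreads evenly over n chains,
% so per-chain difficulty must drop roughly in proportion to keep block times constant:
H_{\text{chain}} \approx \frac{H_{\text{total}}}{n}

% Aggregate throughput as quoted above:
1{,}000 \ \text{chains} \times 10 \ \tfrac{\text{tx}}{\text{s} \cdot \text{chain}}
  = 10{,}000 \ \tfrac{\text{tx}}{\text{s}}
  \approx 864{,}000{,}000 \ \tfrac{\text{tx}}{\text{day}}
```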

It feels like there's all this compute power going into what you're doing; it's almost like you could donate it to the network to solve these problems that are going to expand the network as it starts to reach capacity. How does that work? Well, you have all this compute power going into solving the contracts themselves; it could also, perhaps, with every solution, put in a unique proof that says, hey, this is what I found, and it's less optimal or more optimal than the next iteration up, for number of chains or degree or something like that. I don't know, just an offhand thought; maybe I'm not understanding how that would play out. I'm actually looking at the table while I'm talking, the orders of the largest known graphs by degree and diameter. Yeah, it's great, I love that table. But it feels like you could kind of leverage the network to actually optimize the network. Well, and by the way, one of the biggest questions to be answered, the difficulty, is probably one of the easier ones; one of the more interesting questions is do we optimize for more degree or do we optimize for a smaller diameter, because it might turn out that one benefits more. And then how does that relate to throughput and difficulty? It might be that more degree is better; it might be that less degree and a slightly longer diameter is better, and that's going to be one of the most interesting things to find out. There's a lot to play with there. I think that's kind of the issue you're running into: we've made all this progress, but everything's been focused on a small number of knobs at the very base layer, because we're working on one blockchain. As you increase the number of variables you can move in different directions, how you optimize for different sets of those variables, and what that means in terms of end use, becomes a very large space. And then you have to figure out what you want this network to be, so you have ideological things to take into account along with the computational problem of optimization. Yeah, and the thing I'm excited about is that it is more knobs, but it's not an infinite number of knobs, basically like four or five. And yes, that's a lot more, but we know what they are, and we just need to figure out how to talk about the relationships between them. That's the kind of problem I can get with: the dimensionality of it is not huge, but the implications are important. So I actually have one more question before we kind of wrap up here. Something that you've mentioned several times is the Turing completeness of Pact. Yeah, it's Turing incomplete, correct. And I've actually found use cases for that kind of Turing-complete operation, specifically recursion, in Solidity, but I get why you don't have it in Pact. For instance, if I want to do fuzzy matching, I would store something that could be compared by some sort of Hamming distance, and I'd insert it into a BK-tree, which would be best traversed by some recursive algorithm.
Do you intend on including primitives which could handle some of these more common, I guess you'd call them recursive tools? Since you've already got the formal verification inside the language, nobody should have to implement their own version of that, right? Well, you bring up an interesting point. One of the great things about an interpreted language like Pact, and the easiest comparison is the various SQL stored-procedure languages out there, that's the exact same model: you've got interpreted source code, and if you look at how stored procedures are deployed to databases, we're talking about the same thing. The code gets deployed to the database, that code is inspectable, you know exactly what's running and when you run it. And then you've got this rich standard library that you can call. So the thing you just said, the introduction of new natives, that's exactly what we're talking about with something like SPV. Basically, all someone in the community has to do is say, look, I think this is going to be great, and provide the code that implements it, and then probably also get the formal verification people to talk about how it might interface with the formal verification environment, because that's going to be important too. And then you can add great utility to the language without breaking anything that was there before, and maybe even start talking about new ways to compute things. The other thing that's important to note with Pact is that you do have iteration. Pact has filter, map, fold, all the functional approaches. What Pact doesn't have is a for loop. If you want a for loop, you just make a list: you iterate over a list of the size of the loop you want and you map over that. So the point is that it falls under the same heading as recursion, even though it's not recursion.
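A minimal sketch of that style of bounded iteration in the Pact REPL; each of these terminates by construction because the list length is fixed up front:

```pact
;; Functional iteration instead of loops or recursion.
(fold (+) 0 [100 10 5])     ;; sum a list             => 115
(map (+ 1) [1 2 3])         ;; transform each element => [2 3 4]
(filter (< 2) [1 2 3 4])    ;; keep x where (< 2 x)   => [3 4]

;; "Unrolling" a loop of known size: build a list of that length
;; and map over it.
(map (+ 10) (make-list 3 1))  ;; => [11 11 11]
```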

And the idea there is that you can do iterative things; you can build up state iteratively in Pact. You can do it enough to get yourself in trouble, by the way, just not infinite trouble. You can do some kind of combinatorial explosion that's going to use up all your gas easily in Pact; that's not hard to do. It's just going to terminate. It might use up too much gas, but it's definitely going to terminate. So the first answer to that question is: are you sure you can't unroll your recursion? It's important to remember that most recursive things have some kind of break in them anyway, in the sense that there's some point where you give up. So if you can essentially unroll your recursive thing, you can do it in Pact. But the other thing I want to mention is that Pact makes it really easy to do oracle processes, with these things called pacts, which are basically coroutines: functions that stop and start and remember where they're at, so when you come back in, they know. It's a very nice compute model for sequenced transactions, and it makes things like oracles really trivial, really easy to write. So one of the first, most obvious use cases for that, and in general for oracle-based computation, is offloading some really expensive computation to the right kind of compute model for doing it. But the thing to realize is that any time you start talking about fuzzy anything, you do have to take a step back and remind yourself that you are working on a blockchain, which is deterministic. You can probably, in most cases, convince yourself that for your fuzzy algorithm the same inputs always give the same outputs. What you can't convince yourself of is that you can reason about the amount of work you'll be doing on different inputs. The example I like to give is: you've decided you have a smart contract use case that's going to do mapping and find the shortest route from San Francisco to, I'll say LA, but that's dumb because you just take the 5, so say San Francisco to Albuquerque, where I'm from. That's all well and good, but you have no idea in advance how many compute cycles it's going to take. And when you consider that you're operating in a cost-constrained compute environment, I'm really going to start wondering why you're doing this on a blockchain when you could so easily do it off-chain. Look at CryptoKitties, right? They left a lot of the stuff off the blockchain, and you might argue, oh, that's just because it's hard to do these things in Solidity, but honestly, in most of these applications you really just want that seamless interaction with the off-chain compute.
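A minimal sketch of that on-chain/off-chain split for the fuzzy-matching example, with hypothetical keyset, table, and function names: only the fingerprint hashes live on-chain as the trustless record, while the Hamming-distance or BK-tree search runs off-chain against data read back from the chain.

```pact
;; The "fingerprints-admin" keyset data would be supplied with the deploying
;; transaction, as in the earlier keyset sketch.
(define-keyset 'fingerprints-admin (read-keyset "fingerprints-admin"))

(module fingerprints-module 'fingerprints-admin

  (defschema fingerprint
    hash:string)  ;; e.g. a hex-encoded fuzzy hash computed off-chain

  (deftable fingerprints:{fingerprint})

  (defun register-fingerprint (id:string h:string)
    "Record a fingerprint hash under ID; the chain only attests that it was posted."
    (insert fingerprints id { "hash": h }))

  (defun lookup-fingerprint (id:string)
    (read fingerprints id)))

;; Tables are created once, after the module loads:
(create-table fingerprints)

;; Off-chain, a client reads these rows back (e.g. via a local query) and does
;; the Hamming-distance comparison or BK-tree traversal in its own environment.
```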
You want to always be thinking about how you can use off-chain compute. The classic example is something like option pricing: figure out what needs to be trustless, put that on a blockchain, and take everything else off. Well, in this case, storing fuzzy hashes onto the blockchain, that's really the trustless part, and everything else, your use of those, the Hamming distance, is something you could do independently of that. But that's the concept: we're just now getting to the point where we're starting to ask, what should we really be putting on a blockchain, what is a blockchain actually good for, and then how do we optimize the stuff that does the other things really well elsewhere? What we're finding is there's a lot of stuff we put on a blockchain in order to agree on a shared truth. I would argue there's a lot of stuff we want to represent on a blockchain; I'm not sure we want to compute everything on a blockchain. It's the same situation as storage: we don't really store things, from a distributed-storage perspective, on the blockchain, we just store references, like for IPFS, say. Yeah, timestamping in general. All right, I think that's a fantastic way to wrap this up, and I want to thank you for coming on and having a great conversation about what you're doing, as well as the overall kinds of things people should be thinking about when trying to build stuff on top of blockchains or building blockchains themselves. Yeah, it was really great. How can people find Kadena? How can they contribute to the project? Are you hiring, and how can they find you? Yes, people can definitely contribute to the project. Chainweb is just kicking off; we've been working on it for a little while now, and that's open source.

I think the repo is up; I'm not sure yet, and if not, it will be up shortly. Pact is a fully open-source project; it's been open source since November of 2016. There's tons of work going on there, the formal verification work is very exciting, and we absolutely welcome people on both. I'd encourage people to check out Pact because it's very easy to use. We have a web editor so you can try out smart contracts right in your browser, and it's got a great tooling environment; it's very quick to get up and running with Pact and start cutting smart contracts. You can even test them right there with the Pact tool, and you can also contribute to the language. We're at kadena.io, that's our website. You can find all our papers there, you can join our Telegram chat there, you can join our newsletter there, and you can reach me through there as well. I'm very interested in all the discussions happening around this, as we all are at Kadena, and we're really looking forward to testnet and getting these things into the hands of users. Thanks, Stuart. Sure, thank you, it was very fun.
