Hashing It Out

Episode 22

Hashing It Out #22: Casper+Sharding - Danny Ryan

ABOUT THIS EPISODE

Danny Ryan from Ethereum comes on to speak with us about the Casper + Sharding v2.1 specification. Casper + Sharding is a combined effort covering the migration to proof of stake and sharding for Ethereum. We go over the motivations behind the new specification, the research efforts underway, the architecture and design, and even how Ethereum can maintain liveness during a major world catastrophe. There's so much information to cover, and we only scratched the surface. The future of Ethereum looks exciting!

Welcome to Hashing It Out, a podcast where we talk to the tech innovators behind blockchain infrastructure and decentralized networks. We dive into the weeds to get at why and how people build this technology and the problems they face along the way. Come listen and learn from the best in the business so you can join their ranks. Episode Twenty-Two. Hashing It Out. Say hello, Collin. Hello, I'm here with you today as always. As always. So we tried, a couple episodes back, to go through the Casper + Sharding spec that was recently released for Ethereum, to try and figure out exactly how the infrastructure, how the base blockchain, changes with the shards, and we weren't terribly happy with our understanding at the end of it. So in order to fix that, and maybe help you out, because you possibly weren't terribly happy with our conclusions either, we brought Danny Ryan on to help grok through all of this stuff: the complications of it, where it's going and how far along it is, and to give some overall clarity to the situation. So, what's up, Danny? Hey, how's it going? I'm happy to be here. I spend a lot of time ironing out the details of the spec, working on implementations of the spec, helping other teams understand the spec, and coordinating a lot of the development around it. So hopefully I'll be able to answer your questions. Yeah. Actually, I think a good place to start, since some of our listeners haven't heard the previous episode: tell us what this is, what its goals are, and where it came from. It felt like it kind of dropped out of nowhere, and then it was like, oh, we could do this new thing, and it actually makes more sense if we just did all this together. So explain to me the thought process that led to creating the spec in the first place, and what it is. Yeah, absolutely.
So we had two really kind of main scaling efforts going on, from a research perspective and, beginning at the time, from an implementation perspective. These were Casper FFG and sharding. Casper FFG was going to give us proof of stake on the existing Ethereum blockchain, and sharding was going to give us scalability gains by breaking new layers of the blockchain up into different shards. Both of these efforts involved validators. Both involved reward schemes and penalty schemes, and both, at the time, were going to be utilizing system-level contracts on the existing EVM. So I was spending a lot of my time developing the FFG contract, working with the team that was formally verifying the contract, building out the EIP to specify the whole thing, and working with people as they were building it, and we were making a lot of progress. On the other hand, other members of the team and the community were working on the sharding manager contract, the SMC, which was going to be doing a lot of the same things, but for managing the new shards. There are a lot of reasons that we ditched these two efforts for the new design; three are really the top relevant ones. One is that processing cryptographic signatures in the EVM sucks. The EVM is very efficient at certain things, well, a lot of things, but processing signatures was going to be a major bottleneck. Signatures in the sense of: I'm a validator and I want to sign a message saying I believe this is the canonical chain, or this is a recent block hash from the shards. If I'm going to put that message into an EVM contract, I have to process the transaction like normal and then also validate a pretty complex signature in the EVM. This was going to be a major, major bottleneck for both of these systems. So much so that you were seeing that really high ether stake requirement.
We were on the order of one thousand five hundred ETH to be a Casper validator, and this was almost entirely because of the limitations of processing signatures in the EVM. So by putting that high cap, we ensured that if all the ETH in the world staked, we'd still be able...

...to process the signatures in time. So signatures were a big deal. This new beacon chain implementation, which we'll get to a little bit more, uses a different signature scheme, BLS, that allows aggregation off chain and, because we're pulling out of the EVM, allows for efficient processing of these aggregate signatures. Another reason is that we had Casper as a system-level protocol and we had sharding as a system-level protocol: two really kind of competing games, in the sense of an economic game that these players can come in and play. And with that we had vastly different requirements for each game, and at the same time different reward schemes and different penalty schemes, depending on what the requirements of the game were. With two competing games, the worry was that we might get large asymmetries between the two, where, if it's easier from a system-requirements perspective, or the reward schedule is a little bit better, in one of these games, we might get way too many people validating the core protocol and not enough validating the shards, where we get high shard security but the core protocol is not that secure, because of the asymmetry. And then there are kind of unnecessary complications, and there was a lot of work and a lot of organizational stuff going on on both sides. The third reason, one that's a little bit less talked about, less talked about because it's something that came up in my work and was relevant to me being gung-ho about the decision but hadn't really been out in the community very much, was that Casper FFG required that Casper vote transactions could be processed in parallel with existing normal block transactions. The premise under which the Casper contract was written was flawed, and we were going to have to do a major rewrite.
A rewrite, kind of; not of the bulk of the Casper contract, that would stay the same, but it would require a lot of surgery: moving things around, hiding certain things, changing when certain things happen, to allow for this vote-transaction parallelization. And with that, a lot of the formal verification work that had gone on over the past four months was going to have to be pretty much completely redone. So that was going to set back the FFG effort on the order of at least four months. At that point, you push that back, and taking all these other things into account, all the reasons we might want to switch, all of a sudden it really started to look like: if we have a better design, let's go with the better design, let's get this thing out, let's do it right. Do it right first, take some short-term pains for ultimately mega long-term gain. So if you had to guess, say, the Venn diagram of overlapped work that you get to reuse across both those projects, what percentage is that? How much of this stuff do you actually get to reuse? A major portion? So there was overlap in the projects in the sense that both components of the project are managing validators, having to do things with validators and bonded validators. Both of these projects were being worked on as core-protocol projects that were being built as protocol-level contracts in the EVM. The current effort is not using protocol-level contracts for managing things, because that's, quite frankly, too inefficient, and we want to break clean from the EVM. So in terms of research, everything that we've done in the past has been informing the design. In terms of development, this is totally new development. So, this makes a lot of sense, and I'm glad you guys went with this approach.
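As an aside on the signature point above: the reason off-chain aggregation is such a win is that BLS-style signatures are homomorphic, so thousands of attestations collapse into a single check. Here is a toy sketch of that homomorphism using modular arithmetic. It is deliberately insecure and is not the real pairing-based BLS scheme; all constants and names are illustrative, purely to show the shape of aggregation.

```python
import hashlib
import random

P = 2**127 - 1   # a Mersenne prime; the toy group is integers mod P (NOT secure)
G = 7            # stand-in "generator"
rng = random.Random(1337)  # fixed seed so the example is reproducible

def h(message: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % P

def keygen():
    sk = rng.randrange(1, P)
    return sk, (sk * G) % P          # secret key, public key

def sign(sk: int, message: bytes) -> int:
    return (sk * h(message)) % P

def verify_aggregate(agg_sig: int, pubkeys, message: bytes) -> bool:
    # Real BLS does a pairing check; here the homomorphism is plain arithmetic:
    # sum(sk_i * h) * G == sum(sk_i * G) * h  (mod P)
    agg_pk = sum(pubkeys) % P
    return (agg_sig * G) % P == (agg_pk * h(message)) % P

# 1000 validators attest to the same message; the chain verifies ONE value.
keys = [keygen() for _ in range(1000)]
msg = b"attestation: beacon block 42"
agg_sig = sum(sign(sk, msg) for sk, _ in keys) % P
pubkeys = [pk for _, pk in keys]
assert verify_aggregate(agg_sig, pubkeys, msg)
```

The point of the sketch is the cost model: the aggregate is a single group element, so verification work no longer scales with the number of validators, which is what removes the 1,500 ETH-style stake floor.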
So, in order to sort of get the community involved in this, the first thing you did was drop the v2.1 spec for the Casper-plus-sharding initiative, I don't know what to call it at this point. And, you know, a lot of us kind of went through and started to read it, and there were some questions that we had and things that were raised. Tell us what kinds of things happened between 2.0 and 2.1 that we might want to take particular note...

...of. I think one of the things that we particularly pointed out when Corey and I were discussing it was the concept of the proposer and the amount of power that the proposer has. And I'm kind of curious how the evolution has gone from 2.0 to 2.1 today. It's funny, as you ask that question, I'm like, what was 2.0? When did we change the name to 2.1? I'd say one of the big things between 2.0 and 2.1 was the refinement of the different roles in the system we have in the beacon chain, which we haven't really talked much about what this thing is. Let's take a step back and do a bird's-eye view of the architecture of what this is. Right. Okay, so we have a blockchain. We currently have an EVM blockchain. It's a proof-of-work blockchain. It's pretty good. It does what it does. It has some issues with the EVM in terms of efficiency, which we want to move away from, and it has an existing proof-of-work architecture. But it's kind of a plane in flight. There's a lot going on; tons of people are building on this thing, tons of people are relying on this thing for real-world stuff. So what the beacon chain does is it allows us to cleanly break free of the constraints of the existing EVM and build in parallel. It's kind of a conservative approach: we keep the architecture of the existing system, and it allows us to build this new component of the architecture in parallel, get the new component stable and start adding to it, adding the shards, adding more functionality, and then eventually take the existing EVM and roll it into this new sharded system. So the beacon chain really is the core system-level, protocol-level chain of the new sharding system. It's where the validators exist. It's where the validators finalize things.
It's where they get organized to do all their duties on both the beacon chain and on the new shard chains. The general architecture: the beacon chain is a blockchain. There are proposers that propose blocks, a subset of the total validators that propose blocks during a cycle. That's kind of the unit of time, like a block of time, in this system. During a cycle, all validators get to act as attesters to these blocks, to then finalize them in the sense of proof-of-stake finality. Part of the core Casper thing is voting on things, coming to consensus, and finalizing points in the chain. So the beacon chain does this for validators. At the same time, these validators play a game. The design is still being worked on a little bit, but they play a game to create randomness, to create a source of randomness for the whole system. Proof of work is interesting in that the process really does kind of create its own randomness in terms of who the participants are and how they get to play. With proof of stake, you don't really have that extra-protocol source of entropy. So we have to design an RNG, a random number generator, into the protocol that allows us to orchestrate where the validators are and what their duties are at any given time. So there's that Casper mechanism for finality; there's creating a random number for the organizational component; and then, using this random number, we organize the validators into the proposers and attesters, but we also organize the validators across shards. So if I'm validator zero, it might be my duty at this current time to be building shard one hundred, and the RNG is going to slowly shuffle me and my duties around, and everyone else around, such that the system is kind of load-balanced and controlled in that way.
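The duty shuffling described here can be sketched as a deterministic, seed-driven permutation: feed in the RNG output, get the same committee assignments on every node. The real spec uses a different shuffling algorithm; the hash-sort below, and all its constants, are just illustrative.

```python
import hashlib

def deterministic_shuffle(indices, seed: bytes) -> list:
    # Order validators by a hash keyed on the shared seed: every node that
    # knows the beacon chain's RNG output derives the identical permutation.
    return sorted(indices,
                  key=lambda i: hashlib.sha256(seed + i.to_bytes(8, "big")).digest())

def assign_to_shards(num_validators: int, num_shards: int, seed: bytes) -> dict:
    shuffled = deterministic_shuffle(range(num_validators), seed)
    size = num_validators // num_shards
    return {shard: shuffled[shard * size:(shard + 1) * size]
            for shard in range(num_shards)}

# e.g. validator 0 might find itself assigned to shard 100 this cycle;
# a new seed next cycle slowly rotates everyone's duties.
duties = assign_to_shards(num_validators=400, num_shards=4,
                          seed=b"cycle-10 randao output")
```

Because the assignment is a pure function of the seed, an attacker who can bias the seed can bias who proposes where, which is exactly why the RNG design discussed next matters so much.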
So, back to what the beacon chain really is: it's the core infrastructure for eth2.0, and by kind of breaking free from the original architecture, it allows us to be really aggressive and take all these new ideas...

...and implement them in a component of the protocol that lives kind of in the same world, but slightly parallel to the existing protocol. It allows us to totally re-architect, totally create a new consensus, and then from there totally recreate what the execution-layer shards are and how they communicate. And at the same time the EVM can exist in its own world, and only once we've reached a stable place do we figure out how we want to roll the EVM back in. So, we interviewed Dfinity a while back, and I'm assuming, based on the current spec that I've read, the randomness is done through BLS signatures, which is what Dfinity does as well. An interesting conclusion that we came to from that interview was that the purpose of doing it that way is that you're separating state from randomness, which allows you to be very open in how you innovate on the actual state. Because with proof of work, the randomness that you create is dependent upon the state updates of each block, right? It's basically the final hash of all the transactions included in a single block. Whereas the BLS signatures are always going to be random but deterministic, and don't depend on the actual state being updated; the state being updated gets validated, and depends on the eventual randomness of the beacon chain. Correct. So yeah, there's an issue, right, if you were to use a proof-of-work block hash as a source of strong randomness, or the state as a source of randomness: it gives a large grinding opportunity to somebody who might be able to profit off of this randomness going in a certain direction. If I'm a block proposer, I can grind on my proof of work. If I have a large enough mining pool, I could try to, you know, make a block; that's not quite the hash I want, so throw it out, let me make another block.
Make another block, and I might miss some block reward there, but I might have the opportunity to really manipulate things. Right, that's why the randomness is so important. Our system is using BLS aggregate signatures, signature aggregation, and we have massive gains there in terms of minimizing the amount of ETH required to stake and thus maximizing the number of validators that can participate. But we are not using BLS threshold signatures for our RNG, as Dfinity is. Currently, for the implementers, there are a number of teams building this, and the RNG we're just black-boxing right now, so they can assume they have an RNG and build the system. Justin Drake, a researcher on the EF team, is spending a ton of time and resources and effort working on the RNG design. The current direction is to use RANDAO as a weak source of entropy and make it a stronger, less manipulable source of entropy by layering a VDF on top of it, a verifiable delay function. I'm sure you've seen tons of murmurs about VDFs, these new randomness tools that might require weird hardware, and we can get into that. There's a reason that we're shying away from the Dfinity design, and it's that one of the design requirements of Ethereum 2.0 is that it can survive World War Three. So, two requirements: it can survive World War Three, and it's able to change, able to be quantum secure, in the five-or-ten-year time horizon. BLS threshold signatures are not quantum secure, and they require an in-protocol threshold amount of validators to be online; I think fifty percent is their number.
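The RANDAO "weak entropy" layer mentioned here is essentially a commit-reveal game: proposers commit to secrets ahead of time, then reveal them, and the chain mixes the reveals. A minimal sketch, where the hash choice, mixing rule, and all names are illustrative rather than the spec's:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Commit phase: each proposer registers a hash of a secret ahead of time.
secrets = [f"proposer-{i}-secret".encode() for i in range(5)]
commitments = [h(s) for s in secrets]

def mix_reveals(reveals, commitments) -> bytes:
    # Reveal phase: the chain checks each preimage against its commitment,
    # then XORs the hashed reveals into a single 32-byte entropy value.
    acc = bytes(32)
    for secret, commitment in zip(reveals, commitments):
        if h(secret) != commitment:
            raise ValueError("reveal does not match commitment")
        acc = bytes(a ^ b for a, b in zip(acc, h(secret)))
    return acc

entropy = mix_reveals(secrets, commitments)
# Weakness: the last revealer sees everyone else's contribution and can
# withhold theirs to bias the output -- the bias that layering a VDF on
# top of RANDAO is meant to squeeze out.
```

The sketch also shows why this survives partitions better than threshold signing: any subset of honest reveals still mixes into an output, rather than the whole RNG halting below a participation threshold.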
So with that threshold requirement, if the network is majorly partitioned, World War Three, they can't then create their RNG, and their system halts. You know, they'd probably hard fork and coordinate around that, but they do lose liveness in that scenario. That is a very awesome set of goals. Not to sound facetious, but it's reasonably ambitious: we are designing the next stage of trust, and if you can't trust that the network will survive...

...a major catastrophe, then it's not resilient enough to actually be deployed. So that's kind of one of the interesting things. I personally thought Dfinity would be a good layer-two solution. I mean, they're a super awesome project, with slightly different designs and different trade-offs in terms of their RNG. We're just going down a different rabbit hole in designing the RNG, but we've come to the same conclusion: you have to have a strong in-protocol RNG to design these systems appropriately. So, before we move on and get a little deeper into the beacon chain, I'd like to move a little bit more into the architecture of the shards. I currently work for Status, doing security for Status, and as is well known, we recently started a sharding client called Nimbus. That team is working quite a bit on trying to implement this spec, and I asked the guys if they had any specific questions for you. The main one was: with the beacon chain taking up so much of the mind share of development, what are the plans for the rest of the sharding infrastructure that you see put in place? Right, so the beacon chain can be rolled out in phases, which is exciting from a kind of iterative development perspective, but those first couple of phases don't even have sharding. The beacon chain can exist, and the validators can be organized to do their duties, kind of simulate finalizing shards even though the shards don't exist, finalize the beacon chain, and create the RNG. That can all be, like, a phase zero, where we've gotten the core proof-of-stake architecture and RNG in place, and from there we can add the shard chains. The next phase is probably going to be adding some amount of shard chains to the beacon chain and the validator duties.
These shard chains, at that point, won't even have state execution, so they'll be what we've been calling data blobs. In this design we're moving towards decoupling the data layer and the state-execution layer. So first let's come to consensus on data; then we can come to consensus on the execution and state that the data brings us to. So in terms of the architecture, we have shard chains that are just chains of data. Validators begin building these things, and they stuff them with data. They can do one of two things: all shard blocks are the same size, so they can stuff these blocks full of zeros or whatever they want, or there might be a secondary market in terms of utilizing the data layer of the shard chains, for, say, a decentralized Twitter or something. So somebody could begin to pay for the utilization of this consensus data layer. And then, and I mix up the phases, I don't think the numbers are truly that meaningful, but I think it's in about phase two that we bring in state execution. So we bring in a state machine, like the EVM, but instead we're moving towards an eWASM construction: a lot of the same goals, but utilizing a much more efficient underlying architecture. So then we bring in state execution, and at this point, this is where things get interesting. This is where you start actually having what looks like, in our minds, a functional blockchain, in the sense that we can resolve the resulting state of bundles of transactions. And this is when you start facilitating that cross-shard communication. The focus right now of these implementing teams is on the beacon chain, though, because we're going to be rolling things out in phases.
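A phase-one "data blob" shard block, as described here, is just opaque bytes padded to a fixed size, with no execution attached. A sketch with a made-up size constant (the real spec's block size is a protocol parameter, not this number):

```python
SHARD_BLOCK_BODY_SIZE = 16 * 1024  # hypothetical fixed body size, not a spec value

def make_blob_body(payload: bytes) -> bytes:
    """Pad opaque data to the fixed shard-block size; no execution, no state."""
    if len(payload) > SHARD_BLOCK_BODY_SIZE:
        raise ValueError("payload exceeds the fixed shard block body size")
    return payload + bytes(SHARD_BLOCK_BODY_SIZE - len(payload))  # zero-pad

# A proposer can stuff the block with zeros, or sell the space, e.g. to a
# hypothetical decentralized-Twitter client paying for the data layer.
body = make_blob_body(b"decentralized-twitter post: hello world")
```

Fixing the block size is what lets consensus and data-availability checks treat every block identically, regardless of whether the bytes are meaningful to anyone yet.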
There's definitely a ton of research going on on phase one and phase two, and actually the general design of phase one, in terms of data blobs, coming to consensus on data, data-availability proofs and things like that, hasn't changed since early this year. Phase one and phase two of the design existed with the sharding manager contract. The sharding manager contract in the previous design served some of the same roles as what the beacon chain does now, in that it organized the core infrastructure, but the actual shards that were going to exist on the sharding manager contract operate in a very similar way to what's going to happen in phase one and phase...

...two. So we're feeling more and more confident about the stability of the beacon chain design, and given that our phase-one data-sharding design has been so stable for so long, we're feeling good and confident about that too. In terms of phase two and bringing in the execution layer, there's a ton of really active research on execution, delayed state execution, eWASM, cross-shard communication; this is a super active discussion on ethresear.ch. But there's actually now a team, I think a subsection of the eWASM team, and maybe they have some crossover with the Ethereum JS team, I can't remember, but they have a lot of crossover with the eWASM team, and they are working on black-boxing phase zero and even phase one of sharding, and working on a prototype of phase two. So they're starting from the other side, saying: okay, assume we have consensus on data, assume we have shard chains with data, let's start building the execution layer. And so in that sense, I imagine members of the Nimbus team and members of some of these other beacon chain teams probably feel like they have a little bit of blinders on, unless they're really digging into the research about the future state-execution stuff. But it is definitely actively being worked on, and something that I'd like to see, over the coming handful of months, make a lot of progress and get a lot more clarity. Awesome. I've got one more piece to this bird's-eye-view puzzle of what the architecture looks like, and that is the transition mechanism from the current EVM to the beacon chain, and how that works for both the underlying ether assets as well as maybe smart contracts and their state. Correct. Let me add a question onto that really quick.
I notice that you're still requiring the stake to come from the proof-of-work chain; how does that relate to all this, and how does that work out? Right, so I'll answer your question first, and then we can talk about the general difference between the two systems and how they relate to each other now, in the midterm, and maybe in the future. So currently, no ether exists on the eth2.0 side of the protocol. All the ether exists in the EVM. We know that; this is a fact. So we need a way to open up this new version of the protocol, this new component of the protocol. And to do that, similarly to how we were going to have a system-level contract for FFG or the SMC, we're just going to have a system-level registration contract, where we're probably going to have one function, register. It's going to take my initialization values for being a validator: my BLS public key, the thirty-two ETH, a withdrawal address, etc. It's going to take in that information, take in the thirty-two ETH, validate some of the info, and then broadcast a receipt, a log. And the beacon chain is going to be a light client: it needs to be able to read those receipts from a light-client perspective, and those receipts are going to be used to induct validators into the beacon chain. So there's directionality there. And note, the withdrawal address of a validator is to a shard. So with this mechanism there's a directionality: we have these two systems living in parallel, but the ether can go from system one to system two, and not necessarily back, at least in the short to medium term. And so that's why you do have this contract still coming into play. And Corey, your question was? Give me it again. You basically answered it, in a way.
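A rough Python model of the one-way registration contract described here. The function name, the field set, and the 48-byte public-key length are assumptions for illustration; the real contract would live on the EVM chain (written in something like Vyper) and broadcast its receipts as logs.

```python
DEPOSIT_SIZE_ETH = 32  # the fixed validator deposit discussed in the episode

class RegistrationContract:
    """Toy model of the one-way eth1.0 -> beacon chain registration contract."""

    def __init__(self):
        self.receipts = []  # the real contract would emit these as EVM logs

    def register(self, bls_pubkey: bytes, withdrawal_shard: int,
                 withdrawal_address: bytes, value_eth: int) -> dict:
        # Validate some of the info, take the 32 ETH, broadcast a receipt.
        if value_eth != DEPOSIT_SIZE_ETH:
            raise ValueError("deposit must be exactly 32 ETH")
        if len(bls_pubkey) != 48:  # 48-byte pubkey length is an assumption here
            raise ValueError("unexpected BLS public key length")
        receipt = {"pubkey": bls_pubkey, "shard": withdrawal_shard,
                   "address": withdrawal_address, "amount": value_eth}
        self.receipts.append(receipt)  # read by the beacon chain as a light client
        return receipt

contract = RegistrationContract()
receipt = contract.register(bls_pubkey=bytes(48), withdrawal_shard=5,
                            withdrawal_address=b"\x01" * 20, value_eth=32)
```

Note the one-way directionality: nothing in this interface moves ether back; the withdrawal destination is a shard address on the new system.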
And my question, one part, was: how do we get ether from the main chain over? Yeah. And also, how does that then work with the current state of the EVM moving into the new shards? Yes. So I actually answered a question on ethresear.ch about this. For the validator, that's the directionality: if I'm a validator, I can move ETH out of eth1.0 and into eth2.0. But there was a question on ethresear.ch this morning that was: okay, we can move thirty-two...

...ETH chunks this way, but what about if I have less than that, or I don't want to be a validator? How do I move my ether over? I imagine, probably after the phase-zero launch of the beacon chain, someone should write an EIP that proposes a different system-level contract that's just a one-way deposit, a one-way transfer from eth1.0 to eth2.0. Similarly, it would broadcast a receipt, and it would have a destination shard and a destination address, and instead of being picked up by the core system chain, it would be picked up as a transaction on one of these shards. And so you would need that phase two, you'd need state execution on the shards, to really have the ether over there. So that's going to be a little bit further down the line, and it's the same thing for contracts and their state as well. Contracts are a little more complicated. I haven't spent too much time thinking about actually porting state, but I can imagine taking a snapshot of a given state, associating the contract with whatever new shard, transferring that state over, and then nullifying the original state on the EVM. Right, but who has the right to do that? The owner, whoever owns the contract, basically? Well, we can't assume, yeah. You know, some mechanism like that that people just kind of use would get us over in the short term. The state of eth2.0 exists independent of the state of eth1.0. In the long term, I would like to see one of two things happen, the second of which I'm more excited about. The first is, once you have a stable sharded landscape with state execution, you roll the EVM in as, you know, the last shard, an exceptional shard.
You know, if I get organized to be a validator on what becomes the EVM shard, then I have to do things the EVM way; I have to run the EVM state execution and process EVM transactions, but it could live in the same landscape. This is a little bit dirty in the sense that we still have an EVM, but now we have eWASM too, and in terms of long-term managing of the software, it's probably not ideal. We'd like to break free ultimately. The second, and I haven't spent too much time thinking about it, but it's a little bit more of an exciting idea for me, because it allows us to cleanly break free of the architecture of the EVM, is to take the entire state of the existing EVM and drop it into a contract on a shard. eWASM is a beautiful, really cool thing, and you can do a lot of transpiling between these different things, so this is theoretically possible. I can't say that I've fully vetted the complexity of doing it, but there's definitely a handful of researchers who are more excited about the notion of forking the entire current EVM state into a contract on a single shard than about rolling it in as its own shard chain. So, on a single shard, one shard would be the legacy shard, basically? Well, right, if the EVM became its own shard, it would be a legacy shard. But I might instead take the EVM and drop it into shard zero, or shard one thousand twenty-three, at a certain address, and then you just have to make an interface to hit this contract. But that address would then be an attack point, because someone could maybe manipulate the contract; I mean, it feels like somebody could gain control of it. Who's deploying that contract? Oh no, that would be a hard fork.
You would take a snapshot; like, the community would decide to take a snapshot of the existing EVM chain and place it into a contract on a shard chain. It would be a regular state change, similarly to deploying, say, the FFG contract or the SMC as a system-level contract: it would happen at a fork, it would be a regular state change, and it would put it at a particular address. There are a lot of weird design things to think about, like what's the interface to this contract and how do you validate things. I...

...mean, it's not, yeah, like, I feel like if somebody stumbles upon the key that would own that contract... Yeah, but how would they check that? How would they stumble upon that key, if it's at, say, address one? No one has the key to address one. No one ever will have the key to address one. Cryptography! Our entire system is based upon the fact that you can't arbitrarily find the key to a random address. That's right, and your new system is going to be quantum secure? Correct? So, well, I can't say quantum secure; the direction we want to move in is to allow for things to be quantum secure. The signature scheme, when we have everything, is kind of designed around the idea that a lot of these signatures can be wrapped up into STARK-friendly hashes, which are quantum secure on the order of the five-to-ten-year horizon. Got it. The current scheme is being designed in such a way that it can be replaced very, very seamlessly with quantum-secure algorithms. The STARKs are not there in the short term; they're actually becoming a lot more efficient in terms of size, but in terms of just the ability to build them and to use them, we're still three to five years from really getting traction on that. But in that five-year time horizon, the goal is definitely to switch out a lot of these components for more STARK components. I think one of the things that I was left fuzzy on when I first looked over this stuff, and other people had the same thing, was: is the number of shards dynamic, and if not, how is that determined in the first place? Right, so the current spec has one thousand twenty-four. This is a function of the amount of time available: shards have to be cross-linked back into the beacon chain, so recent references from shard chains are brought into the beacon chain to be finalized.
When a recent reference from a shard chain is finalized in the beacon chain, that serves as the basis of its fork choice rule, and so the fork choice rule of all the shard chains is premised on and linked into the beacon chain. Sorry, I totally dropped the question. It's not dynamic, as you said. The number is a function of this: we have to sign messages about these shard chains, and it takes some minimum number of validators, depending on the total validators, to bring these messages back in. And so if we have a theoretical maximum of validators, which is a function of the thirty-two ETH deposit and the total ETH in play, we now have the maximum number of signatures we've got to process, which is also going to be a function of the total number of shards. So we've limited the number of shards to one thousand and twenty-four so that if all ETH participated in this mechanism, we would still have enough time to process the signatures. It is not dynamic. It is something that in the future you could add to; it would be a hard fork, if that's something the community wanted to do and something we had vetted as feasible in terms of processing enough signatures. What kind of throughput does that allow? Have you done any sort of analysis on the maximal scaling? How many transactions do we think we'll be able to get through with this, given a certain block size? You're looking at on the order of a thousand x, but you're also replacing the virtual machine. So, assuming you're doing similar work in the new virtual machine, you're also looking at some multiple of gains. So you're looking at that thousand x times whatever efficiency gains you're getting from the new virtual machine. You also get a little bit of gains in that the timing of events is more in lock step because of the proof-of-stake nature.
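A rough back-of-the-envelope for the numbers Danny describes here. This is only a sketch: the 32 ETH deposit size and 1024 shard count come from the discussion, while the total-supply figure is an illustrative assumption.

```python
# Sketch: why the shard count is bounded by signature-processing capacity.
# Assumes ~100M total ETH (illustrative) and the fixed 32 ETH deposit.
TOTAL_ETH = 100_000_000      # assumed total supply, for illustration only
DEPOSIT_SIZE = 32            # fixed validator deposit (ETH)
SHARD_COUNT = 1024           # fixed in the spec

# Theoretical maximum validator count if every ETH were staked:
max_validators = TOTAL_ETH // DEPOSIT_SIZE

# Worst-case attestation load attributable to each shard, which is what
# caps the shard count at a value the chain can still process in time:
signatures_per_shard = max_validators // SHARD_COUNT

print(max_validators, signatures_per_shard)
```

The point of the arithmetic is that because deposits are fixed-size, the maximum signature load is known in advance, so a static shard count can be chosen conservatively against it.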
We also don't have that Poisson process anymore, and so there are some marginal gains in moving to proof of stake as well. They're on the order of one to some multiple of a thousand x. So we're talking about, like, millions of transactions? Ethereum runs about five hundred thousand right now. So five hundred million, roughly. What, am I... per day? Right, per day. So you're saying per day. Per day. I remember last year at Devcon Three, when Vitalik was giving his sharding talk, kind of a future-of-the...

...roadmap to Ethereum. He talked about shards, with each individual shard kind of being the test net for future upgrades that could be rolled back in. So each individual shard kind of has its own rule set associated with it that feeds back into the beacon chain. Is that something that's being scrapped? Or are you planning on all shards having the same core architecture, excluding maybe the legacy shard, if you end up doing that? Yeah, same architecture. The complexity that arises from heterogeneous sharding is just massive: dealing with different rule sets, the consensus on all these different shards, and moving validators across shards. That is not the test bed for layer one, and in the long run that's probably going to be layer two. If there's some radical, awesome thing happening in terms of things we want to integrate into layer one, it's probably going to happen after sharding is rolled out, probably in layer two, and maybe it'll stay in layer two. But if it's something so good that we want to integrate it, then it would probably move into layer one rather than using, you know, shard one thousand twenty-four or shard zero as a test bed. It's kind of a fun idea; you could imagine a fully functional shard that people use in certain ways, but the complexity quickly becomes a pretty intractable problem when you're dealing with the consensus on these things. Colin, you look like you have some questions over there. I have so many questions. I'm trying to figure out what has been said and what hasn't, so I don't repeat... I've got highlights all through the sharding spec. I go through trying to figure it out. So I've gone over the validators and the proposers, so being in the main PoS chain, the actual PoS chain. How do you...
So, I think one of the problems we originally saw was that proposers had a great deal of power in the previous system, meaning there were ways that proposers could influence things, and I know that was addressed. I was wondering if maybe you could talk about how you addressed that: what kind of proposer issues you noticed, and what the changes were, because that was one of my main concerns when I read the 2.0 spec, and I've monitored ethresear.ch enough to know that you guys saw it and addressed it. Right. So, previously, the main problem: the proposer on the beacon chain is a player in the randomness game. During a cycle, which right now is sixty-four blocks, we have sixty-four previously chosen proposers. They're going to play this RANDAO game where they're essentially revealing a random seed that they previously committed to, and the randomness for a future cycle is then going to be dictated by the XOR of these revealed seeds. There ends up being a pretty massive problem here, in that if I'm the last proposer, I can look at what the previous proposers' reveals were and then decide to reveal or not. I can't just make up a number, because I previously committed to my number, but I can decide to show up or not, and that's going to influence the randomness. Now, if I'm the last two proposers, I have two options to do that; if I'm the last four, I have more. And then if I can manipulate the randomness now, I might be able to continue to manipulate the randomness in the future. And if I can manipulate the randomness of the system enough, I can now maybe allocate a lot of my resources to a single shard.
And a single shard has only, you know, one one-thousandth of the validators, and so it's a lot easier to attack, a lot easier to gain control of. And so hardening up the randomness from the base RANDAO scheme is very important, and we've been moving in the VDF direction to harden up RANDAO. But another thing the team is spending a lot of time on is: okay, let's say somebody can get hold of this, can be that last proposer, can manipulate the randomness to a certain extent. How can we reduce their ability to do so and reduce the effect they have? And so you've seen what was called RPJ, which is a proposed fork choice rule based on recursive proximity to justification. It's recently been renamed to IMD GHOST, where IMD is immediate message driven.
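The commit-reveal RANDAO game and the last-revealer bias described above can be sketched as follows. This is a simplification: real seeds are hash preimages in a longer chain and the spec's mixing differs in detail, but the bias mechanism is the same.

```python
import hashlib

def commit(seed: bytes) -> bytes:
    """A proposer commits to hash(seed) ahead of time."""
    return hashlib.sha256(seed).digest()

def mix(reveals):
    """The cycle's randomness is the XOR of all revealed seeds."""
    acc = bytes(32)
    for r in reveals:
        acc = bytes(a ^ b for a, b in zip(acc, r))
    return acc

# Proposers commit ahead of time, so nobody can change their number later:
seeds = [bytes([i]) * 32 for i in range(1, 4)]
commitments = [commit(s) for s in seeds]

# If everyone reveals, the output is fixed:
honest_output = mix(seeds)

# But the LAST proposer sees everyone else's reveals first, so it can
# choose between two outputs: reveal, or abstain and withhold its seed.
biased_output = mix(seeds[:-1])   # output if the last proposer skips

print(honest_output != biased_output)  # two distinct outcomes to pick from
```

With k proposers at the tail colluding, the attacker gets 2^k candidate outputs to choose between, which is exactly why the episode discusses VDFs and fork-choice hardening.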

Those are the same thing, so if you see those two names, it's actually just a renaming. So this fork choice rule tries to limit the power that any proposer can have, and also the ability of proposers to run short-range forks and change the beacon chain fork choice. The design goal is that the current fork choice should be a good predictor of the future fork choice. That is, if a block right now is included in the canonical chain as people see it, it should, in most scenarios, in most likelihood, be included in a future version of the fork choice. That's stability. And so this fork choice rule is designed for stability, and it ends up being a lot harder for proposers to bypass each other, or for a string of proposers to manipulate the fork choice in order to manipulate the randomness. So that's one of the components: this fork choice rule. Another component is the VDF hardening of RANDAO, which is still definitely up for debate. And there were some other design decisions around the separation of concerns between the power the proposer has versus the committee, and a lot of that has been the combination of this fork choice rule with an assumed honest majority in committees, which are subsections of validators that then attest to blocks, taking a lot of the power out of the proposer's hands. And actually, the fifty percent honest committee is not a crazy assumption, in that if we assume a two-thirds honest majority, or even a two-thirds rational majority, in our validator set, then these committees are small samplings, of at least, I think, a hundred and twenty-eight validators, of the total validator set. You easily get to that half, that fifty percent, in the committee.
Like, you know, a one-in-a-billion chance or something for it not to have that fifty percent in the committee. So again, removing power from the individual proposer through this shuffling and these tweaks, and shifting it toward a larger, more likely honest committee. So let's talk about World War Three then. In the event of a major war, which, I hate to be a negative Nancy here, but given all of human history it would be really unlikely for something like this never to happen, and I think it's interesting that you guys are definitely designing toward it: the Internet will be cut off between certain nation states. Now, if one of those nation states has a significant amount of staked ETH in the validation scheme, that could allow other nation states, or the rest of the world, to manipulate what's going on in their version of the chain. How would we bring things back together in the event that one nation state's proposers have more power than another's and can manipulate things? So, network partitions. Yeah, when we talk about World War Three, we're talking about being able to survive major and long-term network partitions, and also being able to stay live, in the sense that even if there's just one validator left on the chain, it can keep pumping. I like it. With one validator left in their own partition, you can keep building the chain. This is solved primarily through, one, not having any component of the system that requires some percentage of validators to be online for liveness, so the chain can continue forward; that's why we've gone in a different direction for the randomness. Another thing that saves this: we need two-thirds of validators participating in the consensus to finalize the chain.
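The committee-sampling claim above can be checked with a binomial tail. This is a simplified model: real sampling is without replacement from a finite validator set, and the exact odds depend on committee size and the honest fraction, so the numbers here are illustrative rather than the spec's own analysis.

```python
from math import comb

def p_committee_minority_honest(committee_size: int, p_honest: float) -> float:
    """Probability that at most half of a randomly sampled committee is
    honest, modeling each seat as an independent draw (binomial model)."""
    half = committee_size // 2
    return sum(
        comb(committee_size, k) * p_honest**k * (1 - p_honest)**(committee_size - k)
        for k in range(half + 1)
    )

# With a 2/3-honest validator set and 128-member committees, the chance
# of drawing a dishonest-majority committee is a tiny tail probability:
p_bad = p_committee_minority_honest(128, 2 / 3)
print(f"{p_bad:.2e}")
```

The takeaway matches the discussion: sampling ≥128 validators from a two-thirds-honest set makes a dishonest-majority committee vanishingly unlikely, and the probability drops further as committee size grows.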
But on the order of... I know this was recently changed in the spec, let me look real quick what this time is. Okay, so it's currently...

...set to approximately twelve days. If a chain split happens for twelve or more days, and we have two splits, say fifty-fifty on both sides, they can both still be live, they can both still build a chain, they can both still do what they need to do, except they cannot finalize. But we have this exponential bleed-out, or drop-off, of offline validators. So from the perspective of split A, we have the online fifty percent of validators, and on the order of twelve days they will become the two-thirds majority and begin finalizing again. And on the same token, on split B, we have the same thing happening, where the other fifty percent continue to build the chain, continue to process transactions, and continue to validate, and the offline fifty percent, from their perspective, will bleed out on the order of twelve days, and then they'll begin to finalize. So World War Three happens: we have a massive network partition, and we have validators on both sides. If the partition resolves in less than twelve days, these validators can rectify and build on the same chain. You might have some interesting things happen there: they might actually propose to fork out what happened, or all sorts of crazy things could happen. But if they do nothing, the chain will resolve and be one chain. If twelve days pass, the chains will have finalized separate histories and will now be two different chains. In that case, say a year passes and the network partition ends: we now have two Ethereum chains, and from the Ethereum design decisions, that was the expected scenario. We have two Ethereum chains because we prioritize liveness. Otherwise you have the chain go down after twelve days... I mean, you have the chain go down immediately, because you don't have liveness, and you'd probably fork anyway and have two chains anyway.
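The bleed-out on each side of a partition can be sketched as a toy simulation. The decay constant and the day-level granularity here are illustrative assumptions chosen to land near the ~12-day figure, not the spec's actual leak schedule.

```python
def days_until_finality(online: float, offline: float,
                        daily_decay: float = 0.944) -> int:
    """Toy inactivity leak: offline balances decay each day until the
    online validators hold a 2/3 supermajority of the remaining stake,
    at which point finality can resume on that side of the partition."""
    day = 0
    while online / (online + offline) < 2 / 3:
        offline *= daily_decay   # offline stake slowly bleeds away
        day += 1
    return day

# A 50/50 partition: each side regains finality on the order of ~12 days.
print(days_until_finality(0.5, 0.5))
```

Note the symmetry: both sides of the partition run this same logic against the validators they can't see, which is exactly why two finalized histories emerge if the partition outlasts the leak.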
And so this is kind of an in-protocol mechanism that allows for liveness in the case of a network partition. Let's bring the scope back a little bit and look at what it means to be a single validator right now. I guess Evan Van Ness put in Week in Ethereum that it takes around six thousand dollars currently to become a validator on the beacon chain. That's gone down a little bit, or it'll vary depending on prices. It's reasonable, right? But all right: what are the resource and liveness requirements for a single validator, and what are the consequences if, say, I have a power outage and my validator shuts off for a certain amount of time? Right. So a really cool thing about this design is that we've moved to fixed-size deposits rather than a minimum deposit. You don't play with thirty-two ETH or more; you play in thirty-two ETH increments. If I'm one validator, I am thirty-two ETH. If I want to play with more than thirty-two ETH, I can be two validators, so sixty-four, and so on. We've done this for a number of reasons. Internally, it's actually a lot easier in terms of accounting and organizational things: if I just have a list of validators in the protocol, I can move them around the protocol and assume that wherever I put them, they're providing the same amount of security, the same weight. Otherwise there's weighting logic and a lot of things you'd have to do. Correct, correct, so it makes it easier from our perspective. But it also has a really nice property in that the resources required to validate scale linearly with the amount of ETH I participate with. So say a validator is required to be building, at any time, on the order of one to two shard chains.
If I'm now two validators, I'm required to be building on the order of two shard chains. There can be a little bit of overlap in the shuffling, but you can just think of it as one: a validator is always responsible for one shard chain. If I want to play with sixty-four ETH, I'm now responsible, at any given time, for two shard chains. But my required resources are scaling linearly, and so by adding more ETH that I want to play with, I have to add more resources to help out the network. Maybe now I have two separate nodes, or I have one node that has more resources. And so that...

...ends up being this nice linear scaling property that, without taking it into account, you kind of lose. It's like proof of work: the more you're participating in and adding to the network, the more computational and probably networking resources you have to add to the network. And so we get that property by scaling linearly. Now, what are those resource requirements? For every validator I need on the order of C resources, where C is the standard resources of a consumer computer. And so if I am validating with a thousand times thirty-two ETH, I need on the order of a thousand C, so on the order of a thousand standard computers' resources. Now, what happens if I go offline? When I participate in the protocol, I slowly gain reward. When I go offline, or if I'm censored (you can't tell the difference from an in-protocol standpoint), I slowly lose reward. If I'm offline and finality is not occurring, then I slowly lose rewards, and it ramps up over time. So if I go offline for a long time, I'm losing a little bit, and losing a little bit, and then I hit that curve, that exponential curve, and I bleed out pretty much entirely. So if you have a machine at home and it goes offline for an hour, you're probably fine. If your machine at home goes offline for six days or a month, you're in trouble; you're going to start losing a lot of money. And, to be clear, is that losing a lot of money in terms of what my deposit is, or in terms of potential gains? In terms of losing your deposit, that's slashing... no, at this point you're not being slashed. Slashing is when you've done something very nefarious and we take your money and eject you from the validator set. Being offline is a lot slower of a bleed, but it is taking away from that deposit, not just potential gains.
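The fixed-size-deposit accounting above can be sketched in a few lines. This is a toy model: the one-machine-per-validator "C" unit and the one-shard-per-validator mapping follow the conversation's framing, not exact spec constants.

```python
DEPOSIT_SIZE = 32  # fixed deposit per validator slot (ETH)

def validator_slots(staked_eth: int) -> int:
    """Fixed-size deposits: you stake in 32 ETH increments,
    and each increment is a separate validator slot."""
    return staked_eth // DEPOSIT_SIZE

def required_resources(staked_eth: int, c_units: float = 1.0) -> float:
    """Resources scale linearly with stake: roughly one consumer
    machine ('C') per slot, each building on ~one shard at a time."""
    return validator_slots(staked_eth) * c_units

print(validator_slots(64))          # 64 ETH  => two validators
print(required_resources(32_000))   # 1000 slots => ~1000 C
```

The design choice this illustrates: because every slot is identical weight, the protocol can shuffle validators freely, and a large staker's hardware burden grows in proportion to their stake rather than being amortized away as it is in proof of work.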
Like, if I turn my computer off and walk away, I lose whatever's a part of that stake, instead of keeping all the money that's associated with that validator when I turn it off. Right, and if a high portion of the network is still validating and finalizing, you're losing a lot less money than if fifty percent goes offline at once, but you are losing money over time by not being online. So there are a lot of considerations. What does a good validator setup look like? One computer at my house? Maybe, if you have one validator and you're around a lot and you can monitor it. But maybe your setup looks more like: I have three machines in different locations, and these machines can talk to each other and come to a consensus before they sign anything. So maybe I need two of my three machines to sign messages, and they have to agree, and then they broadcast, and if one of them goes offline, I'm fine. I'm really curious to see what the different setups are, what the different solutions are, and how people choose to address this problem of staying live, because it is different from proof of work. In proof of work you just have opportunity cost when you're offline; you don't lose your investment. Your machine doesn't slowly break down just because you stop validating. Well, it might, right, given long enough, but not because you stopped. Whereas with your thirty-two ETH, once you stop validating, it's kind of like your machine starts breaking down. Yeah, yeah, that's a different game. I'm still not clear on... maybe you can help me out. I stake thirty-two ETH in a contract on the proof-of-work chain: hey, this is my validation, I'm there for good? Or is there the ability to stop validating?
Right. So we can set it up so that if I drop below a certain amount, I'm automatically pulled out. Because we're doing this assumed thirty-two ETH accounting, where I can assume everyone's kind of the same, we've been discussing a mechanism where if a validator drops to a certain threshold, they are kicked out. In that sense it's a bit of a protective mechanism for the validator, and it's also protective for us, because we can still operate under the assumption that all validators are the same weight. So in that case, say you set it at...

...well, that's still to be determined. But because we want the property of being able to assume everyone's the same weight, we also need this ejection mechanism. And so I did misspeak a little bit: validators are never driven to zero. You're driven to that minimum stake required to still be a participant, and then you're ejected. You enter with exactly thirty-two ETH, why? A stake of thirty-two ETH, and then the minimum is below that. That, to me, makes more sense. We've been talking about thirty-two ETH because that's your stake in the game; that's like saying, hey, if I drop all of this, I'm eradicated, so I need to make sure that my stuff is in order, otherwise I'm a bad validator. Right. What we're proposing is pretty much the same thing: say the absolute minimum you can still operate with is twenty-eight, but you have to come in with thirty-two. I know Justin is a proponent of being able to top off. So, say you were offline and you've bled down toward twenty-eight, but you want to keep playing and you don't want to risk being ejected and then going through a four-month withdrawal period, which would just be the standard logic. You might then have the opportunity to top off, add two or three ETH, and get back to thirty-two or even thirty-three. So you have the right intuition that there's this theoretical minimum, but we want everyone to enter, and operate, above that theoretical minimum. Yeah, because all your math is based around the thirty-two ETH representing a certain percentage of the total supply, I guess I should say of circulating supply, of the total ether available on the network, and that's all based off the thirty-two, right?
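The ejection threshold and the "top off" idea discussed above can be sketched as below. The 28 ETH floor is the number floated in the conversation, and the helper names are hypothetical illustrations, not spec functions.

```python
DEPOSIT_SIZE = 32      # required to enter the validator set (ETH)
EJECTION_BALANCE = 28  # floor floated in the discussion; eject below this

def is_ejected(balance: float) -> bool:
    """A validator that bleeds below the floor is removed from the set,
    so the protocol can keep treating every active validator as equal weight."""
    return balance < EJECTION_BALANCE

def top_off(balance: float, amount: float) -> float:
    """Hypothetical 'top off': add ETH to climb back toward the full
    deposit rather than risk ejection and a long withdrawal period."""
    return balance + amount

balance = 28.5                   # bled down while offline
print(is_ejected(balance))       # still in, but close to the floor
balance = top_off(balance, 3.5)  # back to a full 32
print(is_ejected(balance))
```

Note the asymmetry this encodes: entry always costs the full 32 ETH, but survival only requires staying above the floor, with topping off as the escape hatch in between.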
So wouldn't that minimum stake actually need to be a little above thirty-two, with thirty-two as the threshold you shall not go below, so we can ensure the math is still correct? Where am I misunderstanding that? So the math is such that if all validators are at thirty-two, then you can still process all the signatures in the correct time. Validators still have to participate; they start at that thirty-two. But if they bleed out to twenty-eight and then leave the validator pool, that four ETH is gone. It's burned. That's interesting... so you're actually reducing supply? Yeah. Like, why not distribute that as network rewards? Why is it burned? Why is it not recirculated? Well, there's a problem: if you start giving that to, say, the other validators, then you have an incentive to censor, and you can now grief validators. If I'm a majority coalition, I might then be like, well, fuck the... sorry. It's okay, they say whatever they want on here all the time. Bro, screw this other one-third: let's take all their rewards, let's just censor them. So you're creating pretty perverse incentives there. It increases the possibility of game theory around it, right? If you have a large validator set, like somebody who has a shitload of validators, and they say, I would like to get more money, they have the resources to censor other people. And maybe governments are doing this: they can cut off the Internet, and then if that particular subset of people can no longer validate, all of their resources get redistributed across the network, and if the attackers are a big portion of the network, they may get more money.
So you don't want to give people the option to play those games. Right. And essentially, by reducing supply, a burn at least proportionally distributes value to all ETH holders, whereas giving it to the validators only gives it to the validators, and gives them an incentive to do bad things. You could definitely imagine the community coming around to some sort of proposal to make some kind of community fund, or... there are tons of options there. It's a matter of whether people want to do things with that. I haven't seen any EIPs like, let's use all the ether at the burn address to make a community fund. It's interesting, but as a core protocol developer I'm like, let's just get everything right. I don't really want to think about the governance around redistributing burned ETH. But it's an open platform, and the community could decide on forking around things like that if they're so inclined. It just becomes particularly dangerous. I love the World War Three scenario because...

...in the event of the World War Three scenario, you have all these validators in a nation state, and let's say it has nothing to do with a network partition or people cutting each other off; what's literally going on is that network outages are happening all across the globe, validators are getting bled out, and money is being hit... actually, it's directly being hit, through violence. Or, let's remove the whole World War Three thing and talk about climate change, which is something I'm particularly worried about. A major catastrophe happens: Yellowstone blows up, whatever, and we have major super-volcano events or something. Network outages are random; we have solar flares or whatever. These are things that can happen and can impact a particular nation or even the globe, and it's not even related to World War Three. It's just stuff that happens, that will eventually happen, that will be a catastrophic scenario. All this value being stored in validators (and by the way, I foresee a lot of people investing in validators) will be burned, and that will hit the economy in some way, and the total value per ETH will go, I guess, up. And so individual holders who are using this as some way of paying for their bread and their milk... because the value of their ETH has kind of been hit, there will be some immediate economic consequences surrounding the fact that this is burned. I don't think you can build a system of rules that covers catastrophic events completely, one that completely ignores the decision-making process in the aftermath of those kinds of catastrophes. In a scenario like that, say it is burned, the price of ether goes up...
I would hope there would be a social rally around getting help to those people, based on the new value holders just got from all the burned ETH. I would hope. And sure, if we burned ten percent of ETH with nothing else happening, I would think the value of individual ETH would go up. But you're talking about ETH burning during a catastrophe. Yeah, there are so many variables coming into play, right, that might push it up, but who knows. Yeah, I just see it being reasonable to consider the idea that, instead of burning, we put some sort of vote mechanism in there for recovering that ETH on a regular cycle. Yeah, maybe, right. An EIP, maybe. Anyway, that's actually another question I'm not clear on, and maybe you can help me out. We're staking on the proof-of-work chain, at least initially... I think you're depositing from the proof-of-work chain. Yeah, and then you're really in the beacon chain: you have exited the proof-of-work chain and you've entered the beacon chain. Yeah, I thought that was the mechanism in general. I thought it was a burn on the EVM and then, basically, hey, congratulations, you're burned, we have proof that you burned something, you're now on the beacon chain. Was that right? Yeah, burned in the sense that it no longer exists in the EVM and now exists in that other, separate part of the protocol. I like to use the term directional deposit, or one-way deposit. But okay, so there's the question of: are we setting up two Ethereums? How is this different from some sort of fork, meaning that the proof-of-work chain still runs proof-of-work-based contracts?
Are we going to force everybody to start operating contracts on the new chain? Because state is not going to be automatically transferred, as you said. So is this not like two separate Ethereums at this point? It's transitional; it's a way to gradually upgrade. At first it looks pretty separate, and for purposeful reasons, because it allows us to really innovate on the new side. But even in the shorter and medium term, we can actually utilize the beacon chain for positive things on the proof-of-work chain. I didn't mention that part of the beacon chain protocol is, when a block is proposed, to bring in a recent proof-of-work hash reference. In doing so, when we finally have the proof-of-stake chain, we can choose to defer the proof-of-work fork choice to the proof-of-stake chain, in pretty much the exact same way that the Casper FFG contract was going to become the root of the proof-of-work fork choice: the beacon chain can then become the root of the proof-of-work fork choice. That way, when we're happy that the beacon chain seems to be a stable protocol,...

...we can take some security gains from the new beacon chain protocol and begin to more tightly couple these two things. And in the long run, yes, it's going to require some sort of fork to ultimately decide what to do with the EVM chain and where to place it in the ETH 2.0 chain. And, like I said, the EVM chain is the plane in flight: tons of people are building on it, and you want to keep that momentum going while at the same time building out this new infrastructure, handling our community, keeping the community going strong on ETH, and then, in the medium term, start transitioning and moving the community over. And Corey, you brought up a good question earlier that I didn't quite answer. The simple answer was: ETH 1.0 state doesn't exist in ETH 2.0, except maybe in the future when we roll it into a contract or into its own shard. But in the medium term, once ETH 2.0 state execution exists, I might say: I'm Augur. I have a token, I have this prediction market contract, and I have a user interface that builds these contracts and lets people participate in the prediction markets. I can easily redeploy that contract on ETH 2.0, and I can make that user interface show the ETH 1.0 prediction markets, which will eventually be moot when they all resolve, but also show the ETH 2.0 ones, and when people make a new prediction market, I can make the default point to this new contract on ETH 2.0. And so I'm gradually transitioning my community over. I also have a token; there are some complexities there, like, how do I... actually, I think they hard-forked their token before, right? I think they had an issue... yeah, a bug.
It was written in Serpent, and they orchestrated a community fork to essentially take a snapshot of the balances and create a new contract. Augur might choose to do something similar. They might say: okay, as a community we're going to go to Eth 2.0, we're going to make a new contract, and we're going to take a snapshot of the balances on this date. So there's a lot of social complexity there; from a technical standpoint it's not massive. Or they might say: Eth 1.0 is great, we're happy here, and we're going to wait until we get rolled into sharding in the longer term. Yeah, each project can address that type of thing respective to the complexity of their platform. It's a beauty, but it's also a pretty serious complexity in a sense. As long as you give them options and paths, they can do it appropriately. Okay, so maybe you have to pause your app for a week in some circumstances if you're doing a massive upgrade, or you make choices to do it gradually in a way that is seamless to the end user. There are a lot of ways in which you can do it if the paths are there, and it seems as though you're making them. Correct, and I hope that tools, best practices, and a very rich community discussion on how to do this are going to emerge when we get there. It's going to be good. Still, what's kind of perplexing me, and maybe you did address this and I just don't understand it yet, is that it still feels like you're creating two Ethereums. The economic value on the beacon chain is separate from the economic value on the proof-of-work chain, at least initially, right? Yeah, people call it one thing or another; it would basically be an Eth 2.0 token.
Some people are concerned about the difference between those, right, because we have, at least in the medium term, a one-directional movement of ether from one side of the protocol to the other. At first it's just through validation, and, which is kind of another interesting thing we can talk about, validators can't actually withdraw. They can't go anywhere; they just need to keep validating, at least until their destination exists in sharding. But when state execution exists in sharding, I would like to see a similar contract that allows people to also deposit over into the sharded system. So there's at least a one-directional opportunity for arbitrage, in the sense that if the Eth 2.0 token is overvalued compared to the Eth 1.0 token, you are going to very quickly see people transferring over that bridge into Eth 2.0. And in my prediction,

in my opinion, that asymmetry is probably going to be in the other direction. But you could imagine an asymmetry between the two. That kind of makes me curious why you're going to release in phases rather than just release on a test network and basically do everything there. Why not have the Eth 2.0 test network and then, when everything's finalized and we know how everything works, how it all interacts, one hundred percent where we need to be, then you start. Maybe I've misunderstood the process of the phases, but to me it seems like if you were to roll this out in live hard-fork phases like that, it would be kind of problematic, in the sense that you are fracturing the value of Ethereum on the proof-of-work chain to basically create a new currency. Yeah, I'm not quite as concerned as you are about that, because you don't learn a lot of things until you have a real network. A lot of these things that we're going to be doing: we're rolling out a new P2P layer, essentially a sharded pub/sub layer where people can tag their messages, like am I talking about this shard or that shard, so if I'm assigned to be a validator on shard one, I can subscribe to the shard-one topic, and so there are these partitions in the P2P layer. We're using some really interesting software, probably libp2p, and we've been doing a lot of simulations and a lot of experiments on that.
But a lot of this is new stuff, and if we wait until everything is ready to go on a testnet, I don't think we're truly going to be able to have vetted a lot of this stuff at scale. Whereas if we do it incrementally, and we do it in a way that's pretty isolated from the original protocol but still puts in value and the opportunity to make money for these validators, incentivizing validators to essentially join the real network, we're going to see these new things operate at scale and figure out the bumps and issues along the way. If instead we wait until state execution, shard chains, data availability, and cross-shard communication are all ready and then roll it out, only then are we going to see some of these problems really come out at scale, and we're going to be dealing with all the problems on the fly rather than iteratively. Another interesting thing, which is kind of cool, is that the validator rewards scale with the square root of the total ETH validating. These numbers are very much up in the air, but here's an easy number to think about: say when ten million ETH is validating, validators are making approximately five percent returns per year. If only two and a half million ETH is validating, validators make ten percent a year. If forty million ETH is validating, validators make two and a half percent. So we have this kind of early-adoption phase of the beacon chain where there's a lot of risk associated; validators can't even withdraw yet. So you have to really be an early adopter, eager to participate in the system.
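The reward curve Danny describes can be sketched numerically. This is a toy model back-derived from the three data points he quotes in the episode; the constant and the exact functional form are illustrative, not taken from the spec:

```python
import math

# Annual validator return modeled as inversely proportional to the square
# root of total ETH at stake, anchored at ~5% per year when 10M ETH is
# validating (the figures quoted in the episode; illustrative only).
def annual_return_pct(total_eth_staked: float) -> float:
    return 5.0 * math.sqrt(10_000_000 / total_eth_staked)

print(annual_return_pct(2_500_000))   # 10.0  (fewer stakers, higher return)
print(annual_return_pct(10_000_000))  # 5.0
print(annual_return_pct(40_000_000))  # 2.5   (more stakers, lower return)
```

The square-root shape means halving total stake raises returns by a factor of about 1.41, not 2, which dampens the boom/bust swings a linear rule would create.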
You might not have nearly as much ETH come up to participate at this point in the protocol, but the validators that do participate might see outsized gains compared to a year or two down the line, when the protocol is more stable, has more features, and you can withdraw. Then more ETH is going to show up and you might have much lower returns, because the risk and time horizon are much different. And so in the short and medium term you might see strange asymmetries between the values of these two sections of the system. I keep saying sections and subsections and different components of the protocol because I truly do see this as one protocol. I see this as a road to get from point A to point B, but all under the same umbrella. There are going to be some growing pains, and there might be some asymmetries between the value of some of the components, but rolling out iteratively is the more prudent and conservative approach that I think is going to get us safely where we're trying to go. I would also call it building infrastructure for an entire ecosystem. We're trying to build different...

...paths and different roads and upgrading them in various ways, but ultimately we're serving the same people, and that's what I think we need to keep in mind: at the end of the day, this is a big group of people and we're just building tools to help them do shit. Yeah, I really like the idea of validators investing in the future of Ethereum. That's one of the feelings I got early on, because I bought ether between two and three dollars. I felt like I was investing in a network that was going to go places. I believed in it, and right now it just feels like you're playing with tokens, but this actually feels, again, like you're investing in the future of Ethereum. You're taking your money and putting it where your mouth is, and you're hoping to gain some rewards as a result. I feel like that's a really wise way to look at it. So again, the early adopters of the beacon chain validation protocol have the opportunity to probably see higher gains than the later adopters. So I have a couple more questions, and they're more implementation questions. They're not really complicated, I don't think. But I noticed a slot is a period of eight seconds, and I'm kind of wondering where that number came from. We like powers of two; you've noticed pretty much everything's a power of two, and it makes the math fun and nice. But based off our network simulations and the requirements of the network, we believe that eight seconds is probably going to be an appropriate number. When we actually get a testnet up, we might find that under optimal conditions it's okay but it degrades quickly when not operating under optimal conditions, and if we find that, we'll adjust the number accordingly. It's our best guess right now.
Okay. Another thing that I think we need to go into for our audience, and probably should have gone into a little earlier, is the difference between active state and crystallized state and what that means. Right. So we have this notion of a cycle. We have slots, each eight seconds, in which one block can be proposed per slot. Sometimes you can have missing slots, where you didn't have a block proposed. But then we have a cycle, which is 64 slots, for accounting purposes of updating the state. We only really update the big, bulky state, which we call the crystallized state, every cycle. And so every cycle we ask: okay, what happened in all these blocks? What attestations do we have, which operate as kind of a Casper vote? What can we finalize or justify on these cycle boundaries? What crosslinks can we update? The big work we do on these cycle boundaries gets updated in the crystallized state, the much larger state. The active state is really just the accumulation of all the little things that are happening in the blocks. So a block comes in, and the bulk of the work of processing a block is: are all the attestations in this block valid? We process the signatures, we add these attestations into our active state, and we do the bulk update when we get to the cycle boundary. And by separating the state like this, it actually helps us serve light clients and helps people follow the protocol without having to validate everything.
And so we chunk everything into the active state so that we can really serve it to people and prove things about it, and then we do the big update in the crystallized state, and then we can also serve things based off of that state root to other people and prove things about it. Yeah, these act like checkpoints. Sorry, go on. The crystallized state grows with the number of validators, and my basic math says it's going to be probably around six hundred megabytes for each crystallization. Is that accurate? That sounds roughly accurate. It depends on the number of validators. We were talking about this the other day in terms of, as a validator, what do I actually have to keep around? The naive answer is that I only have to keep around everything since the last finalized state, but that actually isn't going to help me catch bad validators. If people do something nefarious, I want to be able to notice it and prove it to the chain via a slashing condition, via a slashing message, and so in that sense I probably want to keep more on...
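The two-level state split described above can be sketched as a toy loop. All names here are illustrative, not the spec's actual structures; the point is just that per-block work stays cheap while the heavy accounting happens once per cycle boundary:

```python
CYCLE_LENGTH = 64  # slots per cycle, as discussed above (8 seconds per slot)

class State:
    def __init__(self):
        self.pending_attestations = []   # "active" state: cheap, per-block
        self.processed_cycles = 0        # "crystallized" state: bulky, per-cycle
        self.total_attestations = 0

    def process_block(self, slot, attestations):
        # Per-block work: validate signatures and buffer attestations.
        self.pending_attestations.extend(attestations)
        if slot % CYCLE_LENGTH == CYCLE_LENGTH - 1:
            self._cycle_transition()

    def _cycle_transition(self):
        # Cycle-boundary work: in the real protocol this is where
        # justification/finalization, crosslink updates, and reward
        # accounting happen. Here we just fold the buffer in.
        self.total_attestations += len(self.pending_attestations)
        self.pending_attestations = []
        self.processed_cycles += 1

state = State()
for slot in range(128):                   # two full cycles of blocks
    state.process_block(slot, [f"att-{slot}"])
print(state.processed_cycles)     # 2
print(state.total_attestations)   # 128
```

Light clients benefit because they can follow the small active-state updates and only occasionally verify the large crystallized-state root.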

...the order of the things that have happened in the past four months. And primarily, I don't necessarily have to keep the crystallized state at every checkpoint. I don't have to keep exactly this crystallized state and exactly that crystallized state; I'm actually probably keeping references to attestations from the past four months of crystallized states, and I can rebuild each crystallized state as needed, if I need to. There are some implementation details around that, but because we're not dealing with transactions and arbitrary state access, the state doesn't really blow up in the same sense that the EVM, or even a shard chain, does over time. When you say attestations, do you only care about the beacon chain side, or do you care about what's happening on the shards themselves? Do you need to keep all 1,024 shards' states updated, or, because the attestations exist on the beacon chain, is it going to be just some of the shards? And it also brings up a question that we probably should have addressed: cross-shard communication. How are these validators validating? Across all shards, or just a shard they're interested in? Right. So when the RNG shuffles me, it's going to tell me what shard or shards I'm responsible for. I'm on the order of one to two shards at any given time, and my responsibilities on that shard are to build and attest to that shard, so it kind of mirrors what's going on in the beacon chain. But I'm going to be shuffled much more slowly onto these shards than I'm shuffled around on the beacon chain in terms of my crosslinking duties. I might be on a shard anywhere from two weeks to a month and a half. And validators are shuffled continuously and slowly.
So I don't just get to the end of a month and everyone switches shards. It's more like: okay, validator zero, you're now over here; then some time passes; validator one, you're now over here. By doing that you get a lot more stability in who has the state of each shard. My role is to build these shards, but also, as part of a committee attesting to these shards, I have to sync the shard from the last crosslink, and for the things that I'm crosslinking in, I have to actually stake and attest to the fact that the data is available. So there's a whole other component of the consensus: a game around attesting to the availability of shard data. The validators are not only saying "this is the crosslink" and not only saying "we should finalize the beacon chain here," but also saying "and I stake my money on the fact that this data is available." So as for my requirements: because I'm always validating on the order of one to two shards and always attesting to a shard at a time, I constantly need to have the resources for a few shards of data around. My requirements are going to always be that I have the full beacon chain, I have at least what would be considered a full sync, though it could be a pruned full sync, of a shard, and I have these snippets of state from the various shards that I've had to attest to, along with attesting to the availability of their data. So I still have on the order of consumer-grade resource requirements; if the shards are getting big, I have to be able to handle the shard state, but in a pruned way. You don't necessarily have to be an archive node on every shard. And just to clarify:
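The "continuous and slow" reshuffling can be illustrated with a toy simulation. This is not the spec's shuffling algorithm (which is deterministic and RNG-seeded); it only shows the design property that a small fraction of validators move per step, so most of a shard's committee already holds that shard's state at any moment:

```python
import random

def gradual_shuffle(assignments, shard_count, steps, rng, per_step=1):
    """Reassign `per_step` randomly chosen validators to random shards
    per step, instead of reassigning everyone at a period boundary."""
    assignments = list(assignments)
    for _ in range(steps):
        for _ in range(per_step):
            v = rng.randrange(len(assignments))
            assignments[v] = rng.randrange(shard_count)
    return assignments

rng = random.Random(0)
start = [0] * 100                      # 100 validators all on shard 0
after = gradual_shuffle(start, shard_count=1024, steps=10, rng=rng)
moved = sum(1 for a, b in zip(start, after) if a != b)
print(moved <= 10)  # True: at most 10 validators moved; the rest are stable
```

Contrast this with a hard epoch boundary, where all 100 would move at once and every committee would have to re-sync its shard from scratch.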
You're believing that each shard would have its own state, so you would deploy a contract to a shard, and only one shard. Is that correct? Yes, unless I wanted to deploy a contract to multiple shards. You can't do that; you have to then rely on crosslinking mechanisms to actually share value between those two. Cross-shard communication: what's that looking like? It's a super active area of research, an exciting area of research. In terms of asynchronous cross-shard...

...communication: solved. It's not a hard problem, but you end up with a kind of large latency, in that if I'm on shard A and I want to communicate with shard B or transfer value to shard B, then essentially on shard A a receipt is created that can then be consumed on shard B. But it can't be consumed on shard B until my transaction on shard A is finalized via the beacon chain, that is, until a crosslink is brought into the beacon chain from my shard A. Once that happens, my receipt can then be consumed on shard B, and that cross-shard communication has happened. But if it has to communicate back, then we have to do that again. These things are happening on the order of a cycle or multiple cycles, which is on the order of eight to ten minutes. And so for a lot of things maybe that's okay; it really depends on the use case, on how user-facing some of these things are, and on the requirements of the system. But asynchronous cross-shard communication can happen; it's not too difficult. The more exciting area of research is synchronous cross-shard communication. It's not something I spend a ton of my time thinking about, though there are a lot of people thinking about it, but the idea is: can we do better than that? There's some really interesting work in essentially probabilistic state execution: maybe the receipt will be consumed in the future, but can I, probabilistically, at this point assume what the state is going to resolve to, and how confident can I be? Ninety-nine percent sure? 99.999? If I am, then okay, cool, let's just assume that it happened. It's kind of like optimistic UI updates, if you want to look at how that works in the front-end world.
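The receipt pattern Danny describes can be sketched as follows. This is a minimal illustration, not the spec's actual mechanics: shard A deducts the value and emits a receipt, and the receipt only becomes spendable on shard B after shard A's state is finalized via a crosslink in the beacon chain:

```python
class Shard:
    def __init__(self):
        self.balances = {}
        self.crosslinked = False  # set once the beacon chain crosslinks us

    def send_cross_shard(self, sender, amount, target_shard_id):
        assert self.balances.get(sender, 0) >= amount
        self.balances[sender] -= amount
        # Receipt: (tag, target shard, recipient, amount), consumable later.
        return ("receipt", target_shard_id, sender, amount)

def consume_receipt(receipt, source_shard, target_shard):
    _, _, recipient, amount = receipt
    # The receipt is only valid once the source shard's state has been
    # finalized via a crosslink in the beacon chain.
    if not source_shard.crosslinked:
        raise RuntimeError("source shard not yet crosslinked")
    target_shard.balances[recipient] = target_shard.balances.get(recipient, 0) + amount

a, b = Shard(), Shard()
a.balances["alice"] = 10
r = a.send_cross_shard("alice", 4, target_shard_id=1)
try:
    consume_receipt(r, a, b)   # too early: no crosslink yet
except RuntimeError:
    pass
a.crosslinked = True           # beacon chain crosslinks shard A
consume_receipt(r, a, b)
print(b.balances["alice"])  # 4
```

The latency cost is visible here: each leg of a round trip waits for a crosslink, which is why a request/response interaction takes multiple cycles.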
But then there are also other potential additions to the protocol. There was a really cool ethresear.ch post recently from Vitalik along the lines of: when I make a transaction that's going to be a cross-shard transaction, I also specify everything it's going to touch, and by doing so I can kind of isolate this transaction from other potential cross-shard transactions, and the validity of the transaction can be premised upon whether it was included in a block that didn't have any other things touching what it was going to touch. Again, that's about as much as I'm going to say on synchronous cross-shard communication; it's not something I've been spending a ton of my time thinking about. So that brings up another thing. If I'm going to address a contract in this system, the contract exists on a shard. Do I need to say "this contract at this shard," or is the contract just known across the network, with the beacon chain having some sort of reference to where the contract exists? The address space and how these things are addressed still need to be locked down. It would probably be a combination of shard ID and address, and that's where things would live. There are a lot of things to consider in terms of user interface. I don't want my users to think about this; I don't want them to have to at all. The next phase of web3.js and web3.py needs to really consider what we want to expose to developers, and then developers really need to consider what to expose to their actual users. That's a little bit out of my area in terms of user experience, but there's a lot that the community needs to start digesting about what this might look like, and to start thinking about what might be best practices for exposing how people can interact with the sharded system.
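A "combination of shard ID and address" might look something like this. The encoding and the string format are purely hypothetical, since, as Danny says, the real address scheme had not been locked down; tooling like web3 libraries would be expected to hide this pair from end users:

```python
from typing import NamedTuple

class ShardedAddress(NamedTuple):
    shard_id: int   # which of the (e.g. 1024) shards the contract lives on
    address: str    # hex account address within that shard

    def __str__(self):
        # Hypothetical human-readable form; not a standardized format.
        return f"shard:{self.shard_id}/{self.address}"

loc = ShardedAddress(512, "0xdeadbeef00000000000000000000000000000000")
print(str(loc))  # shard:512/0xdeadbeef00000000000000000000000000000000
```

A wallet or library could resolve a plain name to such a pair behind the scenes, much as DNS hides IP addresses from users.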
Yeah, it seems to me like all you need is an address, and the beacon chain can keep track of the address of the contract and tell you where the contract resides. It's a lookup, right? It's basically a DHT. And not even that: you could literally just say the beacon chain has a naming system, sort of like ENS, where you say, hey, this is where all the contracts are. Right now, one of the problems I kind of...

...see happening is: I exist on shard 512, and then for some reason a CryptoKitties pops up on 512, and they're bogging down my transactions because I'm on 512, and my application is negatively impacted by this other application. Why can't I just move my application to a different shard? To me that should just be a simple kind of state-swap request put into the beacon chain that swaps everything over. Right, so the beacon chain at this point, in terms of the state of the shards, is very decoupled, and for simplicity in design. By decoupling the state of the shards and just linking them through these crosslinks and through the data availability proofs, to prove that at least everything's available on the shards, we can have a really clean beacon chain design. But in doing that we lose the ability for the beacon chain to be this load balancer, in the sense that the beacon chain is not responsible for that. You could imagine a system, and I don't know the relative complexity of that system, where this core beacon chain thing is not only finalizing all the shards, finalizing itself, and producing the RNG, but is also monitoring the load on each shard and shifting things around. That has been discussed and thought about, but the complexity seems to blow up pretty quickly. Right now the idea is more of an economic load balancing, in the sense that, you're right, a CryptoKitties shows up on 512 and now I'm like: what the hell, transaction fees are so high, I might want to get the hell out of 512. And is there a mechanism to do that? Potentially. One version of cross-shard communication is this idea of yanking or locking contracts, where contracts can move between shards.
Yeah. I might say I'm trying to book a train ticket and a hotel room, and the contract that represents the hotel room and the contract that represents the train ticket are on separate shards. So there's a hotel contract, and it has individual contracts for each room. I might yank the room contract over to the shard that has the train contract, and now I atomically book both. This mechanism is kind of like a lock, but kind of a yank, and then I can send the contract back over. And if we have a mechanism for yanking, and a contract can declare that it's allowed to be yanked, then we could imagine that if there's an owner of a contract, they'd be allowed to move it around, and maybe that's the only thing they're allowed to do. It seems to be a little bit of a dangerous design decision to start allowing people to move things around, because people are expecting things; somebody might expect your contract to exist on a given shard. And so you might end up needing this shard lookup. It would be easier if you could say this contract exists on this shard, right? It's interesting; that's something I need to think about a little more. But if you have something like a memory management table, like hierarchical memory tables, it shouldn't be that big. I mean, it wouldn't increase by a tremendous amount; it would increase by the number of contracts deployed, but not a ludicrous amount. They could all exist on the beacon chain, and it could just say this contract exists in this place as of this crystallized state, and you can only swap over on crystallized states. Yeah, interesting. It's probably not going to go in the initial design, but it's something I'm going to mull over a little bit. All right, I think we should start to wrap up.
First off, thanks for coming on, and especially for going longer than normal and answering a lot of these questions. I think it's going to serve tremendous value to the community as a whole, because I know a lot of people have very similar questions; they're asking me them. Is there anything that we didn't get to that you think we should have, or anything you'd like to say to the community? Overall, I think we covered most of it without getting into the crazy nitty-gritty. I would say that we do an Eth 2.0 implementers call every two weeks. We have one tomorrow, on Thursday. I don't know when this...

...episode is actually going to come out, but Thursday, September twelfth. There are like five or six teams now implementing the new protocol. I know Harry's actually going to be on our next call. They're getting excited as the spec has solidified; they're interested in starting to implement the new protocols. There's a lot going on, and a lot still to do. If you're a developer, get involved. A lot of these teams are using the tag "good first issue" on their GitHub repos. That's how I got involved with the ecosystem in general: I just started working on Piper's Python repos because they had good first issues, and it's a great way to get started. So get involved, help out; this shit doesn't build itself. If you're more of a community member, watch the calls if you're interested in that kind of stuff. It's pretty cool stuff, and we're going to have a lot of exciting things to talk about during Devcon. If you're there, check it out, and if not, it's going to be a super cool livestream. You know, this stuff takes time, but we're doing it right, and it's coming. Awesome. Danny, thanks a lot, man, for coming through. I really appreciate it. I'll also see you at Devcon, so hopefully I can buy you a beer.
