Hashing It Out

Episode 12 · 4 years ago

Hashing It Out #12: Full Casper Chain v2 - Casper + Sharding

ABOUT THIS EPISODE

A set of notes was just released for a revised version of the Casper protocol which merges its goals with sharding research. It uses a beacon chain as a source of randomness, with the RANDAO central to that chain. This enables a committee of validators to be selected to validate individual shards rather than the entire chain, signing with BLS signatures. Corey Petty and Collin Cusce discuss these notes and try to suss out what this means for Ethereum if it were implemented.

https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ?view#Beacon-chain-state-transition-function

https://github.com/randao/randao

https://arxiv.org/pdf/1710.09437.pdf

Welcome to Hashing It Out, a podcast where we talk to the tech innovators behind blockchain infrastructure and decentralized networks. We dive into the weeds to get at why and how people build this technology and the problems they face along the way. Come listen and learn from the best in the business so you can join their ranks.

All right, everyone, episode twelve of Hashing It Out. Doing something a bit different today: we do not have a guest. We're actually going to talk between ourselves, so it's me and Collin. Say what's up, Collin. What's up. And we are going to walk through — that was all sultry and stuff. That was pretty hot, quite sultry. We are walking through the new Casper plus sharding spec released recently for Ethereum, and how it's changing from the standard — I guess those were two different initiatives in the scaling roadmap for Ethereum. They're combining them because there's a lot of redundancy, and it seems to be a more solid roadmap on which they can continue, at least that's what they believe. So we're going to walk through the new proposal, ask questions about it, have a conversation, try to figure out what's going on, and explain it to those who are curious about it as well. Hopefully by the end of this experiment we'll all be on a better page about what Ethereum is trying to do to scale from a proof-of-work blockchain.

So the way you just described it makes it sound really final, and I've seen a lot of links out there that make it sound super final too, but to my understanding at this point, this is the result of a call. I mean, they've worked on this for a while, but at the latest Ethereum core dev call, or whatever developer meetup it was, they're just like, hey, I've got this alternative solution.
It seems like it would reduce a lot of the redundancy that we're doing with Casper FFG and Casper CBC. So everybody knows: Casper FFG is the first iteration, and Casper CBC is the final, holy-grail, proof-of-stake-as-we-know-it iteration. And then there's this whole concept of sharding that people have been working on, and they saw a lot of redundant work going into these, and they're like, oh well, maybe it makes more sense to do sharding plus Casper at the same time, because you can do much better sharding much more easily if you have a better proof-of-stake mechanism built into it. And that's kind of what they seem to be concocting now. I don't think any of this is finalized, but they have talked about deprecating — what is it, EIP 1011? Yeah — as a result of this. So it could make this whole FFG effort a little moot, or at least the CBC one for sure, but that's fine if it works better and comes together quicker. So I don't want to call this final, but it's definitely a movement toward coalescing a lot of previous work that was somewhat disparate into a single channel, with everyone following the same thing, and the framework they've laid out here, at least on a surface level, seems to do that quite well — although there are some massive, fundamental changes to how the Ethereum ecosystem works based on what they've laid out here. Yeah, the way they laid it out, I'm...

...not entirely clear yet if this is actually — I think it's a start, but I'm kind of looking at it and going, there are some things I just don't understand yet based off these notes, and I've read them like four times by now. I've posted issues on their GitHub to get some clarification about stuff. It's not very specific; it's very high-level in general. I think this is a proof of concept that shows the protocol — the way they handshake things — could kind of work, but there's still a lot of stuff to iron out here, because the bottom of the notes says this is at most eighty percent done in terms of the things they need to flesh out, and even the things they quote-unquote flesh out here still need more fleshing out. But I want to make sure people get an idea of the mindset of the Ethereum developers and where they would like to go, or the solutions they're thinking about, in terms of solving scaling. So let's just start.

The idea is to have basically a new chain — a series of chains, really — with the base layer being a RANDAO-powered proof-of-stake chain. RANDAO was basically a decentralized autonomous organization that produces random numbers. Well, RANDAO is actually a contract. Okay. So the base is the beacon chain. From what I gather — and I'm sorry to cut you off — they're still going to have the main chain. You have a main root proof-of-work chain; that's how value is going to be generated at some level, from what I understand. And then people are going to start staking their ether, at a fixed, non-variable rate of thirty-two ether, into a side chain called the beacon chain. The beacon chain has, in its base-layer protocol, a RANDAO contract, and RANDAO is like random...
It probably means something as an acronym — the DAO part is decentralized autonomous organization, right? Yes, autonomous. Yeah, RANDAO. It's just a random number generator; that's what it is. And I think we've mentioned on the show before — I mean, you brought this up, this is your concept as you explained it to me — that really, what proof of work is, is just generating random numbers. Its whole purpose is to generate enough randomness to confirm something, so you can sign something in a way where we know no one person has control over that random process, using that randomness as a source of truth for a time period. So my fundamental understanding of this is that the main chain is still going to exist — they reference it quite a bit in this paper — and then there's this beacon chain, and the beacon chain essentially acts as that beacon of randomness.

Okay, so the main chain will exist, but what they're saying is that this whole system we're about to describe is a movement away from the main chain into this new system. So you would deposit — which requires two different changes to the main proof-of-work chain as we know it — via a contract that basically says, I am staking my thirty-two ETH from the proof-of-work chain onto a shard number of the proof-of-stake chain that we're about to discuss. That's not the way I see it. That's a burn: you're burning some coins on the main chain for coins on the new chain. I thought you can withdraw as well. That's what I'm not sure about, and it's not quite cleared up in this — a burn, a real...

...lock. I thought it's either a lock or a withdrawal. But there's a chance that this moves into a migration from the main chain to this system if it works out nicely, and so you move away from proof of work altogether, but you do it in a seamless way — hopefully a painless way — by burning on the main chain and creating on the other chain. But I'm not sure about that. Yeah, I'm not sure about that either, because if you look, the beacon chain has its own proof-of-stake system; it's basically a proof-of-stake chain, whereas the main chain is still proof of work. And there's a main-chain reference in every block on the beacon chain; it's baked in. One of the fields of every beacon-chain block is the block hash of a main-chain block. So I'm not sure this is actually a migration effort yet, just an expansion. It can be just an expansion; it might be a first step, but I don't see proof of work going away. For instance, with the current hash rate — if people stopped devoting a lot of resources to hashing and instead went staking — although it really wouldn't matter, staking is so light you could do both. I don't know. I cannot tell from this what the intent is. Are we migrating off proof of work? The functions of the main chain are baked into the foundation of this beacon chain.

Okay, well, at least we can agree that the beacon chain is to serve as a manager of the validators for all shards. Yes. Okay. So the beacon chain, which is this random number generator and, in some sense, also the main chain of the proof-of-stake system — it stores and maintains validators, it processes things called crosslinks, which we'll talk about in a moment, and it processes its own consensus and finality gadget, which we'll discuss. So it's basically the root for all of these shards that look at it.
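To make that role concrete, here is a toy sketch of what a beacon-chain block and state might carry, per the responsibilities just listed. The field names are our own illustration, not the spec's exact schema; the spec's real structures are richer.

```python
from dataclasses import dataclass, field

@dataclass
class BeaconBlock:
    """Toy view of a beacon-chain block: it references the PoW main chain,
    carries RANDAO randomness, and records crosslinks and attestations.
    Field names are illustrative, not the spec's."""
    main_chain_ref: bytes                              # hash of a PoW main-chain block
    randao_reveal: bytes                               # contribution to the RANDAO
    crosslinks: list = field(default_factory=list)     # committee attestations to shard blocks
    attestations: list = field(default_factory=list)   # consensus/finality votes

@dataclass
class BeaconState:
    """The beacon chain stores and maintains the validator set and the
    randomness used to assign committees to shards."""
    validators: list
    randao_seed: bytes

block = BeaconBlock(main_chain_ref=b"\x00" * 32, randao_reveal=b"\x11" * 32)
assert block.main_chain_ref is not None   # every beacon block references the main chain
assert block.crosslinks == []             # crosslinks accumulate as committees attest
```

The point of the sketch is just the shape: the beacon chain is the root that knows the validator set, the randomness, and what each shard committee has attested to.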
And this is the same structure as Dfinity, but instead of using the threshold relays that Dfinity uses, it uses a RANDAO for random number generation. So let's talk about the RANDAO real quick. Okay, so random number generation — I think we've encountered this a couple of times in previous podcasts. This is your concept that you taught me, Corey, so mad props for really breaking it down like this. But really, what a lot of this consensus stuff depends on is the ability to get a bunch of uncertainty and sign it. What you really want, at the very core, is for everybody to have the same random, deterministic number and then sign it — and if everybody's doing that, you know it's correct because everyone is on the same page. The problem is generating this new random number. Now, proof of work basically does that for you: there's enough unpredictable data coming in from multiple sources that it essentially becomes your random seed for producing the next block. In the case of Dfinity, it's actually using cryptographic methodology — BLS threshold signature schemes — to select a subcommittee of validators, fifty-one percent of whom have to sign a block. In this case it deviates quite a bit. It's kind of similar to what Dfinity is doing, but also not, in that it's using this RANDAO contract. As I know it, RANDAO is just a contract that they might bake into the protocol somehow — baked into the beacon chain. So it looks like it is a baked-in aspect of the beacon chain. Okay, previous iterations have...

...been like, you can actually go to RANDAO — github.com/randao/randao — and you can throw up your own RANDAO contracts. It's basically a method for having enough people say, hey, I want to be a signer, I want to throw my skin in the game for producing the next random number; the contract accepts them, a round pops up, they all sign, and that throws in enough random gibberish that when the next block comes up, anybody can pull out the next random number. Their stake in the game is basically, hey, we're going to get a little cut back on this, I believe, and then whatever's left over actually goes to the RANDAO contract, which is sent to charity.

You know, actually, I have the RANDAO contract up — the GitHub. There are three phases to the RANDAO. Yep. I can just read them all real quick so we know what they are. First phase: collecting valid sha3(s) values. Anyone who wants to participate in the random number generation needs to send a transaction to the contract, denoted C, with a certain amount of ETH as pledge, within a specified time period — for example a six-block period, approximately seventy seconds; I guess that's a parameter of the contract — accompanied by the result of sha3(s), where s is the secret number respectively picked by each participant. So it's a sha3 of s, the secret number picked by each participant; you send these sha3s of a secret number. Second phase: collecting valid secrets. After the first phase, anyone who successfully submitted a sha3(s) needs to send a transaction with the secret number s from the first stage to the contract, within a specified time period. So you send the sha3 of a secret, and then you send the secret afterwards, after the time period.
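The commit-reveal flow just described can be sketched in a few lines. This is a toy illustration, not the actual RANDAO contract: the combining function `random_number` (an XOR fold of the revealed secrets) is our own stand-in for the contract's f(s1, s2, ..., sn), and `sha3_256` stands in for the EVM's Keccak.

```python
import hashlib
import secrets

def sha3(data: bytes) -> bytes:
    # Stand-in for the contract's sha3/Keccak hash.
    return hashlib.sha3_256(data).digest()

class MiniRandao:
    """Toy commit-reveal RANDAO: phase 1 collects sha3(s) commitments,
    phase 2 collects revealed secrets, phase 3 folds them into one seed."""
    def __init__(self):
        self.commitments = {}   # participant -> sha3(secret)
        self.revealed = {}      # participant -> secret

    def commit(self, who: str, commitment: bytes):
        # Phase 1: pledge + hash of a secret.
        self.commitments[who] = commitment

    def reveal(self, who: str, secret: bytes) -> bool:
        # Phase 2: accept only secrets matching the earlier commitment.
        if self.commitments.get(who) == sha3(secret):
            self.revealed[who] = secret
            return True
        return False

    def random_number(self) -> bytes:
        # Phase 3 (illustrative f): hash of the XOR of all revealed secrets.
        acc = bytes(32)
        for s in self.revealed.values():
            acc = bytes(a ^ b for a, b in zip(acc, sha3(s)))
        return sha3(acc)

# Usage: two honest commits, one bad reveal that is rejected.
r = MiniRandao()
s1, s2 = secrets.token_bytes(32), secrets.token_bytes(32)
r.commit("alice", sha3(s1))
r.commit("bob", sha3(s2))
assert r.reveal("alice", s1)
assert not r.reveal("bob", secrets.token_bytes(32))  # wrong secret rejected
seed = r.random_number()
```

Note how the security property discussed below falls out of the structure: as long as any one participant's secret is truly random, the folded output is unpredictable to everyone until the reveal phase closes.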
The contract C will check that s is a valid secret by running sha3 against that s and verifying it against the hash you previously committed. A valid secret will be saved to the collection of seeds that finally generate the random number. So you pre-commit the hash of a secret, you then reveal that secret, the contract checks that it's valid, and then it adds it to a pool of valid secrets, which you can hash all together. Third phase: calculating the random number, and refunding the pledged ether plus a bonus. After all secret numbers have been successfully collected, the contract calculates the random number from the function f(s1, s2, ..., sn) over all of the committed secrets. The result is written to the storage of C, and the result is sent to all other contracts that previously requested the random number. So you basically hash a random number out of all the committed secrets that have been verified. The contract then sends back the pledge to the participants from the first phase, and the profit is divided into equal parts sent to all participants as an additional bonus. The profit comes from the fees paid by other contracts that consume the random number. So anything that consumes that random number is paying for it, and that profit gets split among the people who contributed a secret to the DAO. And that means this is a function of people submitting secrets: the more people that submit secrets, the more trustworthy it is, because only one person needs to actually be random — or truthful — for the entire output to be random. That's the idea of this thing. Okay, yep, that's a RANDAO. And so if you actually want to make sure the RANDAO is correct, you just participate. That's all you have to do. You throw in your own seed every time and you've got skin in the game. You don't even need to add something unpredictable like the block header,...

...because as long as you're providing your own random seed, they can't predict what the output is until it's done. So the problem with that is that it takes time — we're talking three phases. It initially takes time, yeah, but then it's kind of stacked: each additional block has the previous three phases running behind it. So it's fine, that's true. Cool. So yeah, the RANDAO is pretty dope. It makes a lot of sense — a pretty straightforward way of calculating this. The thing I wonder is: does it increase the cost to run if, say, you have five seeds compared to five million seeds? The DAO just gets a lot bigger. Is there a max? They don't have that specified here in terms of how many people contribute to the proof-of-stake system of the beacon chain. I bet it's all validators. It would have to be a random selection based off the previous RANDAO value. I imagine it to be part of validation, because the beacon chain is also storing and maintaining all validators in the system, and I would imagine all validators participate in this part of the system. Well, as far as the beacon chain goes, we already know it's doing a committee selection. We haven't gotten to that part yet, but if you look in this document, it's basically selecting a subset of validators to sign the block for a shard. So you basically form a committee on various shards for a given epoch, and an epoch is one hundred blocks. And that allows you to say: you're in charge of validating the blocks these shards produce. And the number of shards is a function of the maximum number of validators — at least that's what it says here.
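The epoch-based committee selection just described can be sketched as: take the RANDAO output as a seed, deterministically shuffle the validator set, and partition it into one committee per shard. This is our own illustration; the actual notes define their own shuffling algorithm, and the shard count here is arbitrary.

```python
import hashlib
import random

EPOCH_LENGTH = 100  # blocks per epoch, per the notes

def shuffle_committees(validators: list, randao_seed: bytes, shard_count: int) -> list:
    """Each epoch, deterministically shuffle the validator set using the
    RANDAO output, then partition it into one committee per shard.
    Sketch only: the spec uses its own shuffling function."""
    rng = random.Random(int.from_bytes(hashlib.sha3_256(randao_seed).digest(), "big"))
    shuffled = validators[:]
    rng.shuffle(shuffled)
    # Round-robin partition so every shard gets a committee.
    return [shuffled[i::shard_count] for i in range(shard_count)]

# Usage: 32 validators split over 4 shards -> committees of 8.
committees = shuffle_committees(list(range(32)), b"seed", shard_count=4)
assert sorted(sum(committees, [])) == list(range(32))  # every validator assigned exactly once
assert all(len(c) == 8 for c in committees)
```

Because the shuffle is keyed on the RANDAO seed, every node that knows the seed derives the same committee assignment without further communication, and the assignment changes each epoch as the seed changes.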
It's either a function of the active validators or the maximum active validators; I'm not sure which the document says — maximum active validators, but that might be for calculating the maximum computational burden of this. So, something that bothers me about that: a hundred blocks is fourteen hundred seconds, say at fourteen seconds a block, which might be high. That's enough time that if somebody happens to have a significant amount of validation power, they could do some damage. They shouldn't have a significant — you mean their shards happen to be the ones that actually have transactions on them, or what? I mean, they could suddenly dump a bunch of ETH into the validation pool. And it's harm-free to pull it out if you don't get what you want: you're a truthful actor until you get a majority of the validators. It would require a significant amount of ETH, but that's not unheard of; it's not impossible for a state actor to pull off. Well, the defense might basically be to reshuffle every block — that would be a much better way to select. Let me make sure — I think that's much more work. They don't want to — actually, they give the function right there to do the shuffling; it's pretty straightforward. Well, let's continue discussing it; we'll talk about the ins and outs as we get through the general concept of how this works. So you become a validator by submitting your thirty-two ether to...

...the proof-of-stake chain; you then basically get assigned a set of shards to validate. And if a shard wants to talk to another shard, you create these things called crosslinks, which is basically: I want to access the state of a different shard, so I need to go through the beacon chain in order to do that — in order for that state transition to actually happen and for both shards to know about it. So the main actual load of the beacon chain is processing the communication between shards, and that seems to be predetermined — I can't quite figure that out — by the committees that get allocated different shards when they join. That makes sense. I'm still kind of confused about the crosslink thing. When I read the description, it says a crosslink is a set of signatures from a committee attesting to a block in a shard chain. So it's basically a BLS signature — I think, yes, it is — which can be included into the beacon chain, will be included in the beacon chain. Crosslinks are the main means by which the beacon chain learns about the updated state of the shard chains. That's just for validation purposes, though. How would you know? That's how the beacon chain keeps track of what shards are supposed to look like. I think there must be something they're not talking about here. Well, the idea of sharding in general is that you separate the entire state of the system so that a validator only has to care about a subset of all the transactions that are happening. But if one shard wants to reference or move tokens to a different shard, you need some form of communication there.
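A crosslink as just described — a committee's set of signatures attesting to a shard-chain block, recorded on the beacon chain — can be sketched like this. The field names are our own, and a hash over the members' signatures stands in for real BLS aggregation.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Crosslink:
    """Sketch of a crosslink: a committee's attestation to a shard-chain
    block, to be included in the beacon chain. Real crosslinks aggregate
    BLS signatures; a hash over member signatures stands in here."""
    shard_id: int
    shard_block_hash: bytes
    aggregate_signature: bytes

def make_crosslink(shard_id: int, shard_block_hash: bytes,
                   committee_sigs: list) -> Crosslink:
    # Sorting makes the toy "aggregate" independent of signature order,
    # mimicking one property of BLS aggregation.
    agg = hashlib.sha3_256(b"".join(sorted(committee_sigs))).digest()
    return Crosslink(shard_id, shard_block_hash, agg)

# Usage: the committee assigned to shard 7 attests to one of its blocks.
cl = make_crosslink(7, b"\xab" * 32, [b"sig-a", b"sig-b", b"sig-c"])
assert cl.shard_id == 7
```

The beacon chain would collect these objects in its blocks; that is the sense in which crosslinks are "the main means by which the beacon chain learns about the updated state of the shard chains."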
That communication takes the form of going through the beacon chain, because that's the root, right? And in order to communicate through it, you have to attest that the information being sent from one shard to the other is correct, which is done in the form of a BLS signature from the validators who have been assigned the sending chain. It's like: this person wants to send information off this shard, and then the validators of the sending shard — at least a subset of them — say, yep, that's good. And once that happens, it's then been, I think, quote-unquote justified on the root chain, and the receiving chain can act on it. Right. So you need two blocks just to get your ETH from one shard to another. Are you staking per shard? I don't see anything here that talks about staking in a particular shard. No, I don't see that either. Right, so like value-transfer mechanisms, blah blah blah — well, each shard is its own blockchain. It can do anything; for all intents and purposes it's probably going to end up just being an Ethereum blockchain in itself. And it actually says that in the notes at the bottom — am I on the wrong page? It says: a shard chain is one of the chains on which transactions take place and account data is stored. But I think what's bothering me is: where is the value actually stored — the value you have stored in the shards? The shards are just individual Ethereum blockchains, and you need communication between them. And that's why I viewed the stake as a stake into a shard chain. Did it do anything? Yes — when you send your initial deposit,...

...you see that the contract that's on the main proof-of-work chain — one of the changes that needs to be made — it says: on the main chain a contract is added. This contract allows you to deposit thirty-two ETH. The deposit function also takes as arguments a public key, a withdrawal shard ID, a withdrawal address, and a RANDAO commitment. That means you make your commitment for the RANDAO, you say which shard ID you plan to be on — which shard you plan to withdraw the ether you're depositing to — and the withdrawal address to which you will withdraw. So you're committing yourself to a specific shard ID. I don't know where you figure that out. Yeah, you're basically saying: this shard is where I actually put my money, and from there you can move it among all the other shards via crosslinks, right? So the thing that bothers me is that those crosslinks aren't instant. Of course not — they can't be, right? Well, not the way this is built now. So I'm kind of like, hmm. Well, that's kind of the whole idea: each shard is big enough to be an Ethereum ecosystem in itself, so your application just lives on this one shard ID. But that's the thing I'm trying to avoid, and the reason is: say freaking CryptoKitties gets really successful on one shard. Now you have to reference multiple shards. How do you balance load across multiple shards? Why does it get big across multiple shards? No, no, it gets big on one shard. So CryptoKitties blows up, and suddenly that shard is really bogged down, and they decide they need to go onto two shards — let's just say they picked this number themselves and created their own shard, because I don't see anything about shard creation in here either. Shard creation is a function of the validator set. So, the number of shards —
It looks like there will be a maximum of — what is it — four thousand shards. Okay, so they don't get to create their own shard; it's basically baked into the system. At this moment shards seem to be pre-created. The validator set shuffles as to who's validating each shard at each epoch — it reshuffles who validates what shard every epoch — and so a validator will be assigned a given number of shards and validate all the transactions in those shards. Right, okay. So if they don't validate that shard, they're — I'm guessing — slashed? Yes, although the penalties for those types of things have not been laid out; that's part of the missing sections. Okay. So then CryptoKitties is like: I'm going to throw up my contract, I want to stick it in this shard. Each shard is basically its own self-contained thing; what we're really doing is just creating a mechanism for transferring between shards. So let's say CryptoKitties blows up on shard three thousand nine hundred ninety-nine. It's doing so amazingly that that particular shard is just caked with — I guess it wouldn't matter then, would it, because the validators will still have to validate all the transactions that go on across all shards. Yeah, so no particular shard could ever be imbalanced. I feel like — I don't know what the deterministic method for deploying contracts will be, because at the end of the day, the developer building on this new system has to somehow pick a shard to deploy to. Right, and that's the thing that's interesting me: can contracts reference other...

...shards, for instance? I don't know if you can, at the moment at least — especially since they use Blake, which isn't available in current implementations of Solidity. That'll be baked in down the line. Yeah — no matter what happens in the system, if, let's say, four million users suddenly dump ten transactions each across all shards, it doesn't even matter if one is focused on one shard more than another; the validators still need to churn through all those transactions, right? It's just distributed — it's no longer every validator processing every single transaction. And I kind of think about this in terms of how it was done for Kadena, except that Kadena is automatically load-balancing based on proof of work: they just do proof of work across whatever chains there are, using graph theory to figure out who processes what and how transactions move across chains. Here, instead, the RANDAO assigns the active validator set to look at various shards and write those things, and that shuffles every epoch. So you can't, you know, be the guy that always handles given shards; you're just automatically assigned various loads based on how much work each shard is doing. Say I'm an active validator: epoch to epoch, depending on how much work is on each shard, my load may change. So that CryptoKitties shard gets passed around, and whoever has it then has to handle that workload, while maybe some of the other shards don't get anything — and you can't predetermine which one that is, because those get assigned through the RANDAO, very much like Dfinity randomly assigns validator sets to whatever is built on top of Dfinity. Remember, we talked to them.
It's basically a precursor to sharding, because the consensus layer of each blockchain isn't dependent on the random number generation, right? And I actually wrote — I don't know if you've seen it yet — one of my comments on the issues literally says: here's the resemblance to Dfinity, here's the resemblance to Kadena, here's how it differs, here's my understanding of this, and then I have some questions. It seems okay right now; I just still have some questions about how a user would interact with this at the application level. Yeah. So, because we're going through every single — all the shards are processed basically somewhat evenly across the board, I would assume. But then there's really no load balancing in this that I see. So let's say one particular shard is being dealt a million transactions, or some ridiculous number of transactions, at once in one block period. Does that increase the transaction processing fee for that particular shard? It does, you know — the number of validators for that shard does not increase; it's pretty much fixed from what I see. So there's no load balancing; there will always be a fixed-size subset of the active validators processing that shard, right. And I don't know if they get lucky and happen to get more transaction fees based on the transactions that flow through that particular shard, or if that's evenly distributed, because if we look at the state transition, I don't see — and it looks like they'd get it, because they don't have anything like shard transition functions. So if you're validating a shard, you must just basically get that. Like, who gets to propose a block on a shard — a subset of the validators within that...

shard? I actually don't — I think all the transactions are broadcast. I don't know, actually. Oh wait, okay — that must be determined through Casper FFG, right, attestations in the Casper FFG mechanism. Okay, so here we go. There's a function called get_attesters_and_proposer, and it takes in state and does that attestation function, which does get_shuffling, and then the proposer is a particular person in the attestation. So it returns a list of the attesters — the attestation being the picked validators for this committee — and then it picks one from this literally randomly selected committee based off something called the skip count. Okay, what that is, is basically saying: in an epoch, a subset of the active validators will be assigned a shard, and each block, a person will be randomly chosen within that pool to become the proposer — and that's probably through the Casper FFG mechanism. What happens if that person shits the bed? Good question. They get slashed, but that also slows down the entire network. So you can't make your protocol depend on one person being honest, meaning that if I happen to be that person and I really wanted to screw with somebody, I probably have an attack vector there. Or if your computer's offline, your power goes out — those are legitimate reasons, but maybe slashing is a way to make sure that doesn't happen to you. That's what Dfinity does in terms of their slots, right: they randomly reassign who gets to propose a block based on a time period, and if you missed your time period, the next slot opens up, and then you'll have two people who can propose a block. And if they both miss it, the next time period opens up and three people can propose a block.
And that just basically ensures that as you increase the delay of people proposing a block, you increase the number of people who can propose one, and then you can slash based on the result of whoever submits a block in that pool of people. Why doesn't everybody just submit a proposal block and then you pick the one that's most common? I don't know. Well, it can't be most common, because the ordering of transactions is basically random. Yeah, true, but — and then you get cases where nobody is most common. I got you. Okay. Yeah, I don't know. I feel like there's a scoring system for it — you'd basically have a score based on, I guess, time of submission. I don't know; that seems to be stuff that either needs to be worked out, or is worked out and just isn't part of this document. I feel like I'm missing something there, because that seems like something they would catch pretty early. Or maybe we're misunderstanding it. Also, for those who are listening to this: if you have answers to any of these things we're ruminating on, please talk to us. Send us a tweet, send us a comment, send us an email, so we can figure it out, go through it again, and correct ourselves. And if we're saying something that doesn't make any sense, ask us to reiterate it and we'll try to get to that as soon as possible. So yes, let's rehash that first section. It selects a committee of validators using the RANDAO seed and the count of validators; it shuffles the validators in a deterministic sequence and then selects a random subset of those validators to...

...validate a particular shard. Then it needs to pick a proposer — someone who says: this is the block, this is my perspective of the block, the one that you're going to be validating. How do we know that guy's okay? Everyone's going to validate his perspective, which could be as valid as anybody else's. That guy has a little bit of power there. Yep. Well, that's kind of the idea — there's always going to be the power of the person who can propose a block, and the proposer is randomly selected amongst the subset of validators. To get a sense of the number of validators they're discussing: the minimal case, with one validator per 32 ETH, is about one million ether staked, or 31,250 validators. That's the smallest number of validators that seems to be in the pool; I don't think they're saying this will work whatsoever until it gets to that number. So this may not even start happening until you get to one million ether staked and you have 31,250 validators. The maximum, which is literally the worst-case scenario where every single person who owns ether stakes — which isn't going to happen — is four million validators. So as you increase the number from 31,250 toward four million, the number of validators looking at a specific shard increases. With the smallest number, that's 31,250 divided by 4,000 shards, or almost eight validators per shard — that's the minimal case, apparently. And then one of that pool of eight people gets randomly chosen to produce a block — or it might have to be seven, because of the odd-number requirement for BLS signatures, I think. But then someone in that pool will propose a block, and the pool then verifies that block through BLS signatures.
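The validator-count arithmetic quoted above works out like this (all figures taken from the conversation; the 4,000-shard count is approximate):

```python
DEPOSIT_SIZE = 32        # ETH required per validator
SHARD_COUNT = 4000       # approximate shard count discussed

min_validators = 1_000_000 // DEPOSIT_SIZE     # minimal case: 1M ETH staked
max_validators = 128_000_000 // DEPOSIT_SIZE   # worst case: entire supply staked

print(min_validators)                # 31250
print(max_validators)                # 4000000
print(min_validators / SHARD_COUNT)  # 7.8125 validators per shard in the minimal case
```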
If there's somebody that wants to move information outside of it, they have to submit a crosslink to the beacon chain to be validated, and the beacon chain — or another shard — can figure out what to do with it. So it's a hierarchy, right? The beacon chain is basically just saying: this is the validator set, these are the shards that have been assigned to the validators in that set, and this is the random number. So what's this going to look like for users, then? Let's start from the developer perspective. I'm going to throw up my application on a particular shard. Now I want that application to exist on other shards. Am I gonna have to have a state shard which all my other contracts talk to, and then send state updates to that? And will that delay the processing of my payments and stuff? This really does feel like it's going to depend significantly more on layer-two solutions. Well, it's also going to depend on a different naming scheme for addresses, because you're going to have to reference a shard ID whenever you send something. Yes, so addresses need to have some type of shard reference, and checksums across those things. And that seems like it's not even thought about yet. I don't know — I haven't looked at all the EIPs in flight based on this, but I think we could just append that to the end of the address, because addresses are less than 256 bits in length, right? So that wouldn't even be too much of an issue. I think from an addressing perspective it increases...
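One way the shard-qualified addressing speculated about here could look — purely hypothetical, not from any EIP or spec: append the shard ID to the 20-byte address and protect the whole thing with a short checksum, so a wrong-shard send fails validation instead of silently losing funds.

```python
import hashlib


def shard_address(address: bytes, shard_id: int) -> str:
    """Encode a 20-byte address plus a 2-byte shard ID, with a 2-byte
    checksum over both. Hypothetical scheme for illustration only."""
    payload = address + shard_id.to_bytes(2, "big")
    checksum = hashlib.sha256(payload).digest()[:2]
    return (payload + checksum).hex()


def validate(encoded: str) -> bool:
    """Reject any encoding whose checksum doesn't match its payload."""
    raw = bytes.fromhex(encoded)
    return hashlib.sha256(raw[:-2]).digest()[:2] == raw[-2:]
```

A two-byte checksum catches a fat-fingered shard ID with probability about 1 − 2⁻¹⁶; a real scheme would likely use a longer checksum and a human-friendly encoding.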

...it increases the screw-up space of sending something to the wrong place. So say you want to send something to a contract and you get the shard wrong — you get the contract address right, but the shard ID wrong. If there's a checksum built in, the way that contract addressing works, it's going to be damn near impossible to squat a contract address to exploit that particular situation. Though I think it makes sense if contract addressing actually took the shard ID itself into account, so if you post the same contract on multiple shards — the exact same contract — it would produce a different address. Well, I guess it still wouldn't matter. What's interesting about this is that they say this is still taking into account the main proof-of-work chain. That means there are more changes that need to happen to the main chain in order to communicate with all of the shards. The beginning of this thing says a one-way deposit — including block referencing, and a one-way deposit. So it looks like money can only flow from the main proof-of-work chain to this PoS system, as of this specification, I guess. So that's the migration question we had earlier: is this a migration? It is a migration — a one-way deposit off of the proof-of-work chain onto this proof-of-stake chain. Right. So then this becomes problematic for people who can't afford 32 ETH, basically, because there's no mining-pool situation there. So you've got all your ETH and you get into a pool situation — but is the pool going to own your ETH from now on, because it's going to stake on your behalf? There's going to need to be a creative solution for that.
There's just got to be, yeah, a non-staking way to get money off of the main chain and into a shard. It's a work in progress. It's those large ETH holders, basically — they now have the ability to run multiple validators, and then sell proof-of-stake ether for proof-of-work ether to further create more validators. Right. I don't think that's even necessary. I think it's as simple as creating a withdraw-to-a-shard operation on the main chain. Like, I think it's just that simple: let me withdraw it to the shard, done. Yeah — I don't think you need to go through the hoops of validating just to burn your coins. That's true, though; that's the way this system actually handles inflation and total supply. Because there's no payout scheme yet, it's not even clear whether it's reasonable to be a validator — you don't know how much money you're going to end up making relative to your 32 ETH by participating in validation. Right. Well, any money you make is better than none. And if it's locked up and you can't use it — well, the lock period is a hundred blocks. That's nothing. Come on, you can wait half an hour — not even, if the block times decrease, which is something I think might happen. I feel like the way this is constructed, that's still flexible enough that it shouldn't matter to anybody. Now, for international payment mechanisms, that could be an issue. But say you're transferring money — I don't know. Even then, it's not going to be as fast as SWIFT, but who's bothered by half an hour? Yeah, fourteen seconds...

...per block, say fifteen seconds, times a hundred blocks — yep, there you go, about half an hour. So yeah, that's not terrible. And what's also interesting about this, and I don't think has been discussed much: you really can be pretty well assured that a transaction is confirmed way quicker, just by the way this works, because you don't need to worry about fifty-one percent attacks on a particular shard, and two-thirds attacks are statistically even more unlikely, given that the shard committee is selected at random from across all validators. If they're going to attack a particular shard, they would not only have to be randomly selected to hold two-thirds of that particular shard's committee, but they'd have to be able to profit from that shard being attacked in that way. The likelihood of that happening is stupidly low — ridiculously low. That makes sense. Hmm. Yeah, and they won't know their assignment in advance. Yeah, because they're random. You've got this pool of — what's the minimum number of validators to start with? 31,250 minimum validators, okay, and there's four thousand shards. So, math, math, math — about seven or eight per shard, okay, cool. So let's say there are seven or eight validators per shard. To attack, you have to literally get put in the bucket that's interesting to you, and have two-thirds of your validators put in that bucket, and then be able to execute in the timeframe before anyone notices there's a problem — and there has to be a way to profit from it, and then you have to actually profit from it. I mean, it's very similar to how DFINITY works, except for the source of randomness. Yeah, it really is.
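The intuition that a two-thirds committee capture is "stupidly low" can be checked with a quick binomial calculation. This is a rough sketch — it models each committee seat as an independent draw from the validator pool, which slightly overstates the risk for small pools:

```python
from math import comb


def capture_probability(committee_size: int, attacker_fraction: float) -> float:
    """Probability a randomly sampled committee is >= 2/3 attacker-controlled,
    under a binomial approximation of the random sampling."""
    threshold = -(-2 * committee_size // 3)  # ceil(2n/3)
    return sum(
        comb(committee_size, k)
        * attacker_fraction ** k
        * (1 - attacker_fraction) ** (committee_size - k)
        for k in range(threshold, committee_size + 1)
    )


# Even holding a third of all stake, capturing 2/3 of one 8-member committee
# is unlikely (~2%); with larger committees it becomes astronomically so.
print(capture_probability(8, 1 / 3))
print(capture_probability(128, 1 / 3))
```

This is also why later designs push committee sizes well above eight: the tail probability shrinks exponentially in the committee size.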
I don't even see why they need RANDAO — although it's not bad; in this context it's actually part of the protocol, it's not like DFINITY's. Is DFINITY's patented? Maybe that's why. I mean, I don't know, it doesn't matter. From the comments from the developers and the talks, they're trying to maximize the decentralization of the validation pool, and 32 ETH as a base deposit is a really good way to do that. Now, from there, it's really hard to figure out, because one person who owns a ton of ETH can run their holdings divided by 32 — that's a lot of validators in that pool — but it lets in everyone above that threshold. Like, for instance, if you look at EOS, the validation pool is twenty-one people across that entire network. DFINITY I'm not quite sure about, but it seems to be relatively larger — I don't know what their staking minimum is. So that's the reason for picking such a small number: it's an inclusive number and allows people to participate in validation without having to worry about access to hardware, etcetera, etcetera. What is 32 ETH at the moment? Sixteen hundred? No — sixteen thousand dollars. Okay, yeah, sorry, $16,000. That's not terrible. That's a lot of money, don't get me wrong, but it's not a lot in comparison to the capital required to be a real player in proof-of-work mining.

Cool. So, yeah, fifteen hundred ETH is what it was earlier; 32 is way more reasonable and definitely less prone to centralization, so that's fantastic. You know, this also gets rid of ASIC problems, so that's great — meaning the way proof of stake works, it's just completely not ASIC-susceptible at all. Really, though, it's really hard to evaluate whether or not this will work until we know how slashing works. Yeah, as well as the incentive mechanism. You can't say it's going to be fair — that the incentives, the carrots and sticks, are aligned to make people behave properly — until you know what those rules are. This is just the skeleton framework of the infrastructure: how communication works for the system, and where random number generation actually comes from. You can't say there won't be collusion, or anything like that, until you understand the exact specification of the carrots and sticks. Mmm. So for those who are curious about that type of thing, they're going to have to wait. Yeah. But let's talk about the benefits real quick. Unlike DFINITY, which is cryptographically set up — it is based off of pretty solid cryptography problems — a network like that has a risk: let's just say there's a sudden advancement in quantum computing. Suddenly their cryptography also has to be quantum-safe. This can happen in a myriad of ways, or a particular algorithm gets broken for whatever reason, which is not unheard of. It could open attack vectors. Whereas this is an incentivization model, and the cryptography is almost the secondary part of it, for validation and signing purposes. The actual consensus mechanism is built around the idea of incentivization rather than pure cryptography. I see.
I feel as though this is actually safer in the hundred-year view than something like DFINITY or Kadena would be, meaning you don't have to upgrade your protocols; you do have to worry about collusion of validators and that kind of thing — maybe with Kadena too, I'm not so sure about that. So I like that part of it. The other part is that, currently, storage of blockchains is ridiculous, and one of the benefits of this system as it stands is — I think they said it takes four hundred megabytes to store the beacon chain state, even in the most extreme scenario. Yeah, which is great, especially because that's not even too much of an issue right now. We're going to be looking at things like: what devices are able to participate in this network — bytes per validator, I guess, overhead analysis. Now, if you follow the beacon chain, you do have to store all the blocks that you're then validating, and the attestations as they come in. But — don't get me wrong — the system is checkpointed. Yep. So when you're syncing, you only care about the previous checkpoint. Even the most extreme scenario seems manageable. Now, if you wanted to maintain the entire state of all history across all shards, I'm not sure how much that load is going to be; I'd have to do some side calculations to figure that out. Wait — per block, per epoch, I think it's four hundred megabytes of state, refreshed every thirty minutes or so. That would be with full maximal utilization — everyone on the network is a validator, which of course is never going to happen, because then who's spending? Let's say you have a supply of a hundred and twenty-eight million...

...ether. I mean, somebody's got to be doing something on the network for this to even work. But let's just say so: about a hundred bytes per validator. That's just the current state — and keep in mind that's the current state, not the deltas of the state. You realize this is all based on the assumption of a hundred and twenty-eight million ether as a supply. Yep. Yeah, and even the most extreme case doesn't account for inflation. That's what I'm talking about — inflation, how the total supply changes over time. The movement to this may mean that you basically stop inflation altogether and you have a limited amount of total ether ever in existence, at least until a serious hard fork happens, so you only have shuffling of ether as opposed to introduction of new ether. So yeah, even a single beacon chain block in the most extreme case is 9,140 bytes — nine point one kilobytes. That's fine. Yeah. And with pruning, and the fact that you don't need to maintain a full node on, say, your Raspberry Pi, I see this allowing for participation in a blockchain on mobile devices. So — four hundred megabytes for the entire chain state. That includes all shards? Well, the entire current state of the chain. Yes, four hundred megabytes, and that includes all shards. Okay — so, actually, the number of shards is a function of the number of validators. Yeah, that's interesting to know. That means it could wax and wane as time goes by, depending on the number of people participating in validation.
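The back-of-the-envelope storage numbers quoted here hang together if you assume roughly 100 bytes of beacon state per validator — a figure inferred from the 400 MB / four-million-validator worst case discussed, not stated explicitly in the notes:

```python
BYTES_PER_VALIDATOR = 100  # assumption inferred from the 400 MB worst-case figure


def beacon_state_mb(num_validators: int) -> float:
    """Approximate beacon chain state size in megabytes."""
    return num_validators * BYTES_PER_VALIDATOR / 1_000_000


print(beacon_state_mb(4_000_000))  # worst case, everyone stakes: 400.0 MB
print(beacon_state_mb(31_250))     # minimal validator set: 3.125 MB
```

The 3.1 MB figure lines up with the "three point three megabytes or so" minimum mentioned below, which suggests the per-validator cost estimate is in the right ballpark.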
I mean, that's a problem, because if I launch my application on shard 4,000 and shard 4,000 disappears — either I need to be able to stake in a particular shard, or... I think what's happening is that the shard count is just set to the max, which is going to be around four thousand shards, and then you distribute across those. I see. Yeah, that makes more sense. I think I see what you're talking about — "the maximum validators... approximately four thousand shards." Okay, I think they do fix the number of shards that the protocol allows. It's just saying: as the number of validators moves, you have to account for what each validator is putting into the beacon chain. Yeah, and that changes with the number of validators. So it's always going to be the same number of shards, but the amount of attestation data across those shards changes as the validator set changes. As more and more people become validators, more and more people are submitting information to the beacon chain. In the worst-case scenario, where everyone who can be a validator is one, that's the full four hundred megabytes; in the minimum-case scenario it's three point three megabytes or so. Fair. And those are just crosslink records for the crystallized state? Yes, and you only need to know the current one for that, too. You could literally discard it if you really wanted to, and after each epoch get the next one. The processing of all this information seems to be relatively insignificant — he said it's about one second for the four hundred megabytes on a...

...laptop. Yeah, uh huh — which is why they're using BLAKE, because it's much faster. Yeah, BLAKE is a cryptographic hash — see, Ethereum currently uses a scheme called Keccak, the SHA-3 family, and BLAKE2 is a faster hashing algorithm, more in the mold of SHA-256. Yep, that makes sense. Yep. So, long and short of it: I think it's less susceptible to long-term damage due to increases in computing power — or a revolution in computing power. Who knows, maybe alien technology drops from the sky from some super-advanced species and all of our older junk breaks. That would suck, but this isn't predicated on that; it doesn't matter if that happens. Yeah, whatever. I like that Rick and Morty episode where he says, watch me take out an entire intergalactic civilization by changing a one to a zero — he sets the value of their currency to zero and everybody starts killing each other. Yeah, whatever, right — we'll put the geek stuff away. I feel like, because this is based on a game-theoretic incentivization model, it has a different set of attack vectors than a purely cryptographic model would, and if we could bash out a model that seems to work really well — which we could do iteratively, by the way — we can start seeing problems, or theorizing about problems as they come, and fix them. I feel like that's easier to do than "oh crap, all the cryptography broke," you know what I mean? Which is something I probably shouldn't be as afraid of as I am, but I kind of am.
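Python's standard hashlib ships both hash families being compared here, so the distinction is easy to poke at. Note that hashlib's blake2b is a close relative of the BLAKE the notes mention (not necessarily the exact variant), and Ethereum's keccak256 is the pre-standardization Keccak, which differs in padding from NIST's sha3_256:

```python
import hashlib

data = b"beacon chain block header"

# BLAKE2b, truncated to a 32-byte digest; notably fast in software
print(hashlib.blake2b(data, digest_size=32).hexdigest())

# SHA-3 (Keccak-based), the standardized cousin of Ethereum's hash
print(hashlib.sha3_256(data).hexdigest())
```

For a beacon chain that must rehash large validator sets every epoch, a software-fast hash like BLAKE2 is a pragmatic pick over the slower sponge-based Keccak.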
A lot of the conversations I've had in the past year have suddenly been bringing up quantum-secure blockchain technology. It seems to be a concern for a significant number of people — a significant number of significant people — and so I'm kind of like, okay, this might be on the horizon in the next five to ten years. You don't think so? No, I don't — not even at a state-actor level. Quantum computers breaking traditional cryptography? Yes, that's what I'm talking about. That's not going to be a problem in the next five or ten years. Right — we're going to be able to judge whether or not it will be an issue in the next five or ten years, but it won't actually be an issue in that window, not even for a state actor. No? Okay, because I feel like the Manhattan Projects of the world — this is where I'm going to get a little weird again — the Manhattan Project for a world where we're having trade wars instead of weapon wars would be something that can completely decrypt encrypted documents, expose the secrets of a nation, air out the dirty laundry. I feel like that's where the focus would be. I mean, we pulled the atom bomb together pretty quickly, and if we're kind of in that scenario right now, it's starting to feel like the arms race of today is figuring out quantum computing. Yeah, and then it would be a state actor, which means resources. Of course, where we are now is so far behind, in terms of even a clear vision of what it's going to look like, that...

...it's not going to be — even, frankly... here's the other thing. If I were a state actor and had pulled an Enigma-machine moment and just hid the fact I had it, I would start leaning pretty heavily on the coercion side instead of the exposing side, for at least a few years. Because of the bursty nature of this kind of revolution — meaning that as soon as everybody knows we have it, everybody has it, but as long as we're first, we've probably got it safely locked down for a few years. I don't know. I feel like it's actually a concern that somebody could have that aha moment that we're all missing, and that leads to a revolution for one particular party over another, and that could definitely do damage to a system like this. All right, so let's just say, okay, screw that — whatever, we get quantum-secure cryptography blockchains in the future. What's the next step? It's not like we stop at quantum computers; there's got to be a next step after that. We're going to have another revolution down the road. So I kind of feel like using cryptography as the longest-term solution is not necessarily the best solution, and we probably should be exploring more game theory and incentivization models, so that it would take a significant number of, you know, value-suicidal people to actually harm the network. Yeah — don't let technology be the deciding factor of whether or not it works; let humans be the deciding factor of whether or not it works. And if everybody's willing to break it, then it should be broken. All right. Well, that's an interesting way to wrap up this interesting episode. I hope other people got as much out of it as we did as we talked through what this is supposed to look like.
If you have questions, comments, or concerns, contact us any way you can figure out how — mostly on Twitter or through our emails. I'm petty at hashingitout dot stream; Collin's collin at hashingitout dot stream. Or else hit me up on Twitter: Collin is at CollinCusce, that's C-U-S-C-E, and I'm at corpetty, C-O-R-P-E-T-T-Y. We will see you next week.
