Hashing It Out

Episode 98

Hashing It Out #98 - ETH 2.0 Panel After The Launch

ABOUT THIS EPISODE

Jaye and Corey lead a panel discussion about the aftermath of the Beacon Chain launch for ETH 2.0. Panel members are Ben Edgington (ConsenSys) and Gregory Markou (ChainSafe).

Links: ETH 2.0 Panel

Jaye Harrill

Ben Edgington

Gregory Markou

Sponsor Links

The Hashing It Out Social Media

Hey, what's up? So, Avalanche. Let's talk about it. What's an avalanche? Snow comes down real fast and fierce, gains momentum. But I'm not talking about the natural disaster; I guess it's not really a disaster if no one's around. Anyway, Avalanche. What is it? You've heard about it or not, and you're gonna hear some more. It's an open-source platform for launching decentralized finance applications. Right, DeFi, that's what you want. Developers who build on Avalanche can easily create powerful, reliable, secure applications and custom blockchain networks with complex rule sets, or build on an existing private or public subnet. I think what you should do right now is stop what you're doing, even if it's listening to this podcast. Pull over, stop in a parking lot somewhere, and go to avalabs.org to learn more. That's avalabs.org.

Welcome to Hashing It Out, a podcast where we talk to the tech innovators behind blockchain infrastructure and decentralized networks. We dive into the weeds to get at why and how people build this technology and the problems they face along the way. Come listen and learn from the best in the business so you can join their ranks.

Everybody, welcome back to Hashing It Out, for the first time on video. This is going to be the first video content that we produce, which is going to be an Eth2 panel of sorts. We did one of these before the Ethereum 2.0 mainnet launched, about a year ago now. Was it a year? Holy cow, it doesn't feel like that long at all. We were already under lockdown then, right? No, this was before. Wow, okay. Well, we did one of these a long time ago, apparently, and on that one we drank wine and there was a wine tasting to go with it, so today some of us are doing the same, depending on the time zone. I'm drinking 19 Crimes Cali Red, the one with the picture of Snoop Dogg on it, and I'll let the others tell you what they're potentially drinking, or not drinking, as they introduce themselves. Jaye, why don't you go?

Hi, I'm Jaye. I'm drinking coffee from the local A&W. It's a little early for me; I don't have a problem with drinking in the morning, I'm just sticking to coffee. I've been in the space for a long time, as many of you know, and I work with Quantstamp, where we do security. What we had done a few months back is talk about Eth2 as part of a meetup, and it's good to keep talking about Eth2, so I do a roughly monthly little get-together. Keep an eye on my Twitter and you can check that out. Excited to be here.

We'll put that in the description, and we'll try to fit as many links as we can, so just poke us to take notes and make sure we put in the things we say we're going to. Great. Greg, say what's up.

Hey, hey. Yeah, I'm drinking water. It's like one o'clock here and I've got a lot of work to do today.

Also, the last time I spilled wine all over my keyboard, so I don't want to go through another keyboard. I was freaking out about my keyboard while someone was trying to walk us through a wine tasting. And Jaye, I told Jaye not to send me questions, and she gave me like the hardest question, about CeFi, which I had to google: what's CeFi? But yeah, excited. For those that don't know me, I'm the CTO and one of the co-founders of ChainSafe, and we run and maintain the Lodestar client, which is running a full sync on mainnet right now as of today, which, for the sake of the December launch, is an awesome bit.

I am Ben Edgington. I've got myself a Californian Malbec; it's six o'clock in the evening here, I'm in the UK, so definitely wine o'clock. I have been with ConsenSys for just over three years working on Eth2, for most of that time before it was even called Eth2 and it was just sharding and scalability, first in a research capacity, and I'm now product owner, or product manager, for our Teku Eth2 client. The niche that we're carving out is mostly amongst the institutional stakers, but it's a perfectly fine client to run at home. I've got one running right now, earning rewards, I hope. I'd better check.

That's part of the discussion I'm going to bring up, a few of those things you just mentioned. And I'm Corey Petty. I work for Status, and Status runs the Nimbus client. Jaye and I are part of the Hashing It Out podcast, where we talk about generally anything decentralization, blockchain, whatever we really want to talk about that's interesting to us. We try to dig in as much as we can and we're not very shy about being technical in any way. So welcome to the show; it's basically us talking, and I'm going to kick off with: Ethereum 2.0 mainnet launched. The beacon chain, as of December 1st, is officially live. What happened was that enough money was deposited into the Ethereum 1 deposit contract to reach and pass the threshold, which launched the genesis block of the Eth2 beacon chain. That's the first phase of Ethereum 2.0, the effort to migrate Ethereum to proof of stake and change the architecture so that the number of transactions can rise drastically while the burden on any given validator is minimized. There are a lot of implications of this kind of change, and we can talk about some of them or not, but what I wanted to talk about is: now that it's started, what do we expect? What has our experience been so far as people who develop and run validators? What are the questions people should be aware of or be asking? And what are the different client teams, which aren't fully represented here, since there are more client teams running very good software, focusing on as we continue to develop and get ready for further phases of Ethereum 2.0?
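(For reference, a quick back-of-the-envelope check of the genesis trigger described above. This is only a sketch using the published phase 0 mainnet constants; the variable names are illustrative.)

```python
# Sketch: the genesis condition for the Eth2 beacon chain (phase 0 mainnet config).
# Genesis required a minimum number of 32 ETH deposits in the Eth1 deposit contract
# (plus a minimum genesis time); once crossed, the genesis state could be computed.

GWEI_PER_ETH = 10**9
MAX_EFFECTIVE_BALANCE_GWEI = 32 * GWEI_PER_ETH       # one validator = 32 ETH
MIN_GENESIS_ACTIVE_VALIDATOR_COUNT = 16_384          # mainnet constant

min_genesis_eth = MIN_GENESIS_ACTIVE_VALIDATOR_COUNT * 32
print(f"Minimum ETH in the deposit contract to trigger genesis: {min_genesis_eth:,} ETH")
# -> 524,288 ETH (16,384 validators x 32 ETH), the threshold crossed shortly
#    before the December 1st, 2020 launch.
```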

So I'll kick off. I'm running four validators myself on a single node, which is something I don't think a lot of people understand: a single node can run multiple validators, and there are consequences to that. Have y'all experienced that so far when running support for the people who are running your software?

Yeah, it's interesting, this running multiple validators per node thing. I think the very original concept was, you know, one CPU, one vote, going right back to Satoshi's vision. That kind of got lost in proof of work, and so here you have one validator as the unit. But the devs were too cunning, and we found ways to run multiple validators per node. Adding validators doesn't really add much workload once you've got a beacon node running: we're happily running two-thousand-plus validators per beacon node and it doesn't increase the load a great deal. In that sense I feel we kind of missed a step on the vision of one CPU, one vote, but it seems really hard to achieve, and it's a decent compromise position, because running a beacon node is pretty lightweight, so we can distribute and get the decentralization we want. In any case, that probably doesn't answer your question, Corey.

As of today, running multiple validators on one beacon node doesn't increase the load on your machine very much, but it does have consequences. Say I need to make an upgrade and I need to restart my node, or I have a power outage and my node goes offline: the amount of work I miss while offline is directly proportional to how many validators are running on that single machine. You might say, oh, that sounds like a lot of slashing conditions. It's not slashing; it's the money you miss by not providing quality attestations. For the amount of time you're offline you lose that income, but those penalties are not nearly as severe as a slashing condition, which is what happens when you provably do something wrong, and that's something I really want to get into later, because we've already seen it happen within the first ten days, even though most of the people running clients right now should know what they're doing. Greg, do you have any input on this, from your side of things?

Yeah. From a client-team perspective we're in a bit of a unique position. For those who don't know, we're a full TypeScript implementation, leveraging some AssemblyScript to get Wasm in there, and because we don't have a lot of people actually running validators on Lodestar yet (we run our own, and I'll get into that in a second), our support actually comes from our utility libraries. The deposit UI that I'm assuming almost everybody used was using our browser BLS library and, I believe, also our SSZ library last time I checked, the serialization library and the key-generation libraries. So our focus was there, working with the EF folks and Carl to ensure that everything was operating correctly; our support happened mostly before it all launched, helping make sure everything conformed. When it came to actually running validators, right now we're running a Lighthouse setup, and we'll migrate to Lodestar once we're confident and ready to show it publicly.

Some of the things we've noticed that have been interesting are around upgrades. The biggest issue we've run into is actually our Eth1 node falling out of sync. We run Geth, that was our choice, and we have another one as backup, but when it fell out of sync, what ended up happening is that our machine peaked on RAM and we ran out of memory, so we had to do a hot swap and get it upgraded, which meant some downtime.
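(A rough sketch of Corey's point about downtime cost scaling with the number of validators on one machine. The reward formula follows the phase 0 spec; the total-stake figure is an assumption for illustration, and real losses also include forgone inclusion rewards, so treat the numbers as ballpark.)

```python
import math

GWEI_PER_ETH = 10**9
BASE_REWARD_FACTOR = 64
BASE_REWARDS_PER_EPOCH = 4
SECONDS_PER_EPOCH = 32 * 12          # 32 slots x 12 s = 6.4 minutes

def base_reward_gwei(effective_balance_gwei: int, total_active_balance_gwei: int) -> int:
    # Phase 0: base_reward = effective_balance * BASE_REWARD_FACTOR
    #          // integer_sqrt(total_active_balance) // BASE_REWARDS_PER_EPOCH
    return (effective_balance_gwei * BASE_REWARD_FACTOR
            // math.isqrt(total_active_balance_gwei) // BASE_REWARDS_PER_EPOCH)

# Assumption: roughly 1,000,000 ETH of total active stake (about the figure quoted
# later in this episode), and every validator on the node holds the full 32 ETH.
total_stake = 1_000_000 * GWEI_PER_ETH
br = base_reward_gwei(32 * GWEI_PER_ETH, total_stake)

# While offline, a validator is penalised one base reward per epoch for each of the
# three attestation components it fails to match (source, target, head).
penalty_per_validator_per_hour = 3 * br * (3600 / SECONDS_PER_EPOCH)

for n_validators in (1, 10, 100):
    eth_lost = n_validators * penalty_per_validator_per_hour / GWEI_PER_ETH
    print(f"{n_validators:>3} validators offline for 1 hour ~ {eth_lost:.5f} ETH in penalties")
```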

The other big thing we found was that the Geth node took too long to sync, so we had to fall back to an Infura-like service to get something going again; the beacon node itself synced pretty fast. When it comes to actually performing the upgrade, we found that to be really, really easy. We run a Docker setup, so it's quite literally docker down, docker pull the specific images, and docker back up with those specific image names. We run that as basically one command, and we're talking seconds, so I found the downtime from actually upgrading quite nice; it's not that bad. The biggest issue was that we missed some attestations during that period because our logging services weren't stood up correctly. We found out after the fact that we weren't covering a certain edge case: there's no log actually being emitted saying "your validator missed that attestation", so it's not something we could grep out of the logs into Datadog, which is our monitoring service. That's something I'm working out with the Lighthouse guys: how can we get more verbose logging that says, hey, this didn't work, your attestations aren't going through? I'd guess that's the biggest challenge people are having right now, because from what I've seen a lot of people are just on beaconcha.in or beaconscan or whatever, looking through the list: did I miss anything in the last hour? How do we handle that? There aren't many better ways to be notified.

I remember when I was first running an Eth1 mining node, I would try to parse out specific event logs just so I could leave it, come back, and see what was going on. Maybe most people know how to do that, but there's a good chance they don't.

We've got this in Teku. It knows when it should make an attestation, and that attestation should show up on chain within thirty-two slots, which is about six minutes; if it doesn't, we have metrics to track it and we can alert on them through the usual Prometheus and Grafana alerting. So it's possible to do. It's delayed, of course, because you have to wait to see whether it appears on chain. We can also track the distance at which attestations are included, whether they're included very quickly or delayed because your node is slow, and that's a good indicator of the health of your node: if it's short of memory or CPU, that distance will increase.
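(A minimal monitoring sketch along the lines Greg and Ben describe: rather than grepping client logs or refreshing a block explorer, you can poll each validator's balance through the standardized beacon node HTTP API and alert when it stops growing, since a healthy attesting validator's balance should tick up roughly every epoch. The endpoint path and response shape follow the common Eth2 beacon API, but check your own client's documentation; the port, validator indices, and the alert hook are assumptions.)

```python
import json
import time
import urllib.request

BEACON_API = "http://localhost:5052"      # assumption: a local beacon node exposing the standard REST API
VALIDATOR_INDICES = ["1234", "1235"]      # assumption: your validators' indices (or 0x-prefixed pubkeys)
EPOCH_SECONDS = 32 * 12                   # 6.4 minutes

def get_balance_gwei(validator_id: str) -> int:
    # Standard beacon API: returns status, balance, and validator details for one validator.
    url = f"{BEACON_API}/eth/v1/beacon/states/head/validators/{validator_id}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return int(data["data"]["balance"])

def alert(message: str) -> None:
    # Stub: wire this up to Datadog, PagerDuty, a chat bot, etc.
    print("ALERT:", message)

last_seen = {v: get_balance_gwei(v) for v in VALIDATOR_INDICES}
while True:
    time.sleep(3 * EPOCH_SECONDS)         # a few epochs is enough to smooth out noise
    for v in VALIDATOR_INDICES:
        balance = get_balance_gwei(v)
        if balance <= last_seen[v]:
            alert(f"validator {v}: balance not increasing ({balance} gwei), missed attestations?")
        last_seen[v] = balance
```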
Before we get deeper into this conversation, to keep it as self-contained as possible, let's give a short overview. As a user, I deposit 32 ETH into the Eth1 deposit contract. After that's accepted, I then have keys associated with a validator on the Eth2 beacon chain, a completely new blockchain, but the same asset. Maybe we'll get into that later, but it's a different blockchain, and it depends on Eth1. So okay, you have a validator on Eth2, awesome. What is the responsibility of a single validator? What is it doing, and what is it trying to make sure it does correctly? Would you like to answer that, Greg?

Yeah. A validator is kind of dumb, to some degree, because all it's really doing is making a GET request to the beacon node, asking the beacon node for a set of information about the current state to know: am I supposed to sign something? Whether that be an attestation or proposing a block, those are the two main roles, attesting and proposing blocks. Proposing a block gives a higher reward than attesting, and as we discussed already, missing attestations isn't the end of the world; the slashing conditions come from things like double proposals rather than missed work. So the validators themselves aren't doing a lot of the legwork, at least in my opinion. A lot of the legwork relies on the beacon node being able to produce enough data, and every so many seconds the validator is polling that beacon node to know: is it time to do something? That's my really distilled version of it.

So you have this beacon node, node software that's running and gathering information from multiple sources. It's syncing with the other beacon nodes on the network to make sure it understands what's going on within the beacon chain, and it's pulling information from Ethereum 1 to understand what the active validator set is, so it can perform the appropriate randomness. Doing that is a lot of network traffic, and eventually it will become more network traffic as we subdivide the network into shards and nodes have to hop onto different subnets very quickly and manage the peers associated with all of those; but that's a conversation for another Hashing It Out later down the line. So the beacon node needs to keep up with all of the network traffic across the different networks and make sure that when the network wants a validator to do something, hey, validator, sign this message, propose this block with this information, and so on, it does that and submits it to the network in a reasonable amount of time, so the rest of the network can attest to those things or confirm they were done correctly. That's why, as Ben was saying, the number of validators on any given node can scale really easily: the validator isn't doing much work, it's just signing things when it's asked to. The real work is in the node software keeping up with all the different network information, putting it in the right place, keeping track of what the validators have already done so they don't redo it, and alerting when something is wrong. Ben, would you agree with that summary?

Yeah. I'd say the work of the validators is announcing to the network: this is my view of the world. We've got all these thousands of beacon nodes distributed across the world, and they all have a slightly different view of the network; they all see different network traffic, peer to peer, at different times, and they're all slightly out of sync with each other. My validator is periodically called upon to say, this is my view of the network, and to sign off on it; putting down the stake is what gives you the right to sign off on that. That gets broadcast to the network, and you receive the views from the rest of the network. Then the beacon node can adjust its view of the world to bring it in line with everyone else's view of the world. And we need a majority of validators to agree, because if somebody wants to spoof a view of the world, say Corey wants to promote his own view of what the network looks like, then he needs a majority, or at least a third of the network, to achieve that.
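(A highly simplified sketch of the attest-and-propose loop Greg and Ben describe. This is conceptual pseudocode rather than any particular client's API: the beacon_node and signer calls are stand-ins, and real validator clients also handle aggregation duties, precise timing within a slot, and slashing protection on top of this.)

```python
# Conceptual sketch of a validator client's main loop (all names are illustrative).

def run_validator(beacon_node, signer, validator_pubkey):
    while True:
        epoch = beacon_node.current_epoch()

        # 1. Ask the beacon node what this validator is supposed to do this epoch.
        attester_duty = beacon_node.get_attester_duty(epoch, validator_pubkey)
        proposer_slots = beacon_node.get_proposer_slots(epoch, validator_pubkey)

        for slot in beacon_node.slots_in_epoch(epoch):
            beacon_node.wait_for_slot(slot)

            # 2. Propose a block if (rarely) selected as proposer for this slot.
            if slot in proposer_slots:
                block = beacon_node.produce_block(slot, validator_pubkey)
                beacon_node.publish_block(signer.sign_block(block))

            # 3. Attest once per epoch at the assigned slot: sign the node's
            #    current view of the chain (source/target/head) and submit it.
            if attester_duty is not None and slot == attester_duty.slot:
                attestation_data = beacon_node.produce_attestation_data(
                    slot, attester_duty.committee_index)
                beacon_node.publish_attestation(
                    signer.sign_attestation(attestation_data))
```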

I'm going to press a little further on both of you. Can you distill all of what you said into a tweet? I will allow up to three tweets, as if it were a thread. Go, Greg.

I knew you were going to do that. Okay, here we go. A validator has two jobs: simply attesting to existing on-chain data, and proposing new blocks. Within those two scopes of work, they receive all their information from the beacon chain; they don't communicate with any external body except the beacon chain node that you have told them to communicate with. At that point it's up to the beacon chain node to disseminate the information it gathers to the network and communicate with all the other peers, so the validator can poll the beacon chain node, know what to sign and when to sign it, and submit that back to the beacon chain node to be propagated throughout the network.

That's really good. Ben?

Yeah, what Greg said. Validators propose a view of the network and either agree or disagree with each other.

That was a lot better; that is certainly tweet-length. Okay, so let's assume for a minute that whoever's listening is maybe less familiar with Eth2 than they should be, or than their experience suggests. There's obviously a difference between an Eth1 node, a validator, a beacon node, and the beacon chain deposit contract. What is the relationship between those four?

Right, so let me see. The deposit contract runs on Ethereum 1, and it is the register of all of the stakes that have been placed so far, in 32 ETH increments. That's the source of truth for who is eligible to be a validator. The beacon node contains all of the state of the beacon chain and is the point of communication with the rest of the network. Beacon nodes talk to each other, and they watch the deposit contract; in order to watch the deposit contract you need an Ethereum 1 node, running Geth or OpenEthereum or Nethermind or whatever, and really all that's doing, in the current context of Ethereum 2, is providing a view of the deposit contract to Ethereum 2. So beacon nodes are watching the deposit contract, they're talking amongst each other, and they're maintaining the state. Then, hanging off each beacon node, you've got a number of validators which are periodically attesting to the state of the local beacon node and telling the network: this is my view of what's happening. Each validator has a private key associated with it: when you deposit into the contract, you create a key pair whose public key is registered in the contract and also with your validator, and that ties the whole thing together and closes the circle. So those are your four components, essentially.

Perfect. I think that provides a little more clarity for people.
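(A tiny, purely illustrative data-model sketch of the four components Ben lists and how they point at each other; nothing here corresponds to a real client's types.)

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DepositContract:            # lives on Eth1: the register of 32 ETH stakes
    deposits: List[str] = field(default_factory=list)   # deposited validator pubkeys

@dataclass
class Eth1Node:                   # Geth / OpenEthereum / Nethermind / ...
    contract: DepositContract     # its only job here: give Eth2 a view of the deposits

@dataclass
class Validator:                  # holds a signing key matching a deposit in the contract
    pubkey: str

@dataclass
class BeaconNode:                 # holds beacon chain state, talks to peers, watches Eth1
    eth1: Eth1Node
    peers: List["BeaconNode"] = field(default_factory=list)
    validators: List[Validator] = field(default_factory=list)   # hang off this node

# Wiring it up, per Ben's description:
contract = DepositContract(deposits=["0xabc..."])
beacon = BeaconNode(eth1=Eth1Node(contract=contract),
                    validators=[Validator(pubkey="0xabc...")])
```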

So anyone who wants to run a beacon node also needs to run an Eth1 node for it to watch the contract, and then they can go further and run validators as well.

You need Eth1 data from somewhere. Depending on how much you trust other people, you can consume it from someone else's Eth1 node, and that's a very different conversation, but you have to have it. Think of the Ethereum 1 data as an oracle for your beacon node; where you get that information and how much you trust it is up to you. And maybe it's reasonable to talk about the consequences of not having proper data there. If your Eth1 data were compromised in some way, shape, or form, what could happen to your validator?

Yeah, good point. One of the biggest challenges we've had getting people up and running has been their Eth1 nodes. It's still not lightweight in terms of resources, and especially around genesis people weren't in sync yet. Also, when an Ethereum 2 client starts up, it has to slurp a lot of data in the early moments, all the deposit contract data up to now, out of the Eth1 node, and this proved to be a bit too much for some of the nodes we were using, so we had to dial back the rate at which we sucked information out of them just so they could stay responsive. I think we're over that now. The only thing you can't do, if you can't see the deposit contract via your Eth1 node, is propose a block. When you propose a block it's mandatory in the protocol to include any pending deposits, up to, I think, sixteen deposits, and if you miss any out, whether because you don't know they're there or because you're too lazy, then your block won't be valid. So that's basically the only consequence: whenever you propose a block, it won't be valid. You can still earn rewards through attestations; those have no relationship with the Eth1 node at all, for the purposes of right now.

I'm going to talk about slashing issues later, but if you get slashed, that's a harsher punishment for your validator. We'll talk about how you get slashed later, but is it possible to be slashed by producing an invalid block because you have wrong or misaligned data from Eth1? Go, Greg.

That's a good one. I'm going to say yes, but I almost want to go look at the spec and get back to you in two minutes, because this is really on the fine edge of that question. The reason I'm going to say yes is that the block would be considered invalid, and that means it would be voted against from an attestation perspective. But you know what I mean, BRB.

In that time period, I think it's reasonable to start talking about slashable events on the network, and I'm going to pre-empt this: proposing an invalid block is not a slashable offence. All you lose is your block reward. The block reward is a decent reward, worth as much as a pile of attestations, probably a bit more actually, but it's not a massive loss if you miss it. So if you have invalid data, your block is considered invalid by the network and you lose your block reward, but there's no further punishment and certainly no slashing.
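(The deposit-inclusion rule Ben mentions is an explicit check in the phase 0 state transition: a block must include every pending deposit it can, up to a maximum of sixteen per block. A condensed sketch of that check, paraphrased from the spec's block processing; the surrounding state and block objects are stand-ins.)

```python
MAX_DEPOSITS = 16   # per block, phase 0 mainnet config

def check_block_deposits(state, block_body) -> None:
    # How many deposits the chain knows about (from Eth1) but has not yet processed.
    pending = state.eth1_data.deposit_count - state.eth1_deposit_index

    # A valid block MUST include min(MAX_DEPOSITS, pending) deposits, no fewer.
    # So a proposer whose Eth1 node cannot see the deposit contract (or is behind)
    # may produce a block that the rest of the network simply rejects as invalid:
    # no slashing, but the block reward is lost.
    expected = min(MAX_DEPOSITS, pending)
    assert len(block_body.deposits) == expected, (
        f"block includes {len(block_body.deposits)} deposits, expected {expected}")
```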

If you propose two conflicting blocks at the same height, then you can be slashed.

That's what I wanted to get into. It's been ten days, and there have already been quite a few slashings, more than I expected, and based on the information I saw last time I checked, almost all of them are due to people running the same keys on multiple nodes, right?

Yeah. There have been five events, unless there's been another one today. Four of them are individuals who had single nodes, and in each case they were running the same private key on two validators, and some of them have confessed to doing this. Basically it was meant as a backup: they wanted high uptime, so if one of their validators went down, they wanted the other on hot standby to fail over to. But the result is that these validators eventually contradicted each other, and that's a slashable offence. It's not so bad currently: you get fined immediately, you have some extra penalties that aren't very large, and you probably lose about 0.3 or 0.4 ETH in total. The big consequence is that you're then kicked out as a validator, and your remaining stake, 31.6 ETH or whatever, is locked up until we merge Eth1 and Eth2, which could be a year, a year and a half, two years away. So it's then unproductive. You don't lose a huge amount by getting slashed right now, since we reduced the penalties for the first few months, but you are then locked into the system, unproductive, for the foreseeable future.

I didn't realize that getting kicked out of the network currently locks you out for that long; I thought it was just for a period of time. But that's because we're live now, right? This is just phase 0: the beacon chain isn't doing much other than what it's supposed to do, which is a function the other phases depend on. We have to make sure this works appropriately and provides incentive for people to run it. But the ETH that lives on Eth2 went through what is currently a one-way function, and it's going to be a while, until we've finished the other phases of Eth2, before you can have any utility whatsoever with the ETH associated with it.

Yeah. Just to finish off on the slashings: the fifth event was ten nodes belonging to a single staking service who had done a sort of homebrew anti-slashing solution. One of their nodes got out of sync, they managed to miss the slashing protection, they didn't detect the condition, and they ended up with ten nodes being slashed. They pretty quickly turned everything off and fixed it before restarting.

And after looking at the spec: yeah, Ben's accurate. As long as you're proposing what you think is accurate, you're going to be fine from a slashing-condition standpoint. Slashing conditions are about intent; you're trying to be malicious. I think that's probably the better way to describe it: you're not going to get slashed because something just didn't work right while you were collecting data, you're going to get slashed because you tried to do something malicious to the actual protocol.

I think that's not quite right; it's more that it's provable that you've done something that's negative toward the network.
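(To put a number on the penalties quoted above: the phase 0 mainnet configuration launched with a softened minimum slashing penalty. A small sketch of the immediate hit; the 0.3 to 0.4 ETH figure Ben gives is roughly this plus the offline-style penalties accrued before the slashed validator becomes withdrawable.)

```python
GWEI_PER_ETH = 10**9
EFFECTIVE_BALANCE = 32 * GWEI_PER_ETH

# Phase 0 mainnet launched with a softened quotient of 128; the spec's long-term
# intent was a harsher 32, i.e. a 1 ETH minimum penalty.
MIN_SLASHING_PENALTY_QUOTIENT_GENESIS = 128

initial_penalty = EFFECTIVE_BALANCE // MIN_SLASHING_PENALTY_QUOTIENT_GENESIS
print(f"Immediate slashing penalty at genesis: {initial_penalty / GWEI_PER_ETH} ETH")   # 0.25 ETH
```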

It's interesting, though, because as it currently stands, this has mostly been, I don't want to call it incompetence, but bad devops practices, over-optimization, a lack of understanding of the slashing conditions, things like that. No one's actively trying to take over the network.

Yeah, you don't get slashed unless the network can prove that you've done something bad, something that looks like a malicious act as opposed to just incomplete information.

Yeah, and that's the point: the network can't tell whether you're being evil or incompetent. It has no idea. So we do have a concept of correlated penalties: if you see a lot of validators simultaneously breaking the rules, the penalties are much higher, because that looks like a coordinated attack, which is going to be much more dangerous. But odd one-off events are very lightly penalized, for that reason.

I'd like to put this into perspective for some of the listeners; I actually asked Corey about this. For those who are thinking of running a validator and want to understand why you shouldn't be malicious, let's put it in monetary terms. You need to stake 32 ETH, and I won't quote what one ETH is sitting at in USD today because I never check, but to put it in perspective: if you get dropped from the network, that thirty-two-ish ETH is going to sit in that deposit contract for the next two years roughly, or longer depending on how long the later phases take, and you will not be receiving any APY on it. The 32 ETH would not be working for you at all; it would simply be locked. So maybe it would be good to talk a little bit about this. It sounds like some of these people were trying to do optimizations or mitigations, but they were thinking in selfish terms rather than in network terms, and it would be good to get into how someone should be thinking about this. If they're thinking about optimizations, it sounds like they should be thinking simpler: just make sure it's up, and don't overcomplicate it.

Yeah. What is currently the optimal way to run a validator and beacon node? Dedicated hardware that has a few times the computational resources needed to run a beacon node, which is relatively light. If you think about a Nimbus node, which is one of the lightest on resources on the network, you can run it on a Pi 4. I don't recommend it. I recommend something with four cores, eight gigabytes of memory, and a good amount of quality hard drive space, because the consumption of hard drive space is not trivial. Running on something like a Raspberry Pi 4 is not great, because those usually run off flash memory, which isn't robust.

The main thing is you want something that's going to be online for a long period of time, stable, and dedicated, with a good internet connection, and, like you said earlier, a good source for Eth1, which, if you're going to run it yourself, is a significantly larger amount of computational resources. What you're really trying to optimize is simple: you run the beacon node software and you make sure you have a good practice for how to update it. That's going to depend on the client software you have, but it's usually going to be: stop the node, update the software in the background in a separate folder, not the same folder, move the binary to where your scripts expect it, and restart. And if you don't understand those words, that means we need better software practices to make this easier for more people to do, which kind of speaks to who should be running these things right now: the people who understand what I'm saying. Eventually, when we get better at this, we can broaden that. So you stop, you upgrade your software based on whatever critical updates you need to do, and you restart the node, and you need very solid devops practices around how you stop your node, how you upgrade it, and how you start it. That's just running it: dedicated hardware running a piece of software that you know how to upgrade and that stays online all the time.

Now, what you need to optimize for, and this is where I think most of the current work is being done by the client teams, at least for us at Nimbus, is monitoring. How do you monitor this thing? How do you know when you need to do software upgrades, and what those upgrades entail? How do you know when you missed an attestation? How do you know when your number of peers isn't sufficient? All of these metrics around what your node is doing, and whether you're doing the right thing in the right amount of time. What's your inclusion distance over a period of time, that is, how fast did your attestations get accepted by the network? You need a very quick way of understanding all of these things, and if they aren't meeting acceptable criteria, you need a way to be alerted so you can address it appropriately. Ben kind of said this earlier: it's not immediately obvious when you submit an attestation that didn't get accepted, or something's wrong with it, and there are subtle ways in which attestations end up not being included optimally. So the monitoring part of all this is what needs to be optimized, and that's not only a client-side thing, it's also a network-side thing. Like you said earlier, Greg, most people checking whether they're attesting appropriately are checking beaconcha.in, which is beacon chain spelled out but with .in instead of the full word, and which I want to add is basically an Etherscan-style view of network history.

Yeah, and everyone has a different viewpoint of the network. Anyone who's been around long enough to have been trying to get into ICOs back in 2016 would know that you want to spam different nodes, because they have different views of the network.

So in that same vein, what people are getting at is that there is not one single source of truth; it is a collection of sources of truth, and it depends on where you're looking. If you look at the aggregated blockchain itself as the system's source of truth, it takes a lot of individual sources of truth to get to that point, and that's where finality comes in.
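(On the inclusion distance metric mentioned above: in phase 0, part of the attestation reward is divided by how many slots late the attestation lands on chain, so a slow node literally earns less. A sketch of that scaling, following the phase 0 reward rules; the base reward figure is a ballpark assumption consistent with the earlier sketch.)

```python
PROPOSER_REWARD_QUOTIENT = 8

def inclusion_delay_reward(base_reward_gwei: int, inclusion_delay_slots: int) -> int:
    # Phase 0: the proposer takes base_reward // 8 for including the attestation,
    # and the attester receives the remainder divided by the inclusion delay
    # (delay = 1 means the attestation landed in the very next slot).
    proposer_cut = base_reward_gwei // PROPOSER_REWARD_QUOTIENT
    max_attester_reward = base_reward_gwei - proposer_cut
    return max_attester_reward // inclusion_delay_slots

base_reward = 16_000   # gwei; ballpark figure for roughly 1M ETH of total stake
for delay in (1, 2, 4, 32):
    print(f"inclusion delay {delay:>2} slots -> {inclusion_delay_reward(base_reward, delay):>6} gwei")
```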

The whole beauty of this is that you have an emergent finality from a lot of individual perspectives.

I mean, finality, okay, there's finality, sure, but you still have to consider it like a clock. If I'm running an Eth1 node and comparing, my node will generally agree with what is being said on Etherscan, but there are still going to be slight timing variances. It probably doesn't matter much for this definition, but ostensibly I'd still believe there's a slight difference in view of the network.

Yeah, and what Corey said about finality is key here. Finality is the point at which the whole network agrees that it has a consistent, single view of the truth; nobody deviates from that. Ethereum 1 proof of work doesn't have finality. You've got some assurance that nobody will ever come along with a chain that's different from the one you see, but you can never say never. In Ethereum 2, and in proof of stake in general, you can say: before this point in time, we will never change the history of the chain; everybody has the same view of the history and it will never change. And we achieve finality in Eth2 in about twelve minutes, which is not very ambitious. Things are effectively final much quicker than that under most circumstances, but absolute finality is achieved in about twelve minutes, and anything before that you can't fully guarantee as the source of truth.

So a definition of what finality is, and an implementation of it, is actually new for Eth2, because we defined finality before based on what Bitcoin was doing, and now there's a different idea of what finality is, because we've learned it's not necessarily final. This is a longer, subtle conversation about the differences between the consensus algorithms being run, whether that's traditional consensus, the Nakamoto class of consensus, or variations of traditional consensus that include cryptoeconomics. Nakamoto consensus, which is usually referred to as proof of work, has probabilistic finality, which means that over time there's a larger and larger probability that something won't be changed; that's why you wait six blocks in Bitcoin, or whatever it is, to say this thing is good. But if you're buying a cup of coffee with bitcoin, you don't really care, because the likelihood of it being overturned isn't that big a deal. Traditional consensus is when the participants in consensus, the validators in this case, come to a decision and it's done. With Casper FFG it's a mixture of the two, kind of: you have the concept of an epoch, and you have probabilistic finality until the epoch is over, and then it's snapped over; it's final for the rest of time. That's a larger conversation about what it means to be final, the economic differences of probabilistic finality within an epoch, and the security as you move through an epoch and then once it's done. It's a very subtle and complicated conversation, an interesting thread for sure, but maybe we can deviate a little from finality.
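(The "about twelve minutes" figure falls straight out of the epoch structure under Casper FFG: in the common case a checkpoint is justified by the next epoch's votes and finalized by the one after that. A quick sketch of the arithmetic; this is the ideal-case number, not a guarantee under poor participation.)

```python
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32

epoch_minutes = SECONDS_PER_SLOT * SLOTS_PER_EPOCH / 60
print(f"One epoch: {epoch_minutes:.1f} minutes")                        # 6.4 minutes

# In the happy path, an epoch boundary is justified after roughly one further
# epoch of attestations and finalized after roughly two, so:
print(f"Typical time to finality: ~{2 * epoch_minutes:.1f} minutes")    # ~12.8 minutes
```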

It sounds like, overall, the network is going well, apart from the few people who got slashed, what is it, eleven validators in ten days, who each have at least 32 ETH locked up for two-plus years before they're allowed to potentially rejoin the network. Maybe that leads into what the network is designed for. Given that 2020 has been kind of rough on people: the design of the network is meant to outlast cataclysmic events. What does that mean? Greg, you always get the tough ones.

Yeah. From a catastrophic standpoint, and this gets into a larger conversation about how the actual web is technically wired up from an undersea-cable perspective, let's say the connection from North America to Europe gets cut. We lose the ability to communicate with those people and now we have to route ourselves through, say, Asia to get to Europe, and that's how the North American connection to Europe ends up happening. Like we discussed, it's not even that the penalties are going to be that bad. The protocol itself is designed to be resilient to the idea of mass outages, to some degree, unless our threshold drops and we literally don't have enough validators out there to push the chain forward, which I think we saw on one of the testnets, I just can't remember which one, where a bunch of the validators just stopped validating. So I think it's been designed quite well for those kinds of day-zero scenarios. I don't know what it would be like to see a catastrophic situation where we lose that much of the network, it would be interesting, but I think it's been designed well.

Yeah, it's referred to as the World War Three scenario in some of the documents. The idea is that all the validators' votes are weighted by their stake, and if validators go offline, then, from my view sitting here in the UK, if half the world's validators suddenly disappear from view, I'm on a much smaller network and half the validators are missing. In those circumstances I can't achieve finality: there are not enough validators alive, out of the whole set that I know should be there, to agree on the state of the chain. I can still progress the chain; if half the validators are there, we will make blocks on average every other slot, so we can still make progress, but we can't achieve this finality, which is desirable. So in those circumstances we have what we call the quadratic leak, whereby the validators who don't show up have their balances, in my view of the world, decreased and decreased, and eventually they get down to around 16 ETH, from 32, and they're kicked out of my local network. That normally takes two to three weeks; we've relaxed it, so it would be about six weeks plus now, just in case of any early trouble. The idea is that when enough have been kicked out, we now have enough local validators; the network is small enough, it's just the ones I can reach, I've basically cut off the rest of the world and have a separate network, and we can progress and start finalizing again. And after that, the different networks, and presumably there's one somewhere in China or wherever that's separated from mine, will never be reconciled after those three weeks; it's essentially a hard fork.
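(A rough check on the "two to three weeks, about six weeks plus now" figure for the quadratic leak. Mainnet phase 0 launched with the inactivity penalty quotient relaxed to 2**26; the earlier design value of 2**24 gives the shorter figure. This sketch just accumulates the per-epoch inactivity penalty until an offline validator's balance halves, ignoring the smaller flat penalties, so it is an approximation.)

```python
GWEI_PER_ETH = 10**9
EFFECTIVE_BALANCE = 32 * GWEI_PER_ETH
EPOCH_MINUTES = 32 * 12 / 60
INACTIVITY_PENALTY_QUOTIENT = 2**26       # relaxed mainnet genesis value (2**24 in the earlier design)

def epochs_to_leak_to(target_gwei: int) -> int:
    balance = EFFECTIVE_BALANCE
    finality_delay = 0
    while balance > target_gwei:
        finality_delay += 1
        # Phase 0: during a leak, each non-participating validator loses roughly
        # effective_balance * finality_delay // INACTIVITY_PENALTY_QUOTIENT per epoch.
        balance -= EFFECTIVE_BALANCE * finality_delay // INACTIVITY_PENALTY_QUOTIENT
    return finality_delay

epochs = epochs_to_leak_to(16 * GWEI_PER_ETH)
days = epochs * EPOCH_MINUTES / 60 / 24
print(f"~{epochs} epochs, about {days:.1f} days, to leak from 32 down to 16 ETH")
```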

So there are three-plus weeks to fix any problems, but after that time we prioritize regaining finality and making the network safe again.

Can we talk about exits? That's a good point, because it's been brought up a few times. Let's talk about the difference between a voluntary exit and getting kicked off the network, and what that really means, because it's been brought up like, oh, your 32 ETH, you've got to wait a while. So, Ben, if you want to touch on that: the idea of topping up, and also how much time you really have before you forcefully get kicked off, versus wanting to just withdraw.

Yeah. There are three ways out. One is to be slashed, in which case it's involuntary. One is to neglect your validator so it's not running: eventually you will leak down to 16 ETH, and if the rest of the network is functioning, that's going to take three or four years before you drop below the threshold. And then there's voluntary exit, where I say I no longer wish to participate in the network, and you can exit. You join a queue, but usually you'd expect the queue to be short; four validators can exit every epoch, roughly every six minutes. Then you've got a short period of time, I can't remember exactly how much, a couple of days, and then in principle you can withdraw your ETH. However, we have no mechanism to do that until we've done this merger of Eth1 and Eth2, but at that point you could exit, and you could rejoin.

We should talk about this merge and what comes next, since we've sort of had this idea of two years before anyone can withdraw anything; I hope it's going to be shorter than that. The initial roadmap we started a couple of years ago was very linear: you do phase 0, the beacon chain; then phase 1, which is sharding; then phase 2, which is a sort of abstract execution-engine concept; and then we implement Eth1 on top of one of these abstract execution engines. It was pure and clean and long, and it had this end goal of merging Eth1 and Eth2. But the pressure is on, honestly, to do the merge earlier, to get Eth1 off proof of work and onto proof of stake, and also to take advantage of the scalability, so gradually that merger has moved forward in the roadmap. First it was moved to phase 1.5, sort of after sharding but before execution engines, and now we're looking at a proposal called the executable beacon chain, where we don't even have to wait for sharding to bring Ethereum 1 into Ethereum 2: we put Eth1 straight into the beacon chain. We could do that before we even have shards. One of my colleagues is building a demo of this right now, and it's not technically that complex; there are a few loose ends to tie up, but it may be much quicker to get Eth1 into Eth2 and off proof of work, and at that point, when people exit, their validator balances can be brought back into the Eth1 we know and love, and they can freely use them.

Yeah, that brings up a really good point, because for the last two or three years everybody's been seeing Eth2 development as just the beacon chain. We have all these other things, and if you try to put them on a timescale, everyone's going to look at the chart and go: you took this long, and that's going to take this long, and this is going to take this long; we're looking at nine years, right? If you put it in front of a product manager who had no idea, they'd say this is crazy. But I think the real key thing is to remember what the beacon chain has actually done.

I think Danny said this really well at CSCon, the event we held last week: listen, we spent this long because we had to take the consensus algorithm out of Eth1 and recreate a new one, and the beacon chain's sole purpose is basically to prove that this consensus algorithm works. Now we can iterate really rapidly. Get something into production, and then you can start shipping quickly; that's something classical web2 developers will appreciate. Once you get that first base layer, you can start iterating on features. For instance, in our case one of our priorities is light clients, because we want to be able to say your MetaMask no longer needs to lean on an Infura-like service; you can just run a light client, which is why we went browser-first, and our team has already been prototyping the steps needed from the beacon chain. We're full steam ahead on that, because everybody's already got some phase 1 components ready to work on, and that's where we're going to start seeing light clients come in with phase 1, which is a whole new usability layer. Then it's not just shards; we start seeing a lot of the ecosystem-support things coming into play, and that's where we're going to see a lot more rapid development. I could see parts of phase 1 happening well within the next six months, in which case we're really far ahead and that Eth1 merger gets so much closer. I think that's a super undervalued point about why the beacon chain is really important: we got the hard stuff done, and now we can just add features. Like transfers: yeah, we could do transfers now. We might not want to, because of some edge cases, but at least we have the fundamentals done.

I did not realize, first of all, that the Eth1 merger had moved up in the timeline. I was more aware of the idea that maybe it would run on a ghost chain, like a plasma-style contract, or maybe it would be its own shard later, but this early merger is actually really interesting. We're coming up a little bit on time, so I'll hand it back to Corey, but before I do, I just wanted to say that I'm looking at beaconscan right now, and there is currently 912,000-plus ETH eligible to vote on attestations, which I think is the same as the amount staked. So we're just under a million ETH staked, which, given the price of ETH right now, is almost, if not just about, a billion US dollars worth of value locked for this phase 0. And the fact that it sounds like everything's going to be able to iterate really quickly, because the difficult part has already happened, gives a lot of confidence that Eth2 is not only something we know is happening, it is happening, and that's really exciting, especially for those of us who have been watching it for a long time. To see that is really, really cool. So yeah, thank you.

Yeah, and there's a two-week queue to get in. There's another three hundred thousand or so ETH queued up to join, and it will join over time, so if you stake today, you won't actually become a live validator for two weeks or so, because of all the stakes ahead of you in the queue.
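(A quick sanity check on that two-week figure, assuming the minimum activation churn of four validators per epoch that applies at the current validator count, and the roughly 300,000 ETH queue size quoted above.)

```python
MIN_PER_EPOCH_CHURN_LIMIT = 4                 # activations per epoch at this network size
EPOCHS_PER_DAY = 24 * 60 * 60 // (32 * 12)    # 225 epochs per day

queued_eth = 300_000                          # rough figure from the conversation
queued_validators = queued_eth // 32          # 9,375 validators

activations_per_day = MIN_PER_EPOCH_CHURN_LIMIT * EPOCHS_PER_DAY   # 900 per day
print(f"~{queued_validators / activations_per_day:.1f} days to clear the queue")   # ~10.4 days
```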

And yeah, it's awesome. When the deposit contract was announced, things were very slow for the first couple of weeks, and journalists were contacting us: well, what are you going to do, are you going to lower the threshold, are you going to delay, whatever. I was like, no, no, it'll be fine. It got a bit tense the weekend beforehand, but then suddenly the deposits in the contract just went vertical, and it's been a vote of confidence. It has been awesome to see.

Something super off-topic I just noticed: if you actually look at the yearly APY by validator, the person who's made the most right now has produced only five blocks, and the most anyone has produced is ten, and they've actually included two slashings, which is really interesting from an economic standpoint.

Right, they picked up slashings: other validators got slashed, and you get a decent, chunky reward for including slashing reports in a block. So that will be a good part of their income.

Okay, I was looking at it as though they were the ones who got slashed. We're running out of time, and Jaye and I are on different schedules, so I'd like to try and wrap up a little bit here. Greg, anything quick to say to wrap up the episode? Anyone you want to thank, shout-outs to whoever you like?

Okay, yeah. My biggest one, honestly, and I've been shilling this for like two years now and you brought it up: make sure you're planning your hardware accordingly. Phase 1 and phase 2 are going to get scary. At Devcon last year we talked about tens of gigs of data needing to be stored in a given week or two, and it's definitely something to consider. If you do want to run that Raspberry Pi, I highly suggest considering running only the validator on it and connecting it directly to wherever your beacon node is set up. And, like you said, we've got to plan accordingly: not only do you have to know your software upgrade path, you also have to know your hardware upgrade path. It's something to start practicing, probably in the new year: hey, if I need to go from my 8 gigs of RAM to 32 gigs of RAM, how am I going to do it, and how am I going to do it so I don't get slashed with a double-validator situation? So that's definitely something to consider for a lot of people: knowing how to do software upgrades and also knowing how to do hardware upgrades, because that's something you want to be able to do within about twenty-four hours; you don't want to leak too hard. And check out Lodestar for some awesome Eth2 dev tooling; we're being used everywhere and people are starting to use it. Web3.js also has our Eth2 support coming out probably this week or next week, so you'll be able to install web3.js and query some stuff from the beacon node if you want that JavaScript experience.

Awesome. Ben, any closing thoughts?

Yeah. I like to reflect on the journey at times like this. It's been incredible. A lot of people said it couldn't be done, and we've done this in perhaps the hardest way possible: we've kept it permissionless, we've kept it open, we have kept it decentralized. I'm talking about the protocol development. All comers are welcome, we take ideas from everywhere, and we don't have a dictator for life. You know, Danny Ryan is close, but I think it has served us well. It's been hard and bumpy, but we've done everything in the open. A lot of protocols have retreated to a closed room, a lab, and done their thing out of the public glare, and haven't really produced a lot. We've done this all in the open, and everyone has seen all the missteps and everything, but we've delivered this beacon chain thing, and I am deeply proud. It's just the beginning, I like to call it proof of proof of stake at this point, and we've got a long way to go, but I'm more confident than ever that we will deliver all of Eth2 in a timely fashion.

Awesome, thank you. Jaye?

Okay, I mean, it's been a pleasure just being there, kind of on the sidelines, and of course being with Quantstamp, being able to work with a lot of great groups, and of course Teku as well. And yeah, it is crazy that we're here. Huge congratulations to the community, just so much work, and huge congratulations to all the teams. And this is just phase 0, right, so there's still a long way to go. And I guess, as always: shill it, be shilled. That's a t-shirt if I've ever heard one.

Okay, so thank you all for joining us. I appreciate you helping me share some of the wisdom that we've learned in the process of trying to deliver this thing, and what we've gained from running it and building it. It's very clear that we're all very proud of the work that has been done by the plethora of people who have contributed to it, but there's still quite a bit of work to do, and quite a bit of education to give for those who'd like to participate, so we'll keep trying to do this. For those who are listening or watching Hashing It Out for the first time, I hope you enjoyed the conversation. The production will get better, but the conversation will always be good. Until next time, guys. Thanks.
