Hashing It Out

Episode 89 · 2 years ago

Hashing It Out #89 - Optimism with Karl Floersch

ABOUT THIS EPISODE

Karl Floersch is a developer working on layer-2 solutions for Ethereum. Karl walks us through the development of Plasma and Casper, then moves on to Optimistic Rollups.

Links: Karl Floersch

The Bitcoin Podcast Social Media

Hey guys. This week's episode is brought to you by Avalanche. Avalanche solves the biggest challenges facing Ethereum's developer and decentralized finance (DeFi) community: velocity, security, and time to finality, under three seconds, on the first decentralized network resistant to fifty-one percent attacks. With complete support for the Ethereum Virtual Machine and all of the tools that have fueled DeFi's growth to date, including MetaMask, web3.js, MyEtherWallet, Remix, and many more coming, Avalanche will be at parity with Ethereum for DeFi developers that want a much faster network without the scaling issues holding them back. Get started today building without limits on Avalanche by going to chat.avax.network. That is chat.avax.network. Thanks.

Welcome to Hashing It Out, a podcast where we talk to the tech innovators behind blockchain infrastructure and decentralized networks. We dive into the weeds to get at why and how people build this technology and the problems they face along the way. Come listen and learn from the best in the business so you can join their ranks.

Welcome back to Hashing It Out. I'm your host, Dr. Corey Petty. Today's co-hosts are Dean and John, and our guest today is Karl Floersch. I've been wanting to get you on the show, I think, since we started this thing, so I'm happy we finally did. Let's start off the standard way: tell us about yourself, who you are, and what you do.

Amazing. I am Karl Floersch. I am working on fun things at Optimism. Before that I was at ConsenSys for a little bit working on Ujo, and then went from ConsenSys and Ujo on to the Ethereum Foundation, working on Casper and the early eth2 work. I confess that I got a little distracted from my eth2 work and ended up getting really involved in Plasma. I'm into quick fixes, and I wanted to get right to that infinite scale, so I worked on Plasma, a layer-2 scaling technology. From there I joined Plasma Group, which was basically working on a generalized Plasma framework: how do we actually get this layer-2 tech to be more general purpose? And from there we realized, oh well, a really nice way to make it more general purpose is to ditch Plasma and go to rollup. Now, notably, Plasma is still great, but that was the progression. So we disbanded Plasma Group and reformed as Optimism, and that's been my life for the past few years. Outside of that, pretty normal stuff.

Cool. Sorry, I just gave you a blank stare there; my audio screwed up for a few seconds, so I'll restart. All right. So I was thinking, the main topic today is likely to be optimistic rollups. Probably a good place to start is for you to talk a little bit about what rollups are in general, and maybe the backstory on their development until now.

Sure. Yeah, so the general concept of a rollup has been around for a very long time. The earliest tracing of an optimistic rollup, for instance, was a post by Vitalik about shadow chains in 2015. They've really been around for a long time, and in fact the thing that is probably the most different about rollups today versus the early thinking about them is that the limits and capabilities of the different technologies are now much more common knowledge. So there's essentially rollup and there's plasma, and these two are in some sense the opposite of each other. Another way to say that is that rollups use on-chain transaction data and plasma keeps all of those transactions off chain. If a user were to use a plasma chain, they send a transaction to some operator, some third party who is not necessarily an L1 miner, and then that party applies the transaction in some off-chain blockchain and posts a commitment to what happened off chain. And, by the way, there are new terms for things like this, like validium, which is the zk-rollup flavor of plasma. These names are honestly so confusing. So it's really just: on chain, that's rollup; off chain, that's plasma. Or just say on-chain data availability versus off-chain data availability; that's the easiest. The transaction doesn't go on chain in plasma, or in off-chain data availability, but in a rollup the transaction actually does go on chain.

And why is this actually useful? To give an intuition: with layer two we want to create a blockchain within a blockchain in some sense, or really a state machine within a state machine, and we want the property that you don't have to sync the layer-2 state machine if you're just syncing the layer-1 state machine. But if you are syncing the layer-1 state machine, you want guarantees about this layer-2 state machine, or this layer-2 blockchain; I use those a little interchangeably because they're similar. Now, the way that we actually generate the state in layer two, in a rollup, is by downloading all of these transactions. In a rollup we're posting all these transactions on chain, and if we're a layer-1 miner, we just run the layer-1 consensus algorithm, run the layer-1 state transition function, and we're good. However, if we're running a layer-1 node and we also want to sync the layer-2 chain, we will not only run this layer-1 algorithm, we'll parse layer one, pull out all of the layer-2 transactions, apply them to this separate state machine, and sync that as well. And so that gives us a layer-1 chain and a layer-2 chain. This is what gives us scale, because you have the option of syncing layer one, or being a light client of layer one and syncing layer two. You can play with what the scalability properties of the layer two are, you can play with the trust assumptions of the layer two, and it turns out that rollup is one of the most similar to the layer one in terms of trust assumptions. So that's the first part.
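A minimal, self-contained sketch of the syncing flow described here, in Python. The rollup contract address, batch format, and toy balance-transfer state machine are illustrative assumptions, not any real client's API:

```python
# A minimal sketch (toy data structures, not any real client API) of how a
# rollup node derives L2 state purely from data posted on L1.

ROLLUP_ADDRESS = "0xRollupInbox"   # hypothetical L1 address batches are sent to


def apply_l2_tx(state, tx):
    """Toy L2 state transition: simple balance transfers."""
    sender, recipient, amount = tx["from"], tx["to"], tx["amount"]
    assert state.get(sender, 0) >= amount, "invalid L2 tx"
    state[sender] = state.get(sender, 0) - amount
    state[recipient] = state.get(recipient, 0) + amount
    return state


def sync_l2(l1_blocks, l2_state):
    """Scan L1 blocks, pull out rollup batches, and replay them on the L2 state.

    Because the rollup posts its transaction data on chain (on-chain data
    availability), anyone syncing L1 can reconstruct the full L2 state.
    A plasma chain would only post a commitment here, so this loop would
    have nothing to replay.
    """
    for block in l1_blocks:
        for l1_tx in block["transactions"]:
            if l1_tx["to"] == ROLLUP_ADDRESS:
                for l2_tx in l1_tx["batch"]:       # batch data carried in calldata
                    l2_state = apply_l2_tx(l2_state, l2_tx)
    return l2_state


# Example: one L1 block carrying a batch of two L2 transfers.
blocks = [{"transactions": [{"to": ROLLUP_ADDRESS,
                             "batch": [{"from": "alice", "to": "bob", "amount": 3},
                                       {"from": "bob", "to": "carol", "amount": 1}]}]}]
print(sync_l2(blocks, {"alice": 10, "bob": 0, "carol": 0}))
# -> {'alice': 7, 'bob': 2, 'carol': 1}
```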
That was the high-level view of what we're trying to do with these layer-1 state machines and layer-2 state machines, and how you can think about them. But now, for how rollup is a little bit different from plasma: there's a fundamental limit in plasma. Because we keep the transaction data off chain in plasma, we have to introduce this availability challenge, and that basically means the state can be indeterminate for some period of time, like one week. That means the programming model, the smart contract programming model, is fundamentally different in plasma than it is in rollup in the worst-case scenario. And so this is why we said, okay, we need transaction data to always be available, so we'll always post it on chain, and we will sync the rollup chain and get approximately the same security guarantees as the layer one. Hopefully that made sense; it's a lot of information right there.

So I remember looking at plasma, and there was a kind of lack of fundamental requirements and constraints, or I guess a lack of specification in general, for plasma. Has that changed since we've moved this concept over to rollups? Is this a general framework that can be applied to any layer-1 blockchain under specific constraints and circumstances, or is it tied to Ethereum?

Ah, it can be applied to any layer-1 blockchain. Both rollups and plasma are just fundamental properties of blockchain architecture, and they definitely apply to all blockchains. In fact, many blockchains use rollups but don't call them rollups. I've heard from NEAR Protocol to Polkadot, a bunch of people. The NEAR people said that you can basically consider their many shards as many optimistic rollup chains, and I've heard people talk about the Polkadot architecture, and it's similarly many different optimistic rollup chains. So they're all very similar. And really, this similarity comes from the fact that because layer one in Ethereum has this general-purpose, Turing-complete virtual machine, it allows us to build basically any construction on top of it. Any way that we organize our state in layer two, we can also do on the layer one, because it is general purpose, and that is the magic of Turing completeness and machine simulation.

You're saying it's a general framework for aggregating information with reduced trust.

Exactly. That's exactly it. If we could just go back and change the word rollup and change the word plasma to be something that had any tie to what they actually mean and to their fundamental properties... But then the eth maxis couldn't shill it in three words on Twitter.

It's a Merkle tree and a smart contract, exactly. Oh boy.
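The quip works because the on-chain side of an optimistic rollup really is, at its core, a contract that records transaction batches and claimed state roots. A toy sketch in Python (not Solidity), with made-up names and a plain hash standing in for a Merkle root:

```python
# Toy sketch of the L1 side of a rollup: it only records data and commitments.
# Names and structure are illustrative, not any particular rollup's contracts.
import hashlib
import json


def h(obj) -> str:
    """Stand-in commitment (a real system would use Merkle roots over the data)."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()


class RollupInbox:
    """Accepts L2 transaction batches plus the state root claimed after each batch."""

    def __init__(self, genesis_state_root: str):
        self.batches = []                          # keeps L2 data available on chain
        self.state_roots = [genesis_state_root]    # claimed post-state commitments

    def append_batch(self, l2_txs: list, claimed_post_state_root: str):
        # An optimistic rollup does NOT verify execution here; it just records
        # the data and the claim, and relies on fraud proofs if the claim is wrong.
        self.batches.append({"txs": l2_txs, "data_hash": h(l2_txs)})
        self.state_roots.append(claimed_post_state_root)


inbox = RollupInbox(genesis_state_root=h({"alice": 10, "bob": 0}))
inbox.append_batch([{"from": "alice", "to": "bob", "amount": 3}],
                   claimed_post_state_root=h({"alice": 7, "bob": 3}))
print(len(inbox.batches), inbox.state_roots[-1][:16])
```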

All right, we're going to use the recording that's in the meeting here; that should be fine, I'll just do some post-processing after. Okay, sounds good. We are recording, so we'll continue from there.

Yeah, so what I worry about now, now that a few of these implementations have actually hit mainnet and people are using them, is the security involved with these things. How can we be sure? How can a developer who wants to try and utilize these layer-two solutions for scaling be confident that something isn't going to happen that just breaks the whole thing? Because this is really new. Why should someone start to develop using this technology that allows Ethereum to scale?

That is a great question and, to be honest, I don't have a perfect answer. There are a number of places where these protocols can break. They can break in the fundamental specification, or they can break in the verification of that specification, a.k.a. the actual implementation: does the implementation match the spec? Now, for the specification, how can a developer be sure? Of course they can read about the architecture. More than likely they will rely on the social signaling of people who have been established as experts on these kinds of architectures and follow what other people say, and that's totally fine, assuming we actually provide good resources for people, so you can see: oh, this person has evaluated this framework, and this construction actually makes sense and provides these guarantees under these security assumptions. And of course security is only in relation to a threat model, so part of this process is going to be figuring out what a sensible threat model is. Even the experts really disagree about what a sensible threat model is for different constructions, so it's not going to be clear cut which construction makes sense on the whole. With rollups, you can at least get the guarantee that you can exit from the rollup without being censored. That is something that you would like, and by exit I just mean go from the rollup into layer one. That's something you definitely want from your construction, and hopefully the specification makes it very obvious why that property would be preserved.

But then there's the implementation, and that's a whole other nightmare, because there are so many places where these protocols can go wrong. In fact, Ethereum itself was an insane project, and it's a miracle that it was actually put together and didn't contain more bugs than it already had. In this process we are building an Ethereum inside of Ethereum, and so of course it's going to be very difficult to get right. And then, not only this, what happens when these protocols go wrong? How do you actually coordinate a migration on layer two, where it's much more difficult to fork if you have layer-one assets locked up in the layer two? You need to build upgradeability into the layer-one contracts to actually move over to a new chain if the old one has a broken feature, which of course it will. So it's a hairy mess when you get into the practicality of implementing these protocols without huge amounts of bugs and with a reasonable upgrade path.

Where does the responsibility lie? Does that technical debt have to be absorbed by the developers who are building these things, or does it add extra responsibility and required understanding for the end user?

That is a great question. Generally it seems to be the case that end users will, to some extent, blindly follow what developers put out, and then follow what projects get some kind of social recognition as being relatively safe and secure. So in the end I think the onus, to a large extent, is actually on the dapp developer who is using the layer two and maintaining, or suggesting, I guess is really the word, a particular bridge between the layer one and the layer two. One thing we have been thinking about: when you deposit into a layer two, something you can do that's actually helpful is, instead of depositing the actual token, you can deposit a wrapped version of your token or something like that, and in that way you get a little bit of safety, assuming your wrapped token itself has a sensible upgrade path. Of course, if you are using a more pure approach and you don't have an upgrade mechanism on your smart contract, which I'd probably suggest, then you just have to be ready to, quote, hard fork your smart contract, migrate over to some other chain, and point your front ends to some other set of balances in the worst case. These are just things that dapp developers are going to need to be aware of when they migrate to layer-two protocols.

It's really interesting how much it still feels like the same thing all over again, just moved up a layer, the way you're talking about having social consensus about "oh crap, this EVM inside the EVM turns out to be fundamentally broken, let's all go to this other EVM inside the EVM." But I wanted to hear a bit more about what it is that you guys are doing. I think a good prerequisite, or a useful companion episode, is the one with John Adler from before I was here. But help me get up to speed on rollups. My understanding is that they're doing something a bit simpler: their state transition is basically ERC-20 tokens, just sending and receiving tokens. As you've alluded to, you guys are doing a full-on EVM. What does that look like? How do you actually go about implementing that, and what are some of the challenges?

Great question. So first, it is a nightmare to implement these things, but nonetheless we persevere and try to find the simplest possible approach. At a high level, all of these rollups can have different, quote, state transition functions. The state transition function of L1 is the EVM state transition function, and what we wanted to do was preserve the developer tooling and the state transition function of the EVM. So we have been working on this thing called the, quote, OVM, and this is a set of smart contracts. Interestingly, the name is really "OEVM" to an extent, because it is an EVM that can be executed optimistically inside of an optimistic rollup. It technically can also be executed using plasma, but that's way harder to implement. The way that it works is essentially we create a small sandbox of smart contracts that has all of the different functionality you'll see in the EVM, from CREATE and CREATE2 to all the other kinds of opcodes.
But notably, it's not a machine-level virtualization. Instead, this is an environment virtualization. It's kind of the difference between a VM, which is machine-level virtualization, where you actually execute all the opcodes in Solidity, for instance, versus Docker, which instead just creates a self-contained environment that runs on your bare metal. And so we said, okay, it's hard enough to build anything, we might as well do the thing that requires the fewest changes and not build an EVM inside of the EVM. Instead, we can just create this virtual environment. On a low level, how that works is: with the off-chain optimistic rollup, you are able to interact with it almost as if it were an Ethereum side chain. Think of it like a Rinkeby testnet. It's not the same provider as L1, but you can deploy contracts just the same, you can transfer your balances just the same, except it has one extra feature, and that is L1-to-L2 communication. So you can actually send a message from L1 to L2, and you can send a message from L2 to L1. That's essentially what it's doing.

And we execute all of this off chain in a way that, if it were ever to go wrong, if there were ever an invalid state transition executed off chain, we could go back on chain and prove that that particular state transition was invalid. The way we do that is kind of similar to the technology used in stateless clients. We deploy all the contracts that were touched in that transaction, prove the storage slots that were touched in that transaction, and then actually replay that transaction on L1, and then we get one execution of a transaction where we can say: okay, does this match up with what was posted, or is what was posted fraudulent? I didn't really go into the difference between zk-rollup and optimistic rollup, but in an optimistic rollup the way the validity of the state transitions is preserved is through these fraud proofs. So we have a fraud proof that executes a full EVM state transition function. That's, at a high level, what it does, and the goal is to keep the developer tooling of Ethereum, because Ethereum developer tooling is horrible, but it's the best out there. So we just need to improve it.

But I love my little EVM contracts. What's the cost of doing that, the cost of replaying and proving it on chain? What does that scale with?

It scales with the number of contracts that are touched and the number of storage slots that are touched. Notably, it does not scale with the state of the off-chain system. So as long as there is an upper bound on the number of contracts that you can touch and the size of those contracts, as well as the number of storage slots and the size of those storage slots, then you can establish an upper bound on the actual fraud proof itself. Now, if a state root is proved invalid, there is of course some cost to this, and so you definitely need a bond to be posted when you submit one of these state roots, or really many of these state roots, and if you submit something invalid, there must be some kind of punishment for the party that submitted the invalid state root. Some portion of that punishment is sent to the person who proved the fraud, and hopefully it covers the cost of the actual fraud proof.
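A rough sketch of the fraud-proof flow just described, reusing a toy balance-transfer state machine, a flat bond, and an arbitrary challenger reward split. All of these are illustrative assumptions, not Optimism's actual parameters or contracts:

```python
# Toy sketch of an optimistic-rollup fraud proof: re-execute a disputed batch
# on chain and compare against the claimed post-state root. The bond amount
# and reward split are made up for illustration.
import hashlib
import json

BOND = 100             # posted by whoever proposes a state root (toy units)
CHALLENGER_SHARE = 50  # portion of the slashed bond paid to the fraud prover


def state_root(state: dict) -> str:
    # Stand-in for a Merkle root over the L2 state.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()


def execute_batch(pre_state: dict, txs: list) -> dict:
    """The L2 state transition (here: toy balance transfers, not the real OVM)."""
    state = dict(pre_state)
    for tx in txs:
        if state.get(tx["from"], 0) >= tx["amount"]:
            state[tx["from"]] -= tx["amount"]
            state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]
    return state


def prove_fraud(pre_state, txs, claimed_post_root, proposer_bond, challenger):
    """Replay the disputed transition on L1; slash and reward only if it was invalid.

    In the real protocol only the touched contracts and storage slots are
    supplied (stateless-client style witnesses), so the proof's cost scales
    with what the transaction touched, not with the whole L2 state.
    """
    actual_root = state_root(execute_batch(pre_state, txs))
    if actual_root == claimed_post_root:
        return {"fraud": False}                 # claim was honest; nothing happens
    return {"fraud": True,                      # claim was invalid: bond is slashed
            "challenger_reward": {challenger: min(CHALLENGER_SHARE, proposer_bond)}}


pre = {"alice": 10, "bob": 0}
txs = [{"from": "alice", "to": "bob", "amount": 3}]
bad_root = state_root({"alice": 0, "bob": 10})  # proposer lies about the result
print(prove_fraud(pre, txs, bad_root, BOND, "watcher"))
```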

Is there any worry about the overall economics of something like this, where the cost of proving a fraud is too high to actually do so, because it's just not worth it?

Yes, and in fact that is one of the big reasons why a small optimistic rollup chain is a little bit more dangerous than a big optimistic rollup chain. It's interesting in this way. If you have a chain that no one cares about, then you have fewer people syncing that chain, so there are fewer people who can actually detect fraud in the first place. If no one really cares to sync it, then clearly a fraud will just get through without anyone noticing. And the second thing is, if there is not that much value to be lost, or if a disruption in service is not actually worth anything to anyone, then yeah, there's a possibility that spending a few hundred dollars or more on a fraud proof is not even going to be worth it. So there are actually some economies of scale for these chains, for sure.

That's the trump card, John. Sorry.

Well, yeah, this is probably a stupid idea, but have you considered having the bond paid in gas token, so that it kind of flows with network congestion?

Exactly, exactly. We definitely have. In fact, we need to generally rate-limit transaction submission to these chains, and one interesting way to rate-limit is to charge some kind of fee, burn some amount of value, and that burn could just be buying gas token that is distributed to the person who proves the fraud. You can even establish a kind of reserve of all the gas token that is held, and then the bond can just be in gas token to pay the person back. So yes, it's honestly a little funky, but it's pretty cool.

Yeah, we're getting into navel-gazing mechanism design. Go ahead, Corey.

I was going to say: you said there are some economies of scale associated with this, so I'm trying to prognosticate the future of what the Ethereum ecosystem looks like, and how you almost have emergent communities across different layer twos, and why they would be on the same one. It seems worthwhile to have communities of people working on a specific layer two together, because they have a kind of unified goal, and a lot of that grows out of the reasoning you just gave: you want there to be more people and more value, so that there are enough people watching, there's a cost to failure, there's commerce alongside it. So then why doesn't it all just coalesce into a single layer two? What's the purpose of having multiple of them if you need economies of scale like this?

I think there is incentive for there to be multiple layer twos. There's the "whose tech are you using?" question; there are many blockchains for the same reason. However, I think it will result in some kind of power-law distribution, where you do end up having dominant layer-two chains that most of the value floods to. And notably, this is not because of a fundamental necessity for value to coalesce on the same, quote, state machine or same chain. It's actually more an issue of developer tooling: asynchronous communication and a network of blockchains is just so much more technically difficult to achieve in the next year, two years, three years, that it's not practical for a developer to build composable applications across many different blockchain environments. So when I say it will coalesce into one chain, there is a possible future where things spread out a little bit more, because asynchronous communication in smart contracts, and the programming abstractions we work at, do more intelligent load balancing across blockchains with similar security constraints. That's possible, but it is hard. So it's more about practicality; it's not about feasibility.

So what does that look like? Do you think you might have two or three rollup contracts, and you get DeFi on one of them and games on another and DAOs on another, like the ones that want to be co-located, in the same way that all the shoe stores like to be in the same area in New York or something? That's one. And then two: is there some form of, you mentioned asynchronous communication, that you can bake in to let them talk to each other between different rollups?

Yeah, I think it's not crazy to think that there will be like services in the same environment. That definitely seems reasonable. There might be multiple DeFi chains, or many forks of the same DeFi products across multiple chains, and it's more about where the value is right now, so those are all possibilities. It is definitely possible to do asynchronous communication across rollups, and in fact you can even build rollup chains that facilitate that asynchronous communication a little bit more. Okay, so with zk-rollups it's actually a little bit easier to do asynchronous communication across chains, because that communication is known to be valid the moment it is posted. So when you post a new zk-rollup state root... and I didn't explain this, I apologize: there's optimistic rollup and zk-rollup. In a zk-rollup you're proving all of the state transitions up front instead of relying on this fraud proof. Now the fraud proof is great because it gives us the EVM, but it's bad because it requires a challenge period, and that challenge period is like a week. So if you want to send an asynchronous message from an optimistic rollup to layer one, you can do it, but you have to wait a week. Or, I mean, you can technically wait less time, but the less time you wait, the less secure your message, and so I just suggest a week. And that's based on how long Ethereum has been DoSed before: it's been DoSed for three days, and then you multiply that by two and add a day, and that's your asynchronous communication timeout. That's very white hat, by the way; it's a great little heuristic.

But with zk-rollup, you prove the state transition up front, which means a message from one zk-rollup chain to another zk-rollup chain can theoretically happen the moment it's posted to layer one. Now, that does imply that the second zk-rollup chain introspects some information about the first one you're sending the message to, kind of like crosslinks in eth2, and in fact these are very similar concepts. Just like there are many shards, there are also many rollups, and these rollups can communicate through crosslinks, and the moment they are finalized the communication can go through. They're very similar in how you reason about them. So yes, asynchronous communication is very possible.

Yeah, and so, you said that if you have optimistic rollups you have to wait for the fraud proof, the proving time, which makes sense. Is that delay hard coded, so that my rollup contract has to know which rollup a message might be coming from and say, oh, we need to wait a week for that one?

In fact, it is not hard coded. It is solely up to the layer-one contract that is executing the withdrawal to determine the delay period of the underlying rollup. Now, there are probably ways to make it not configurable; you can design a rollup that locks these things down, so there are ways to mess around with this. But in a well-designed rollup, the layer-one contracts can determine it. So if, for your application, you're like, "oh, I only want to wait a day," you can just wait a day. In fact, hilariously, you can also do weird things like, "I want to trust everything that comes from this rollup immediately: if I get five signatures from the five people I trust, then mint the asset immediately and consider it finalized." And that doesn't really have to do with the rollup; it's just a totally separate multisig that people are signing off on. Or you can tokenize these kinds of instant exits. It gets crazy.

Right, right. So the delay is in the original layer-one contract itself.

Exactly.
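A minimal sketch of the withdrawal timing being described: the layer-one contract consuming the message picks its own dispute window, and can even offer a trusted-signer fast path. The class name, the default seven-day window, and the five-signer shortcut are illustrative assumptions:

```python
# Toy sketch of an L2->L1 withdrawal with a configurable challenge period.
# The numbers and the trusted-signer fast path are illustrative, not any
# specific bridge's rules.
import time

WEEK = 7 * 24 * 3600


class WithdrawalBridge:
    def __init__(self, challenge_period=WEEK, trusted_signers=()):
        # Each L1 contract consuming L2 messages picks its own delay;
        # it is not hard coded by the rollup itself.
        self.challenge_period = challenge_period
        self.trusted_signers = set(trusted_signers)
        self.claims = {}   # claim_id -> {"t": timestamp, "challenged": bool}

    def claim(self, claim_id, now=None):
        self.claims[claim_id] = {"t": now or time.time(), "challenged": False}

    def challenge(self, claim_id):
        # A successful fraud proof during the window invalidates the claim.
        self.claims[claim_id]["challenged"] = True

    def finalize(self, claim_id, signatures=(), now=None):
        c = self.claims[claim_id]
        if c["challenged"]:
            return False
        # Fast path: trust a quorum of signers and skip the delay entirely.
        if self.trusted_signers and set(signatures) >= self.trusted_signers:
            return True
        return (now or time.time()) - c["t"] >= self.challenge_period


# A conservative bridge waits a week; an app-specific one might wait a day,
# or accept five known signers immediately.
cautious = WithdrawalBridge(challenge_period=WEEK)
fast = WithdrawalBridge(trusted_signers={"a", "b", "c", "d", "e"})
fast.claim("wd-1")
print(fast.finalize("wd-1", signatures={"a", "b", "c", "d", "e"}))  # True
```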
How does the user know these things going into a given application? Like you said, do they just have to trust that whatever application they're using made the right decisions based on their security, or at least informs them of all these things? Based on what you just said, there's a tremendously broad range of options that may eventually be applied to all these different rollup instantiations. How are you going to navigate this?

That is a great question, and in fact my approach, my personal approach, even when I talk to people I very much respect who may have different opinions, is that I am all for maximally giving power to developers. I know that in practice these kinds of things are going to become unbelievably confusing. There's potential for it to be absolutely, horribly confusing: you deposit ETH into a layer two that has a withdrawal period of five days, and then you deposit ETH into a deposit contract that has a withdrawal period of one day, and now, on that chain, you can't think of those two different types of ETH as fungible, because one has a different security assumption than the other. It's kind of insane, but I kind of dig it, and I think the real people to answer this question are going to be the amazing UX designers, like Callil from Uniswap or something, who come up with a way to organize this information in a digestible manner. And this is not a problem that is new to the space by any means. We sign these MetaMask transactions, and oftentimes, in the early days, we had no idea what we were signing; it wouldn't even tell us it was an ERC-20 transfer, it was just "oh yeah, I'm signing ABC123," and it's ridiculous, and you're just totally trusting the front end. So I think one thing people may need to get more accustomed to is that using a dapp is not just about trusting the smart contracts; it's about trusting the front end, and front ends are really, really in control. I would say I feel pretty confident in my ability to read raw transaction data, but I am also very confident that I could be fooled into signing a bad transaction that does not express my intent, and I don't think there's really any way to get around it. Now, block explorers like Etherscan are going to have to figure out how to display this information in a consumable way, and dapps themselves will have to figure out how to explain it. But to an extent we have some early examples. dYdX is an early example of a project where you go to their website, you deposit into something you don't really understand, and then you start signing meta-transactions that affect the state of a state machine that is a little bit less standard. So it may look something along those lines, and I have faith.

I'll agree with you, at least, that optionality for developers is good. Hopefully good developers understand their users and accommodate the available tech to those users, and the more options they have, the better they're able to do that. Optionality gives them the ultimate ability to cater to whatever use case they're trying to solve. Not having it forces them to pigeonhole themselves into what's available, and I think that kind of flexibility is a hallmark of Ethereum in the first place: build appropriately for what you're trying to do. Shit, that's a lot of stuff. Working for a company that tries to solve this problem of providing an interface to Ethereum, my mind is racing in terms of the complexities involved: how this landscape is evolving, as well as associating risk and then relaying that to the user.

Yeah, terrifying. These early experiments with layer two are going to be very weird and very interesting.

Hacks, broken front ends, just total confusion about what is what on the side of the users, and the developers too. To be honest, it's going to be a nightmare, but it's going to be a fun one, and we definitely need it. We need to go through those growing pains; it's the only way to learn.

Dean, what are you worried about? You've been staring at me this whole time while we've gone through everything.

I mean, I like the idea of layer two, but every complexity that sharding brings, layer two makes even worse. With sharding, at least everything is essentially part of one network; you kind of know where and how to find stuff. With all these layer-two things, how do I find all this stuff? How does my application, if I build a front end for this, know what nodes to connect to? It's going to be horribly annoying, and I don't see anyone working on standards for that, so I think it's kind of important to establish those as well.

ENS could just solve all this, Dean.

I mean, ideally, but I don't... you could probably solve it with ENS, right? You could buy the domain for your network and then encode that, but then it's also authoritative: whoever owns the contract owns the network. It just brings all these other complexities beyond that.

Yeah, but in the other sense, the whole concept of "I don't need a permanent record for my donut purchases": layer twos give you contexts to operate within a smaller subset of people than the global state, and then you're rooting yourself in a larger trust, which I think is the main goal here.

That is fair, but that smaller subset of people needs to find that context.

Yeah, that's up to the developers, and marketing, and so on and so forth, to be able to advertise that and provide it appropriately. This is, I think, just the way the world works. You're going to have subsets of people that want to do a specific thing, and they don't need access to the inefficiencies of global consensus in order to do that, but they would like some type of trustless route so that they can operate in a more trustless environment than what they traditionally have to now, with the ability to have liquidity across the entire network while operating in these smaller subsets of people. This is almost what I used to argue when plasma was becoming a thing, and how I imagined this entire space growing: the analogy of the way the Internet grew with respect to corporations. It was mostly just LANs, right? You had a LAN, and companies were very skeptical to join the Internet because it was a gross, seedy place for the longest time. So what did they do? They built their own internet, just an internal corporate intranet, or just a LAN. Then they found a way to establish a secure link to the global Internet so they could actually talk to each other as corporations, and then the Internet just grew and grew and grew, because, like you said, standards grew, you got better safety protocols, and the best practices of being a proper citizen of the Internet got a little better. But it's not like LANs went away. Everyone has a LAN; your router provides you with a LAN in your house, and you still have these things, but you understand where the gatekeepers need to be. And layer twos are exactly that. In my opinion, they're the LANs of today's Internet, where the open, permissionless networks like Ethereum are the Internet, and you just need to understand where the gatekeepers are and what the context is for how value and communication flow through them. I don't see that changing at all; it's just that now we're dealing with value. Is there any reason to believe that's wrong, Karl?

I really loved it, I really loved it: the entire concept around standards, and coming up with what security really means, and establishing that. We have a momentous amount of work to do to establish what a secure blockchain really means, what we can accept from a secure blockchain, and how we establish those secure portals between these different domains. In fact, by the way, I was asking Vitalik what word encompasses a shard and a rollup and a plasma and all these different kinds of state spaces, and what he suggested was "domain," and I feel like that's very reasonable. These are all different domains in the same blockchain, in Ethereum, that should all have pretty similar security constraints, and the moment some of them break their security assumptions, we just have to prune those networks and basically say, okay, we only accept a certain standard. I do think we will eventually figure that out, and it's going to be a messy process, just as you describe, Corey. It's not one person saying "this is the Internet"; it's a kind of emergent encompassing of many domains, and, as it stands today, a lot of it's going to be gross.

Yeah, unfortunately there is too much Ponzi activity on Ethereum for anyone to be entirely comfortable with it, in my opinion.

Are there really that many Ponzis other than DeFi? DeFi is a good example, but there are many; yeah, if you look at the top gas users. But HEX is on Ethereum, isn't it? Yeah, it's an ERC-20. Hmm. Oh, damn it. They launched their Bitcoin fork thing now; it's a proof of Bitcoin ownership at a certain date, which means more HEX that goes to Richard Heart. Yeah. So: Ponzis that shall not be named. Do not invest in Ponzis.

So one thing that I hear when we talk about all these different layer twos and the levels of abstraction being built on Ethereum: I start to think that perhaps it's getting to be too top heavy. How much can we secure? How many tokens can we secure with one underlying token? We just recently started to transact more; recently the volume of ERC-20 transactions has surpassed ETH transactions, or ETH issuance, probably issuance. If we really get past that, it doesn't take that much transaction value to make it easily profitable to do some double spends and some hash-power manipulation attacks.

Especially since there's a phrasing I've seen used with rollups, which is that you get the same security guarantees, that you inherit the security guarantees of layer one in your layer two. And I'm like, well, can you just copy and paste the security of this proof-of-work chain and double it like that? That sounds like free security to me, so I'm curious what you have to say about that.

That is such a good point. Oh my goodness. The little-talked-about problem with all of this is the parasitic L2 problem. Layer one is, of course, secured by main chain Ethereum, right, all these miners; they make a bunch of money. Now, one thing that scales with the value on the network is MEV, or miner extractable value. The more value on a network, the more money there is to gain when you arbitrage different coins, or arbitrage Uniswap against something else; there's just free money on the network, and we see this all the time with front runners. Front runners are some of the top gas guzzlers on Ethereum. Now, this MEV, when we start building these layer-two protocols, actually moves into layer two, because the value you can extract from front running and from being first in line is actually going to be extracted by the layer-two miners, not the layer-one miners. Okay, yes, the layer twos are committing a bunch of data to layer one, posting these transactions, etc., but a massive amount of the layer-one value comes from the fact that a miner has the unilateral opportunity to front run. So what does this mean for layer one when nothing of interest is going on in layer one, outside of it just being a big data dump of transactions, and especially in eth2, where you end up scaling up how much data you can dump? That, of course, because of the supply increase, will reduce the price. So we have this weird problem where money is going to be extracted at the edges. The MEV is extracted at the edges of the network, and the inside of the network is going to grow and reduce the price those rollups pay for posting transactions to layer one. So if everything is using rollups, which seems very possible if Ethereum becomes just a hub for many different rollups and plasmas and whatnot, how are we going to be confident that the layer-one miners are actually making enough money? This is a crazy question, and who knows. Ideally, with proof of stake, there's this social layer that kicks in; it's like a too-big-to-fail argument. I don't know if too big to fail has great connotations, but nonetheless, if main chain Ethereum has some kind of big dispute over what is valid, the social layer figures it out. It's better to have one big failure that everyone needs to figure out tomorrow or everything stops, versus intermittent failures: oh, last month this guy failed, oh, this month that guy failed.

The overall reliability is what I would be more concerned about. It is an open question, though, and those are my arguments for why the parasitic L2 problem is a problem. I can't convince you that it's not going to be a problem, because it is a little concerning. Now, one thing you can do, and one thing that we are thinking about and planning on doing, is actually trying to design mechanisms to extract this MEV from the layer-two miners. This is a kind of tangent to layer two, but it gives us a chance to redesign mining incentives. One of these ways to redesign it is to introduce something called MEVA, miner extractable value auctions, and, long story short, it's an interesting mechanism for auctioning off this MEV. The reason you want to auction off the MEV is that you take it away from the layer-two miners, who are questionably securing the network; who knows if it's really worth it to pay them that much money. You now have a pot of money that you can use for something, and really the only reasonable way to use it is for funding public goods and solving tragedy-of-the-commons problems. Right now, security is massively overfunded; I think that's pretty clear. In a future where we move to eth2 and a bunch of rollups, it might be underfunded, but then hopefully we have social coordination, similar to the social coordination of taxes and government, that pushes money back into the center layer and provides money to secure the network. It's a little crazy of a future, but that's just one picture of it.
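A rough sketch of the MEVA idea as described: auction off the right to order transactions for the next period, and send the winning bid to a public-goods fund instead of letting the layer-two block producer keep the MEV. The sealed-bid format and numbers are illustrative assumptions, not a specified mechanism:

```python
# Rough sketch of a MEV auction (MEVA): sell the right to order transactions
# for the next period to the highest bidder and route the proceeds to a
# public-goods fund. Format and numbers are illustrative only.

def meva_round(bids: dict, public_goods_fund: dict):
    """bids: bidder -> amount they will pay for the next sequencing slot."""
    if not bids:
        return None
    winner = max(bids, key=bids.get)
    # Instead of the sequencer keeping the MEV, the winning bid (an estimate
    # of the extractable value) is redirected to fund public goods.
    public_goods_fund["balance"] = public_goods_fund.get("balance", 0) + bids[winner]
    return winner  # winner gets the right to order transactions this round


fund = {"balance": 0}
print(meva_round({"searcher_a": 12.5, "searcher_b": 9.0}, fund), fund)
# -> searcher_a {'balance': 12.5}
```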
To push back a little bit: if that value flows to the edges as we potentially move into an ecosystem where most people are operating on layer twos, because it's more economically feasible for them to do so, and the MEV moves in that direction, and the associated value and prices on layer one drop significantly, that's a market. The functionality of layer one isn't going anywhere, and eventually those use cases come back to layer one, so there's no reason to believe people won't just use layer one whenever it's beneficial for them to do so. Maybe it's an issue with how an application is built if it has that functionality, or maybe you want to play that market of intermittent access for certain types of use cases. But once again, that's not necessarily a bad thing, because, after all, the underlying fundamental value is in digital scarcity, and when you introduce layer twos, the digital scarcity isn't growing or shrinking; it's just being moved and distributed across different things with different constraints on how it gets moved. So there's still value in the scarcity, because the scarcity doesn't really change; it's just that there are more options and more constraints on how it gets used. So I think it's not too big of an issue if things get cheaper over a certain period of time, because the functionality isn't going away, it's only getting expanded. People are going to find a way to use it. And if it solves front running, and back running, which is arguably a more difficult issue, then good, because the base layer needs to be efficient. And if we can move some of that kind of behavior, market players doing certain things in certain situations, to a layer-two situation, a separate situation, then that's a good thing, and we can deal with the consequences there. So I'm all for that, because, once again, the functionality isn't going away. Layer one is still going to be able to do all the things, but you have extra options to do them elsewhere if it's too expensive to do them on layer one.

One interesting note is that the problem of MEV reduction is actually not one hundred percent tied to layer twos. In other words, you can actually reduce MEV significantly using smart contract designs on layer one, just on layer one alone. Probably the most significant case is if we get to a future where most transactions are shielded, blinded, so miners don't know what the contents of those transactions are. Then it gets to the point where it's not really as valuable to be a miner in those cases. And, by the way, even in the MEV auctions model, there are ways to take MEV away from the auction, because there are just generally ways to reduce MEV. So yeah, it will be an interesting future, and I totally agree with your point that people will go where the transactions are cheap, one hundred percent.

Well, I think what you just brought up is an entirely new episode, so it might be best to start wrapping up here. Is there any question that you wish we would have asked that we didn't?

Hmm. I guess the biggest thing that I want to learn, and maybe this is not a question, this is a question for your audience to an extent... actually, honestly, I don't have a question. I tried really hard there; it just didn't come. I thought you asked great questions, and I had a great time, so I'm very grateful for the whole thing.

All right, well, I appreciate that. Where do people go to find out more about Optimism and yourself?

So, optimism.io is our very shoddy little website that we don't give enough love to, and our docs are similarly shoddy, but our GitHub repository is very active; we try really hard on that one. So: Optimism on GitHub, also on Twitter at @optimismPBC, and I am karl.tech, you can go to my website. I'll post more blog posts when I finish this darn project and we improve the scalability of Ethereum. So thanks a lot, I had a good time.

Thank you. Awesome. Thanks, bro.
