Hashing It Out

Episode 61

Hashing It Out #61 – Solana – Anatoly Yakovenko

ABOUT THIS EPISODE

In today’s episode, Corey and Collin talk with Anatoly Yakovenko, the Co-Founder and CEO of Solana. Solana is a blockchain platform attempting to scale transactions without the need for sharding, and we want to know how they plan to do it. Join us as we dive into their strategy, their current progress, and where they plan to go. We ask the right questions to get you to the right answers on how this technology works, so come listen and enjoy!

Links:
Solana website
Anatoly’s Twitter

Donate to Hashing It Out!

Welcome to Hashing It Out, a podcast where we talk to the tech innovators behind blockchain infrastructure and decentralized networks. We dive into the weeds to get at why and how people build this technology and the problems they face along the way. Come listen and learn from the best in the business so you can join their ranks.

All right, guys, welcome back to Hashing It Out. As always, I'm your host, Dr. Corey Petty, and with me is my trusty co-host, Collin Cusce. What's up, everybody? Collin's got energy today.

Yeah, man, I got some spunk, dude. I drank my coffee. It's seven o'clock in the morning and you're not drained?

Dude, I am drained, actually. I've been working really hard. I got four hours of sleep the night before, and last night I got maybe five or six. It's crunch time; I'm just propped up on a bunch of caffeine.

That makes sense. I'm just excited to do the podcast, man. Speaking of which, today we have Anatoly, the co-founder and CEO of Solana. Why don't we do the normal kickoff: introduce yourself, tell us how you got started in the blockchain space, and from there we'll move into what Solana is and why it needs to exist.

Hey, stoked to be here. So yeah, this is Anatoly, co-founder and CEO of Solana. How I got started in this mess: I guess it was around 2017, I'd had too much coffee, I was up till four in the morning, and I had this, like, fever dream of encoding the passage of time as data. So, creating a data structure that represents time passing — just kind of a weird thing if you think about it; it has some kind of metaphysical ramifications, I think. But what's cool about it is that time is a foundational component of distributed systems, and at the time Bitcoin was at, like, seventy dollars per transaction or something like that, and everybody was talking about scaling blockchains. I had spent most of my career working at Qualcomm, which is a wireless semiconductor firm, so I was well aware of how wireless networks, or cellular networks, scale to, you know, millions of participants once they have a source of time. I knew that I had something that could actually solve all these scaling problems, and that's really what kickstarted the project.

So what was your background in the blockchain space before that? Or did you just have some knowledge about it, have this sudden insight, and decide to jump into doing specifically blockchain work?

So, I mean, I've been an engineer most of my life. I spent most of my career at Qualcomm; I worked on operating systems, you know, wireless stuff, a bunch of random things. When Bitcoin came out, I was well aware of it. I tried CPU mining, and we were kind of thinking: what if we wrote a GPU-based version of this? We could get all the hash power. But I wasn't really serious about it. It was kind of like, this is a neat thing — and it's neat in the sense that we have this permissionless, open way to synchronize information — but I didn't really think about the ramifications of what it was. You know, engineers often miss the social kind of phenomena; we think about the tech from the tech's perspective. I also remember the Ethereum ICO and kind of thinking, this is, like, sadly, JavaScript for this really cool thing.
They could have used a much better virtual machine, a better language. I totally missed the whole, you know, social impact of it, and what kind of revolutionary accomplishment it was to get that working.

Yeah, I know a lot of people have kind of a similar story about that. Yeah — it's hard to synchronize talking when you can't see each other. But you know, I know a ton of people who came in for the social reasons and a ton of people who came in purely for the technical reasons. And if you're looking for the perfect, scalable, fully decentralized system, Bitcoin ain't it, at least for the time being. So yeah, I could understand why that'd be kind of a turnoff if you're highly technical and just interested in raw transaction throughput — you know, whether it can meet the demands of the users of the system, that kind of thing.

Yeah, yeah. I think the whole idea that it could be a store of value just wasn't even on my radar yet either, and that's something that's almost obvious in retrospect to me.

And that's an interesting thing that I missed completely. So around 2017 I was in San Francisco and the big ICO boom happened, and literally, I had too much coffee one day, I was up till 4 AM, and I had this revelation: oh man, I can add a source of time to the network — a trustless source of time, right, without cheating. It's a source of time that doesn't rely on an external clock; it's purely this cryptographic process that generates some data. And I scoured the Internet for anyone else working on this and I couldn't really find anyone. And, you know, after convincing my family that I wasn't crazy, I quit my job and kind of started this project.

So you actually got past the point of convincing your family you aren't crazy? That alone is a feat of accomplishment.

It took about a month. My wife's an engineer, so it was like, hey, you've got to listen to me — I think this will work.

Well, let's get down to that, because it seems to be a key differentiator in what you're trying to build versus what a lot of other people are doing. You said a data structure that encodes the passage of time. Can you elaborate on that a little more?

Yeah. So a lot of folks have actually been working on this since around the time I had that revelation, and it's called a verifiable delay function. Before that there weren't a lot of papers published on it, so I couldn't really find anything, but in 2018 a lot more papers got published. What we're actually doing is what I call a poor man's verifiable delay function. The technical term is: you have some mathematical process, some puzzle, kind of like a proof-of-work puzzle, that takes a certain amount of time to solve, but the proof is much, much faster to check. We have almost the same property, except our verification takes the same total amount of computation — it's just parallelizable, if that makes sense. So what we're doing is using this SHA-256 function, the same function that Bitcoin uses, and we're running it in a loop: the output is the next input. You run this thing on a single core as fast as you can, and because it's this recursive loop, you can't parallelize that process, right? So if I tell you, here's a bunch of samples of this thing running for an hour, you know that I spent some amount of time doing it. Maybe with a slightly faster single core it takes forty-five minutes, or with a slightly slower one an hour and a half, but you know I actually spent real time to do it, no matter how much money I have, because I can't go buy a million cores and make it a million times faster. I can really only, you know, super-cool it, or get a really, really fast ASIC from TSMC and super-cool that, and even then it'll only give me maybe a two to three x speedup, because it's bound by the physical limits of electrons passing through this single circuit. But what do we do to verify it? You take all those samples to a modern GPU card — NVIDIA has been, like, super successful at scaling these things — and you have about four thousand cores.
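To make that loop concrete, here's a minimal sketch of the recorder as Anatoly describes it — a single thread hashing SHA-256 over its own output, recording periodic samples. This is a hedged illustration, not Solana's actual code: the `sha2` crate usage is real, but the struct names and sample interval are assumptions.

```rust
// A minimal sketch of the "poor man's VDF": SHA-256 run in a loop,
// each output feeding the next input, with periodic samples recorded.
use sha2::{Digest, Sha256};

#[derive(Debug, Clone)]
struct Sample {
    count: u64,      // how many hash iterations had elapsed
    state: [u8; 32], // the hash-chain state at that count
}

fn generate(seed: [u8; 32], iterations: u64, sample_every: u64) -> Vec<Sample> {
    let mut state = seed;
    let mut samples = Vec::new();
    for count in 1..=iterations {
        // Each round's input is the previous round's output, so the loop
        // is inherently sequential: it encodes real elapsed time.
        state = Sha256::digest(state).into();
        if count % sample_every == 0 {
            samples.push(Sample { count, state });
        }
    }
    samples
}
```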
So if you sample this four thousand times a second, you can verify a second's worth of hashes in about a quarter of a millisecond. So for practical purposes we have a practical VDF, which is very secure because it's based on SHA-256, and very cheap to verify if you have a modern processor available.

What do you mean by "sample" in that context?

Let me make a key differentiation here first. When people think of proof of work and the timing associated with block times, that is a statistical average of the amount of computation it should take to find something within a random sampling, right? On average, across a bunch of different tries, you'll converge to a given amount of time. And what happens with proof of work is that you tune the difficulty to hit a certain target time: every so many blocks, Bitcoin will readjust the difficulty to aim for ten minutes of computation time, based on how long it took to solve recent blocks. But in reality you have variation around this, so you could end up solving a block way, way less than ten minutes in, if you're lucky, and that's normal — that's just part of the distribution.

What you're talking about here is not that. It's not a statistical thing. It literally takes this much time to run SHA-256 given so much CPU power, and since we have a pretty good idea of what single-core processing power can be, you have a pretty good idea of how much computational work and time it's going to take. Is that right?

Correct, correct.

Yeah, but I still don't understand. When you brought up the part about the single, linear progression of this SHA-256 over and over again, we have an idea of how long that'll take — I get that. But I didn't understand: we have GPU cores, there are like four thousand cores on those or whatever they're called, and I guess we're running that same function in parallel. Is that correct? So what does the word "sample" mean there? What is the advantage of doing it that way?

So it's because SHA-256 is preimage resistant, right? You have no way to predict the output. That's why we can do this single-threaded process and guarantee the time. But that also means there's no way for us to verify it any faster than by just running it again. So imagine you have the single core, right, a single circuit, running as fast as it can, and you simply record the number of iterations that have elapsed — like, it's run the circuit a million times, two million times — and the current state. These samples you record are data. So now take this data structure; let's say it's sampled a thousand times a second. You can take the start and end of every sample and run each one in parallel on a different core. The cores finish, and if there are no errors, you have your guarantee. It amounts to checkpointing as you run through this circuit: between every checkpoint you have a start and a finish, and you can parallelize that process.

Now, the folks doing VDF research — Dan Boneh and a bunch of other researchers — are building much more sophisticated approaches that use, you know, like 2048-bit square moduli to build a mathematical function that takes real time to generate but only polylog time to verify, so the verification speedup doesn't require this GPU parallelization. But all those constructs right now have some very funky cryptographic tradeoffs that are hard to deal with, and the hardware isn't ready yet. From our perspective, we're not religious about our approach: as soon as that hardware is ready, we'll switch to it. But SHA-256 — for me, as an engineer rather than an academic researcher — I can understand it, it's very secure, and it's very easy to work with, because Intel and AMD both ship SHA-256-specific instructions; they can both do a single round of SHA in about 1.75 cycles. It's probably the most optimized function in the world right now, thanks to Bitcoin. So we have a pretty good idea of how fast this thing can get, and therefore, for somebody to attack the network, it would require a much greater investment in resources and something new.
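Here's what that checkpointed verification might look like in code: each gap between consecutive samples is replayed independently, so thousands of cores (a GPU in Solana's case, a thread pool here) can check an hour of sequential hashing in a fraction of the time. A hedged sketch — the `rayon` thread pool stands in for the GPU, and it reuses the illustrative `Sample` type from the sketch above.

```rust
// Verify the chain by replaying each inter-sample segment in parallel.
use rayon::prelude::*;
use sha2::{Digest, Sha256};

fn verify(seed: [u8; 32], samples: &[Sample]) -> bool {
    // Treat the seed as iteration 0, then pair each sample with its predecessor.
    let mut points = vec![Sample { count: 0, state: seed }];
    points.extend_from_slice(samples);

    points.par_windows(2).all(|pair| {
        let (start, end) = (&pair[0], &pair[1]);
        let mut state = start.state;
        // Replay only this segment; all segments run concurrently.
        for _ in start.count..end.count {
            state = Sha256::digest(state).into();
        }
        state == end.state
    })
}
```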
Well, to understand what those attacks look like, I'd like to understand a little more about how it forms consensus using this clock. So you have this — it's kind of weird — I assume every node is running this. They're all running the same thing, but not necessarily at the same time; in other words, they're not running the exact same computation at the same moment. So it's got some kind of, like, asynchronous clock property. Is that a weird thing to say, or does that make sense? And how do you use that to form consensus?

So what we do with this thing is: every validator in the network runs a single CPU core doing this process, this SHA-256 process. And what's interesting about it is, if I take one of these samples and write a message — right, I take a message and I use that sample in the message, it's just data — and I sign it, that guarantees the message was created after that sample was generated, because with SHA-256 I can't guess those numbers ahead of time. It's preimage resistant; it's a strong cryptographic property that the numbers are unpredictable. So if I reference it, it's as if I took a picture of myself with today's New York Times: now everybody knows the picture was taken after that edition of the New York Times was published. Does that make sense?

All right, yeah. Do you have to keep a long record of these? You only have to keep a record of the checkpoints, right?
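In code, the "photo with today's newspaper" trick is just embedding a recent sample in what you sign. A hedged sketch using the `ed25519-dalek` crate — Solana does use ed25519 signatures, but these type and field names are my own illustration:

```rust
// Prove a message was created *after* a point in the hash chain by
// signing over an unpredictable, recently generated sample.
use ed25519_dalek::{Signature, Signer, SigningKey};

struct TimestampedMessage {
    payload: Vec<u8>,
    poh_count: u64,      // which tick of the chain is referenced
    poh_state: [u8; 32], // the unguessable state at that tick
    signature: Signature,
}

fn tag_and_sign(key: &SigningKey, payload: Vec<u8>, sample: &Sample) -> TimestampedMessage {
    // Sign payload || count || state so the reference can't be swapped later.
    let mut bytes = payload.clone();
    bytes.extend_from_slice(&sample.count.to_le_bytes());
    bytes.extend_from_slice(&sample.state);
    TimestampedMessage {
        payload,
        poh_count: sample.count,
        poh_state: sample.state,
        signature: key.sign(&bytes),
    }
}
```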

I want to try and rephrase what you just said the way I'm picturing it in my head. Everyone who's participating in the network — I guess validators — will receive a message, and they're basically all running the same VDF, all running this single-core SHA-256 loop, so they're getting hashes out over time. And when they receive something they would like to validate, they then reference the current hash, which will always be part of whatever gets submitted to the canonical blockchain that people reference later on down the line.

Yep. So the part where, if I reference one of these hashes, that guarantees the message was created after — that makes sense, right? Okay, so now imagine I take a bunch of these messages and hash them, and then append that hash into this process, and record that at count one million I had all these messages and inserted them into my single-threaded, single-CPU thing, just by appending them to the current state — and I record that event. Now the SHA-256 stream has been modified in an unpredictable way, and that guarantees that all those messages were created before that modification occurred. Does that make sense?

Yeah. So at the end of the day, who actually gets to submit transactions to the blockchain? Because right now, with proof of work, everyone's fighting to solve the giant, you know, SHA puzzle, and then they get to submit things, and whatever they end up submitting becomes all the transactions that they validated. Is there a similar situation where everyone's vying to win a game, and the winner of that game then gives the rest of the community the messages that get included in the next block?

So, back in the early twentieth century, people figured out radios, and what they observed is that if you and I are on the same frequency and we transmit at the same time, we get noise. A collision, right? So what they did is they gave everybody a clock, and they said: you transmit in every even minute, I transmit in every odd minute, and then we don't have collisions. And the number of participants can basically grow with how well our clocks can be synchronized. That's basically the foundation of scaling wireless networks: the first thing we do is divide everything by time. Now they do much fancier things, where they divide things by frequency, and there's frequency hopping, but basically you have a common channel and you split it by time.

That seems like a problem, though, because you can only have so many chunks of time available for so many people trying to broadcast messages. So if I'm trying to submit a transaction, how would I do that in your network if the blocks of time are already allocated? Do I have collisions? Am I misunderstanding something?

So basically, block producers go round-robin, and we rotate who gets to be the producer as fast as we can — right now that's four hundred milliseconds. Every four hundred milliseconds we have a new block. As a client of the network, you submit a transaction and it will get encoded into a block fairly quickly — basically network delay.
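That "append a hash of the messages into the stream" step — the happened-before half — is what the current block producer does with the transactions it collects during its 400 ms turn. A hedged sketch, continuing the illustrative types from above rather than Solana's actual implementation:

```rust
// Prove messages were created *before* a point in time by folding their
// hash into the chain itself; afterwards the stream differs in a way
// nobody could have precomputed.
use sha2::{Digest, Sha256};

struct Recorder {
    count: u64,
    state: [u8; 32],
}

impl Recorder {
    // An ordinary tick: hash the previous state.
    fn tick(&mut self) {
        self.state = Sha256::digest(self.state).into();
        self.count += 1;
    }

    // Fold external data (e.g. the hash of a batch of transactions) into
    // the state, returning the (count, state) pair that timestamps it.
    fn mix_in(&mut self, messages_hash: &[u8; 32]) -> Sample {
        let mut hasher = Sha256::new();
        hasher.update(self.state);
        hasher.update(messages_hash);
        self.state = hasher.finalize().into();
        self.count += 1;
        Sample { count: self.count, state: self.state }
    }
}
```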
Oh, okay. So, in a frequency-spectrum way of thinking about it, this particular, you know, modulo of time is given to this particular validator — these four hundred milliseconds. And the people participating in the network know who that is, either in advance or by polling the network?

Correct, yeah — everyone kind of knows the schedule ahead of time.

Is there a sacrifice of liveness, then? Because if that node goes down, what happens to the network?

So here's the interesting thing: because we have this VDF, what the next node does is simply submit a proof that it waited long enough to get to its block — right, to its slot, to its hash count. Even if the previous node is down, there isn't this classical timeout where everybody waits for messages and then times out. Whoever gets to submit their block first, with the appropriate proof that they reached their required height, that's the one that gets processed. And because everybody is staggered and delayed, if everybody's up and alive, everybody goes according to schedule; but as soon as somebody's down, the next validator just, you know, skips ahead.
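A hedged sketch of that idea: a stake-weighted round-robin schedule known to everyone in advance, where "is it my turn?" is answered by the local hash count rather than by a timeout. Solana's real schedule uses stake-weighted random sampling per epoch; the simple proportional expansion and the names below are my own illustration.

```rust
// Pre-compute who leads each ~400 ms slot; skip dead leaders by hash count.

#[derive(Clone)]
struct Validator {
    id: u64,
    stake: u64,
}

// Expand stakes into a per-slot leader list: a validator with twice the
// stake is assigned roughly twice as many slots.
fn leader_schedule(validators: &[Validator], slots: u64) -> Vec<u64> {
    let total: u64 = validators.iter().map(|v| v.stake).sum();
    let mut schedule = Vec::new();
    for v in validators {
        let share = (slots * v.stake / total).max(1);
        schedule.extend(std::iter::repeat(v.id).take(share as usize));
    }
    schedule
}

// No timeout needed: a validator's own VDF count tells it which slot the
// network has reached, so it knows when it may produce in place of a
// leader that never showed up.
fn may_produce(schedule: &[u64], me: u64, hash_count: u64, ticks_per_slot: u64) -> bool {
    let slot = (hash_count / ticks_per_slot) as usize;
    schedule.get(slot % schedule.len()) == Some(&me)
}
```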

Right, right. So what if somebody withholds their proof? I guess it wouldn't matter, because everybody in the network would have already advanced far enough. But what if, due to network delays or anything, this person did not get all the messages on the network — does that matter at all? It seems like you're sacrificing safety instead. Say I submit my message to my peer, and my peer sees it, and he's the current elected leader, so he's going to be the one submitting the message. Does that go out to all the peers? How does that work?

So the network, on top of this, is single-sharded — no sharding is our thing; we even have a podcast called No Sharding. It's a really fast replicated state machine. Every four hundred milliseconds we produce a new block, and if that leader's down, the next one can simply provide a proof that it waited to get to the next block. Whatever data arrives first, validators vote on it, and the network continues moving. So we don't sacrifice safety in the long term; what we have is asynchronous safety. As blocks are produced — because they're produced so quickly — individual validators don't really know whether the rest of the network received them. So they vote, then they observe the rest of the network voting, and they continuously increase their commitment to safety, and that grows exponentially. At the very tip of the chain you have the lowest commitment to safety, but it quickly grows, exponentially, to full. If that makes sense.

Ah, now I get it, thank you. I think the obvious question from there is: who is the validator set, and how does one enter and leave it?

We're basically a proof-of-stake network. The validators are whoever has enough stake; they get scheduled in a stake-weighted round robin. You simply submit a transaction that says, hey, I want to be a validator, here's some stake that's delegated to me, and the next epoch you get scheduled. Right now we're actually in this boot-the-network phase, basically the go-to-market launch phase. We're working with a bunch of folks from the Cosmos community and folks from, like, the EOS block producers. Cosmos, I feel, created this group of folks that are more like professional validators. It's been really amazing working with them and onboarding all of them, and we're doing dry runs, effectively — we do dry runs until we can't crash the network, and then it's mainnet, right.

Yeah, we're following their model. Our version of the Game of Stakes is called Tour de SOL — you know, a bunch of the folks are cycling nerds — and part of it is stress-testing the network and trying to demonstrate that this is the same hardware and software as mainnet: that people who are, you know, totally permissionless and decentralized can do this — four-hundred-millisecond blocks. Fifty thousand TPS is our target; we'll see if we hit it. And that should be a fast blockchain that will never have seventy-dollar transaction fees.

But what can you do on it? You have this linked list of, as of right now, ambiguous transactions. What can you do with a transaction? Can you make smart contracts? Where are your smart contracts? How does the state grow with respect to that kind of stuff?

Yeah. So my background is operating systems.
I actually worked on the OS called BREW, which ran on, like, every flip phone out there — if you had a CDMA phone, like a Motorola Razr. I was a core kernel engineer on that thing, and a bunch of the team is actually from that project; we're operating-systems and virtual-machine engineers. So our smart contract language, or bytecode, is the bytecode that's been part of the Linux kernel for, I think, almost a decade now, called Berkeley Packet Filter. It's designed for high-performance packet filtering, but it's now being used as a more general-purpose secure bytecode. This thing on a single machine can process sixty million packets per second on a forty-gigabit network connection; it's designed for really, really, really fast processing. There are even implementations of it running in hardware. So anything that you can compile through LLVM, we can execute, and we're using Rust and C as the native programming languages.
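For flavor, here's roughly what a minimal Solana on-chain program looks like in Rust today, compiled to that BPF target. Note this uses the modern `solana_program` entrypoint API, which postdates this conversation — treat it as an illustration of "Rust compiled to BPF bytecode," not what was shipping at the time.

```rust
// A minimal on-chain program: Rust source compiled to BPF bytecode.
use solana_program::{
    account_info::AccountInfo, entrypoint, entrypoint::ProgramResult, msg,
    pubkey::Pubkey,
};

// Declare the function the BPF loader invokes for each instruction.
entrypoint!(process_instruction);

fn process_instruction(
    program_id: &Pubkey,       // the address this program is deployed at
    _accounts: &[AccountInfo], // accounts the transaction allows us to touch
    instruction_data: &[u8],   // opaque input bytes from the client
) -> ProgramResult {
    msg!("program {} received {} bytes", program_id, instruction_data.len());
    Ok(())
}
```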

But we also just ported the Move VM, and our vision for this is that we can enable a bunch of virtual machines. You know, we're looking at adding SputnikVM, which is a really nice, clean Rust implementation of the EVM, but also stuff like Sapling — I think it would be really cool to run a Zcash virtual machine alongside as well. So from our perspective, we kind of have this really, really fast base layer that does consensus and low-latency blocks, and you can program it in this bytecode that's designed to be fast, and what you run in that bytecode is up to you. If you want to use a higher-level language like the EVM, which will take more instructions to execute than C or Rust, you can do so.

That's what your website is saying, basically: layer one is this stream of blocks, done the way you just described with the verifiable delay function, and you build layers on top of that. So you're decoupling the transactions, or the other virtual machines, from the consensus layer.

Yep. And there are definitely tradeoffs there, right? I think the folks that are working on making massively sharded systems are solving a really difficult computer science problem, and their goal is to let the smallest amount of computing power add resources into this, you know, mesh of computers. For us, we're depending on these more professional operators to run validators that have more hardware. The way I think of it is, we're bootstrapping the next Internet with a bunch of homegrown ISPs: people that know tech a little bit, that aren't intimidated by it, and can kind of host this thing at their local colocation space, hook up a bunch of GPUs for signature verification and a bunch of SSDs for storage, and make this thing really fast. But it's still totally open and permissionless, so people can enter at any time.

So this is something that I know is kind of a problem with a VDF, and I really want to hear you comment on it, because it sounds like your network might still suffer from it: the keeping-up-with-the-Joneses problem, a.k.a. the stale-validator problem. Like you mentioned earlier, you'd have to dump in a ton of computing power to outpace the network, right? But eventually your validators need to upgrade their stuff, and it seems to me like you really do require a heavy amount of metal just to make sure the system stays secure. You're not going to be running this on a Raspberry Pi, it sounds like.

Well, so, single-core speeds have been flatlined for a long time. So in terms of the VDF itself, it's going to be pretty tough for you to get a two to three x speedup without a large investment — you'd need liquid nitrogen rigs and cooling and stuff like that. Beyond that, you'd have to spend a big pile of money to tape out a whole new chip at TSMC on their latest fabrication process, and even then you might only get another two to three x, and two to three x is not enough to make a difference here.

So you're assuming the physical limits of increasing single-core speed — that these threshold steps of architectural change within cores are mostly behind us.
Like, it's not going to go much further after that.

Right. But what's interesting is that the number of actual cores — I think these things are called non-uniform memory access, right, NUMA cores — is growing, because you can just slap more cores on a wafer that have no data dependencies, and this is what NVIDIA does with the CUDA cores: they basically spread way more stuff on a single wafer every year, and that doubles every two years, in fact. So when we started, we needed four 1080 Tis to process a million signatures per second; right now one 2080 Ti can do a whole million. So from my perspective, what's actually happening is that the amount of parallel computational power is going to keep doubling, because the silicon wafers are going to get bigger, so they can just put more stuff on them. And the process is still shrinking, so things will get slightly faster and slightly denser, but it's the amount of silicon making up a chip that is going to continue growing. So, but the reality is —

Sorry, were you still going? I didn't mean to cut you off.

I thought you were done, I apologize. The thing is, this also kind of assumes a computational paradigm that I don't think is necessary. For instance, there's research into chips that are light-based, you know what I mean? They're using photons as their method of transferring data. And I'm not saying those are anywhere near where we are, but don't you think pinning your architecture on our inability to innovate on one particular computing paradigm, one medium of doing compute, is kind of not long-term safe?

I mean, that's hard to predict, right? Maybe quantum computing does something unexpected in twenty years. But for the next twenty years, I imagine the amount of compute in each validator is going to keep doubling, so the cost of doing fifty thousand TPS is going to shrink by half, and our capacity should double. The goal for us is to build a network that, if you have this normal spiky usage, can just handle it.

Yeah. I mean, also, you've got to put this into the perspective of the community you're serving. These networks are networks, right? If what you're doing serves the use cases built on top of it sufficiently, then good, okay, fine. The only problem with potential new paradigms in compute — which I don't think we really have to worry about, catching up to the limits we currently have with silicon wafers — is what they'd mean for consensus: whether or not they can take over the security assumptions of the base layer, and that doesn't seem to be the case. Like, imagine if we had neutrino-based communication and everybody in the world could be super-connected to everyone else. That solves consensus; we wouldn't need anything else. Right?

Yeah, but that's, you know, probably a thousand years away.

So where are you now? What's going on, and where do you expect to go over the next year?

So look, we're going to launch, and that's pretty scary. That means the protocol will be running, and there will be value in the underlying resource that governs, like, the Sybil resistance of this, you know, computer — and if we screw up, value might be destroyed. That's scary. Just as an engineer, you're always thinking about where things can fail, and building a network from scratch over the last year means you're going as fast as you can. So I'm always concerned about security: did we forget something? Did we not think of something? That, to me, is the biggest worry.

So what do you do to allow yourself to sleep at night? What steps do you take to give yourself stronger confidence that what you've done is at least sufficient for the community you're serving, and that you have processes in place so that if a problem does exist, it eventually shows up via the happy path and not the bad one?

Yeah. So, like good engineers, we do separation of function: any kind of cryptographic operation uses its own separate keys, and we try to limit the privilege of everything as much as we can. We're also doing a phased rollout.
So initially we're probably going to launch with just staking and consensus working. We can observe that, you know, it's out in the wild and it can't do much, but it's at least consistent and nothing is falling over unexpectedly. Then we turn on transfers, and then we turn on smart contract execution. Those are the responsible engineering things to do. But there's always something nagging in the back of my mind. We're obviously also doing an audit with a reputable security firm, but, you know, people have been working on Firefox and Chrome for decades and there are zero-days found there every month or two.

Well, now you've brought up two things, because I want to hear about your upgrade path and how that will affect things if you ever need one. But before we get to that, I'm actually kind of curious about your staking. What is the model you have for validators in the network to stake? Are there slashing conditions? How do they join? What is the lockup period? What is the staking incentive?

So we have a cooldown and a warm-up period for stakes — it's about two weeks — and the incentives are global inflation. As a validator, you join, you create, you know, a contract that participates in consensus — that's what defines a validator — and users who have stake can delegate their stake to you. And if your votes violate the rules — you double-vote, or produce two blocks for the same slot when you're the leader — you can get slashed. Initially our slashing is pretty lightweight; it's like five percent, and there's no liveness slashing or anything like that. But eventually the goal is to have one hundred percent slashing, right, because slashing is really what defines the security of the network. To get there, though, we need some soak time for the software and the operations, for the validators. Sorry, go ahead.

So why is slashing so central to the security of the network? I mean, do you actually need to slash, or can you just say: for that particular period, you're just not going to get what you were going to get? Like, you have to be honest to get your rewards, and if we find that you're not, then you're kicked out of the validator pool.

So, honestly, one of the worst punishments you can impose on validators is forcing their clients to redelegate, right? Because that means I have to go reach out to everybody again and tell them: go run this command, run the software, re-stake. And that's pretty tough — that requires human time frames and communication. But slashing is ultimately what puts capital at risk, and that's what defines the security of the network. If you don't have anything at risk, then the security is very weak. And that's true about everything, Bitcoin included. You know, if the difficulty is low, would you trust a billion-dollar — I don't know if you guys noticed, there was a recent transaction that put a billion dollars into one account. That's the security of Bitcoin.

And with proof of work, it's the total capital put up front to try and mine its blocks, which then gets embedded into the recreation of those blocks if someone wanted to reproduce them.

But hold on, I've got to respond to that one — I'm sorry, Corey. I think that actually has more to do with the number of validators in the network and the fact that you basically have a leader-election system. It's not technically leader election in the classical sense, but you've allocated this particular frequency band — I don't know exactly what to call it, this clock range — to a specific validator, and they can cause havoc in that particular period if they're Byzantine. With different mechanisms for consensus, you don't necessarily need that sort of leader to do all the transaction ordering. So I think it's inherent to the way you designed the protocol that you need slashing, because you have one person making a decision at any given time.
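The slashable offense Anatoly names — double-voting, or producing two blocks for the same slot — has the nice property that the evidence is self-contained: two conflicting signed statements. A hedged sketch of what checking such evidence could look like (illustrative types; signatures assumed to be verified elsewhere):

```rust
// Double-vote evidence: two votes from the same validator for the same
// slot but different blocks prove a violation by themselves.

#[derive(Clone, PartialEq, Eq)]
struct Vote {
    validator: u64,
    slot: u64,
    block_hash: [u8; 32],
    // signature omitted; assume both votes already verify against `validator`
}

/// True if this pair of votes is slashable evidence: same validator and
/// slot, conflicting blocks.
fn is_slashable(a: &Vote, b: &Vote) -> bool {
    a.validator == b.validator && a.slot == b.slot && a.block_hash != b.block_hash
}
```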
So there are a couple of protocols that are not proof of work and that do without slashing, like Algorand and, I think, Avalanche, if I'm not mistaken, and I honestly think that's a mistake, because imagine —

Shots fired! Yeah, I think I got exactly where I wanted to go with this one.

It's an interesting choice to make, and a controversial one, and it's awesome that people are exploring it. But the way I think of it: take that one-billion-dollar account on Bitcoin. If somebody published the private key to it right now, that could basically stall Bitcoin for, I don't know, a hundred days, because that's roughly how long the inflation from block rewards takes to add up to a billion dollars — and miners are not really picking the heaviest fork, they're picking the fork that gives them the most money, right?

Yeah, assuming they're intrinsically greedy, or self-serving.

So if you have a protocol without slashing, imagine a trillion dollars of volume flowing through it — and that's, you know, less than one percent of the gross payment volume in the world.

So that's one of the key differences between proof of work and proof of stake: if you have to stake a resource that's external to the system, then you can never do slashing, and that's how proof-of-work systems work — you're putting up an external resource up front for the chance of being a block producer, which then lets you put a real-world value on the assets created from that stake. With proof of stake, you have the ability to slash, because you're putting up an internal resource, namely the token on the chain itself. But then where do you get an idea of what the value of that resource should be?

So with proof of work, I think the volume capacity of a network is obviously at least the inflation reward, right? If the network generates twelve million dollars a day in block rewards, then twelve million dollars a day of volume — the thing can probably handle a little more, but intuitively that makes sense. But if you have a trillion dollars of volume a year going through a network, how do you secure, like, you know, Algorand? I bribe a bunch of, you know, BGP engineers, so I have a bunch of nodes out there, I create a partition and I create two blocks, and I also short ALGO at the same time — and the companies that depend on that trillion dollars' worth of payment volume are the ones that get hurt.

I just want to point out — I mean, I have to defend the honor here a bit. I think the real cause of that issue is something else: the fact that you need slashing conditions is more inherent to situations where you basically have deciders — either a group of them, or a single decider that's elected, or mines its way in, or something like that. It feels to me like when people are able to make decisions on their own, then you need to be able to slash them. But if they don't really have to make any decisions — they just have to say, "this is what I know," and the world kind of comes to consensus around that — then you don't need to penalize them for saying something wrong. You just need them to get in line with the final state. In that case, you're on a purely incentive-based model instead of a penalty model. And I'm trying to keep an open mind — I have to be careful, because I don't want to be a shill for the company — but I think there's something to that, you know what I mean?

Yeah, no, I agree. So for us, it doesn't even make sense to slash you for making mistakes at the tip of the spear, right at the front of the ledger, because the network isn't settled yet and it doesn't really matter if you mis-vote or miss producing a block. It's actually just way more work to construct the state machine to allow for not slashing there, but slashing later on, if you're trying to be malicious and trying to trick an exchange, right — accept a bunch of fake money through a partition — because obviously the time frames there are much longer and you're creating a much longer fork.
So separating those out — in terms of what the actual attack vectors are and who's executing them — I think that's the security analysis that needs to be made.

Okay, kind of curious then: you have, like you said, the stake as your security, and the distribution of that coin has a lot to do with who has put up the amount associated with it. But forget all that — you have one security model associated with how much stake validators have. I'm kind of curious about the real-world costs associated with being a validator. It seems to be relatively expensive: if I'd like to be a validator, not only do I have to put up a substantial amount of money, which I have to be willing to lose if I misbehave, but I also need a substantial amount of physical hardware to keep up with the network, etc.

So the hardware is actually fairly cheap, because of NVIDIA: a single 2080 Ti, somewhere under a thousand bucks, can process a million ed25519 signature verifications per second. And your modern-day PCIe bus, you know, can handle quite a bit of bandwidth. So there's quite a bit of hardware in the machine — it's about five thousand dollars, and that's the recommended setup.

So it is more of an investment than people running demos on a Raspberry Pi or a laptop. But I think the reality is that we have this professional set of validators coming out of Cosmos, and the block producers from other networks, but there are also the people that are hobbyist miners. They typically run, you know, a rack of GPUs, and they have a fast connection because they want their blocks to be propagated at a decent rate. So there are actually quite a few folks in the space that are already way, way more than qualified to run a validator.

And on top of that — so you have the validator set set up, and the creation of the blockchain is done in a safe way. What about people who just want to use the network and run nodes that gather information? What type of hardware do they need? Is it a simple Raspberry Pi, a mobile phone? Do they need a PC? Do they need to rely on someone else to give them data and hope they do a good job of it?

So the common approach to this stuff is using light clients. You generate a proof of an event that occurred on chain, and that's really the foundation of, like, inter-blockchain communication, and that's basically what we're building as well — between networks themselves, or for, you know, people that are just using a wallet. They don't need to run a full validator; they can simply get a proof that the network is doing the right thing, or that somebody should be slashed.

So what about upgrades to the network? Is there any advantage to, say, a new GPU coming out? There's a hardware-upgrade side of things: do validators have to keep up with the Joneses if they're a professional validator on this network and NVIDIA comes out with an architecture that still parallelizes but can do a tremendous amount more? It sounds like it just benefits the network, because you can run more checks.

Correct. And here's the interesting thing: people can deploy on the cheapest hardware that can run the network, but we can price transactions based on the most optimal capacity of the protocol. So imagine we did a really good job as software engineers and we could actually handle a forty-gigabit-per-second network — that takes a big pile of work, right, not going to happen this year, maybe in five years if we just had the time to do it. That means we could price transactions as if we're handling twenty million transactions per second, because as soon as there's demand for that, it's ridiculously easy to scale up your hardware. You just punch in some commands, and all of a sudden you have a data center at whatever cloud provider you want, and if that capacity is sustained, then you go invest in racks at a colocation space or whatever. But that's the extreme use case where all of a sudden we are the financial fabric of the world and we're handling twenty million transactions per second — that would be insane, right? So the network only needs to keep up with demand, but we can set the price based on the capacity of what can be achieved with available hardware.

Why would you want to do that, though? Because that seems to undersell. I'd imagine the incentive for validation is going to be based on transaction fees.
Oh wait, no — you said it's inflation, actually, right?

Yeah. I'm not sure transaction fees are a long-term sustainable model for any network.

So fees are only a form of spam protection, versus an actual incentivization scheme?

It is definitely an incentivization scheme if we're at capacity, because then, let's say, validators get a fifty to two hundred percent margin, which is totally fine. At those twenty-million-TPS numbers the network is making billions a year, right, but it's justified, because at those numbers it's doing something crazy — it's handling a huge share of financial transactions. I don't know if transaction fees in the long run can do it, because if you have a low-volume network that's handling, you know, a billion-dollar transaction, like on Bitcoin, why are the validators only earning sixty cents or whatever for securing a billion dollars' worth of assets?

So to me there's this misalignment of incentives that will need to be corrected, and I'm not a hundred percent sure how it will actually resolve itself. But that's not a problem unique to us; I think it's apparent in Bitcoin as well.

I'd like to move into a somewhat different subject that's tangentially related. None of this matters if no one uses Solana, right? We have so many different projects trying different schemes to find the thing that's going to scale to what it needs to, to let people use it basically as a universal chain. What's your goal for the community you'd like to see use Solana, and how do you get people to come join and build on top of it to create the value you're trying to get?

So I'm, like, insanely bullish on the space. I think we have this huge opportunity to just get rid of ads from everything that we do. If we could just do that, there's plenty there to sustain everyone. If you look at how much revenue payment processors make, it's, like, two trillion dollars a year, and the entire Google and Facebook ads ecosystem makes about one-fourth of that. So there's a lot more money on the payments-processing side. And if we can start building silly things — like, you know, a Reddit where karma is a token and you can like each other's memes, right, or a search engine where, instead of ads, you're micropaying for the stuff you read — and it becomes fast and flawless and kind of transparent to you, I think we can start building products that have massive numbers of users and that are self-sustaining without stealing your data. To me, that's an enormous opportunity for the next phase of the Internet.

And again, I was a teenager in the nineties and I saw the hockey stick of the Internet. We had, I think, forty million users on the Internet globally in ninety-six, and that's about how many wallets there are right now. And I think only a hundred thousand wallets have more than 0.1 bitcoin — it's a pretty small market right now. But if it doubles, right — if it actually doubles every year — when we hit those two, three hundred million self-custody wallets out there, you'll start seeing homegrown, crypto-only phenomena happening: like the Friendster of crypto. I don't know if you remember Friendster, right? It had, like, six degrees of friends; it was, like, the coolest thing. And they had a scaling problem, a computer science problem — they had to recompute the graph every time — and that's why it died.

I did not know that about them; that's interesting. And it's funny you brought up the wallet count versus the Internet user count from the nineties. I remember those days, because I was also a teenager then, and I would say that the big leap forward for the Internet was — while we did need improved technology, better browsers, etc., and it all kind of started coming together — the biggest leap was the infrastructure. Yeah, broadband.
Once broadband started getting adopted, and people were using their cable wires — the little copper wires through the house they were already using to watch their favorite TV shows — to also browse the Internet at a whopping two megabits per second, up from the 56K they were doing before, suddenly things like YouTube became possible, and that's when things started getting interesting, in my mind. And that didn't happen until the early 2000s; I think YouTube appeared around 2005 or so, right? Anyway, the point is that it's an infrastructure play right now. That's my feeling about where the world is, and I think you're headed in the right direction. So congratulations on building this stuff, and thank you very much for coming on and talking with us about it.

Yeah, for sure, this was super fun.

How do people learn more? How do they get in contact with you, Anatoly?

So, go to solana.com. There's a Discord group where all the engineers hang out; we have a Telegram channel; and, you know, the podcast, the No Sharding podcast. So please check out all of those.

Yeah — we actually had, I think, the NEAR protocol guys on an episode. They're awesome. So, sharding is a lot of work;

maybe it's needed for other things, but it's not required for scale. All right, cool. Thanks for coming on. Cool, awesome.
