Hashing It Out

Episode 63

Hashing It Out #63 – Nano – Colin LeMahieu

ABOUT THIS EPISODE

Nano is providing a new approach to fee-less, pure payments networks. We are privileged to speak with Colin LeMahieu, founder of Nano, to discuss their DAG-based, voting consensus protocol, their account-chains ledger model, and NanoPoW, their memory-hard proof of work that replaces fees in the network. Interesting stuff, and definitely a great conversation.

Links
https://nano.org/
https://twitter.com/nano
https://github.com/nanocurrency
https://docs.nano.org/
https://medium.com/nanocurrency

Welcome to Hashing It Out, a podcast where we talk to the tech innovators behind blockchain infrastructure and decentralized networks. We dive into the weeds to get at why and how people build this technology and the problems they face along the way. Come listen and learn from the best in the business so you can join their ranks. Hey guys, welcome back to the show, Hashing It Out. As always, I'm your host, Dr. Corey Petty, with me as my trusty cohost, Collin Cusce. Say hello, everybody, Collin. Hello, everybody. Collin, nice, I was reading your lips; you were basically saying it as I was saying it. I actually did that; I wanted to see if you kept the pacing. I've got good rhythm, what can I say? Today's episode we're going to talk about Nano, the Nano blockchain, and we have on Colin LeMahieu, the founder, to help dive into how it works, why it works, where it's going, how it's been, etc., etc. So, Colin, why don't you do the main thing: give us a quick introduction as to how you got into the space, and then we'll just start talking about Nano. Yeah, yeah, hey everyone, it's good to be here. So I got interested in cryptocurrency probably in about 2009 or 2010, when I heard about Bitcoin. I just kind of left it on the sidelines for a while, sitting in the back of my head, and then a few years later I revisited it, kind of wondering where the whole industry was, and it didn't seem like there was a whole lot of adoption. So I kind of looked at it from a technical standpoint, seeing, you know, if I could figure out what it would take to kind of get digital money out there, and it seemed like there were a couple things I could contribute. So I worked on that for a while, and then, after some months of planning, I decided to start on what was RaiBlocks at the time and worked on that for a couple years by myself, and then we started a team in late 2017 to kind of bring the protocol forward. So we changed it to Nano, and that's what we've been working on ever since. What's your background? Why were you able to provide technical insight and maybe innovate in that way? Yeah, well, I am a software engineer, and I was working at Qualcomm as a compiler engineer, an assembler engineer, so I kind of have this, like, embedded, performance-tuning type of background in computers, and that's how I got it from a technical standpoint. And then, you know, building a currency: I've always been interested in economics. I've never been through a university program on it, but it's always fascinated me, kind of figuring out, you know, macroeconomics, microeconomics, and then currency is its own kind of beast. So it was fortunate that, you know, I had the performance background and then the interest in economics to kind of put that all together. That's definitely, I think, a similar story for a lot of the engineers in the space: they have some type of technical background, usually in computing and software engineering, and then they had an interest in money. Maybe they didn't know they had an interest in money; they get introduced to Bitcoin and then it's, oh shit, I kind of like the idea of money. Yeah, at least, like, the concept of how it works and how you use it and what it is. And then, you know, the standard trope of down the rabbit hole you go. Yeah, exactly, and it's the economics and kind of the politics around money.
One of the underlying things of how we designed Nano is to not have, you know, monetary policy inside it, to be non-inflationary, and those are things that I would like to see in a currency that I use. So it was cool to see, you know, Bitcoin started off with having fixed parameters on that, and, yeah, I mean, I thought that was a good idea. It needed to be done, because people losing their money to inflation and all that stuff is, I think, very unethical. So that standpoint mixed in to pique my interest. Let's talk about, so, yeah, let's talk about what Nano is. I did go through it a little bit and saw some keywords that kind of jumped out at me. One of them is dynamic proof of work, and I was wondering if maybe you could go over, like: what is Nano? What is your key innovation? How do you differentiate from existing systems?

Yeah, so our key differentiator is our focus, the use case that we focused on, which is to be purely a currency. So we're not adding, like, smart contracts to it; we're not doing any of the small things that move it away from the concept of currency. You know, the Nano Foundation that runs this isn't a business; it's a nonprofit that kind of develops this. So our key differentiator is that we have very, very fast confirmation times on transactions. That was one of the things I identified when I was kind of doing research way back in the day: in order for this to be usable by people in a day-to-day thing, it needs to be roughly equivalent to what they have right now, and hopefully faster. And unfortunately it was neither of those; it's actually slower than, like, me swiping my credit card. So, just from a user experience standpoint, I thought that was never going to go over very well, or you have to have other things in order to cover it, like second layers. So that was one of them. Our transaction speeds are less than half a second, generally, so they're incredibly fast; you can barely notice them when you try to pay for things. And that's, like, confirmed? Right. Yeah, that can't change. Fully confirmed? Yeah, fully confirmed on-chain. So there's not a second layer, it's not a zero-conf; it's done. And we get that by our internal system. It's a voting-based system that generates the consensus, so it's just sending out network traffic, and that's why we can get really, really fast confirmation. So the other thing that I thought was going to be a problem was the amount of fees inside the networks. And it kind of goes back, again, to why would anyone want to use a cryptocurrency instead of what they're currently using? It has to be better, or at the very least pretty much similar. So the fees, I thought, were going to be an issue in the network. It's a bad user experience, people having to type them in, to figure out what this fee means, like how much; if I ask my grandma to pay me something, how is she going to calculate a fee? It was a very bad user experience, so we're trying to eliminate that too. So we don't have any in-network fees, which is another key differentiator with pretty much everyone. There's a couple other, you know, coins out there that are fast, I don't think they're quite as fast, but there aren't other coins out there that don't have fees in-network, and that makes our user experience very, very simple. So I'm assuming, since you don't have fees, you're doing something like either a VDF or proof of work on a per-transaction basis. Is that the case? What do you do? Yeah, you nailed it. It's not a VDF, but it's a small proof of work that you attach to each transaction. That is the throttling mechanism, just like a Hashcash type of thing. Yeah, and we're actually today, as of when this is being recorded, going to be releasing some more information on some research I've been doing on that. Oh, cool. Want to talk about that now? Yeah, we're ready to go, like, whatever. This is actually stuff that we've looked into here; it's something that some of our plugin systems could maybe implement, using proof of work as a fee-less system, and it does seem to make some sense, but it also, you know, seems to have the downside of: is it actually a fee? Is it equivalent to a fee?
Does it actually provide that, especially since, you know, you can't really know what system is going to be using the particular portion of your network? You know, like, how can you say that an IoT device with, like, a Raspberry Pi equivalent is the same as somebody with a graphics card? So I'm wondering, what do you feel about that from a proof-of-work standpoint? And maybe you could get into some of the stuff that you've researched on it. I think before we do that, I want to get a broader understanding, at least for the audience, of some of the architectural differences of Nano, because it's a DAG-based system, which allows for a lot of these things. Some of the consensus mechanisms that are around this, particularly yours, only work in the context of using the right data structure, and a DAG is what you use, which allows for that. Let's talk about that. Yeah, yeah, that's a good point, we can talk about that. So you're right, we do have a DAG system. That's one of the changes that I made compared to a traditional blockchain, where everyone's kind of fighting for the front in order to get their transaction put into the front of the list. So we have one chain per account: each account has its own chain, you chain your own transactions together, and all the ledgers on all the nodes in the network track all these chains as they move on. So actions in my chain are not affected by actions in somebody else's chain.
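To make the account-chain idea concrete, here is a minimal sketch in Python. The names here (Block, Ledger, the payload field) are illustrative assumptions for this example, not the actual Nano node's types; Nano does use Blake2b hashing, but the block layout is simplified.

```python
# Minimal sketch of the account-chain ("block lattice") idea: every account
# has its own chain, and a block only ever references the head of its own
# account's chain, so accounts never contend with each other.
from dataclasses import dataclass
from hashlib import blake2b

@dataclass(frozen=True)
class Block:
    account: str    # account this block belongs to
    previous: str   # hash of the previous block in this account's chain ("0" if first)
    payload: str    # simplified stand-in for send/receive details

    def digest(self) -> str:
        data = f"{self.account}:{self.previous}:{self.payload}".encode()
        return blake2b(data, digest_size=32).hexdigest()

class Ledger:
    """Every node tracks every account's chain; appending to one chain never
    touches any other chain, which is what removes the contention."""
    def __init__(self) -> None:
        self.heads: dict[str, str] = {}   # account -> hash of that chain's head

    def append(self, block: Block) -> None:
        head = self.heads.get(block.account, "0")
        # A block that doesn't build on the current head is a fork (a
        # potential double spend) and would be settled by voting instead.
        if block.previous != head:
            raise ValueError("fork: block does not extend the account's head")
        self.heads[block.account] = block.digest()
```

Only the account owner can sign blocks for their own chain, so two different accounts can append concurrently without ever racing for the same spot in a global chain.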

And this kind of goes back to the computer science design behind it. If you have an unlimited number of people trying to put their piece of data into one spot, that's an enormous amount of contention, and that's why, you know, you have the validators, the miners, that kind of solve that contention problem. But it takes time to solve contention. If you can eliminate contention entirely, you can kind of skip that entire problem and then you don't have to worry about it. So we eliminate contention by making one account per chain. The only person that can add to your chain is yourself, so there's no contention as long as, you know, you're maintaining your own chain. You can kind of mess it up, and then the network will have to fix it if you try to; this is our double-spend problem. So if somebody messes up their chain and doesn't make it a one-follows-another order, then the network will, like, force it one way or the other. But absent doing that, you can just go on and on and on by yourself, making your transaction line, and it'll never be interfered with by someone else. And at that point of contention, what happens there? Like, how do you keep people honest in validating their own work, especially when a currency is inherently a two-party system? So, like, if I maintain my own chain, how do I ensure that the other person is updating theirs appropriately, or that they're sure that I'm updating mine appropriately? Yes, so we, as I alluded to earlier, have a voting-based system to show consensus on it. When you send out a transaction to the network, it gets flooded out, you know, pretty standard, pretty simple, and there are nodes on the network called representatives, and they're the ones that are voting on these transactions, and nodes on the network accumulate these votes and count them up in order to make sure that a particular transaction that they've observed has been observed and confirmed by everyone else in the network. So this whole process is the thing that happens in the less than half a second: the votes go out, they get counted, and then it's confirmed, and it can do that very, very quickly. So usually the next question people have is, well, who are the representatives? If this is a decentralized system, that needs to be done in a decentralized way. So the way that we designed that was balance holders in the network. If you have a balance in an account on the network, part of your account state is a representative of your choosing, and that representative can vote with your weight, but they can't spend the balance that you have in your account. So you can go offline and the representative stays online and votes on your behalf. So you have, basically, a bunch of people who maintain availability on the network to perform votes, and people delegate their weight, and their weight is basically the amount of money they hold, to those, basically, watchers for voting. Yep, yep, that's exactly how it is, and you can reassign that vote weight at any time just by doing another transaction for yourself on the network. So if you don't like a validator, you know, you can pick someone else. If there's some sort of problem, you know, like this validator's gone offline for whatever reason, either the company's gone or maybe there's some sort of regulatory issue that's new and now they can't run it anymore, you can reassign it to any other representative that you want within, you know, half a second.
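Here is a toy tally of the delegation-and-quorum scheme just described: each account names a representative, the representative votes with the sum of the balances delegated to it (which it cannot spend), and a block confirms once the voted weight crosses a threshold. The balances, the 50% threshold, and all names are made up for illustration; Nano's actual quorum parameters differ.

```python
# Toy vote tally weighted by delegated balance.
balances = {"alice": 40.0, "bob": 35.0, "carol": 25.0}
representative_of = {"alice": "rep1", "bob": "rep2", "carol": "rep1"}

def rep_weight(rep: str) -> float:
    # a representative's voting weight is the balance delegated to it
    return sum(bal for acct, bal in balances.items()
               if representative_of[acct] == rep)

def confirmed(votes_for_block: set[str], quorum_fraction: float = 0.5) -> bool:
    total_weight = sum(balances.values())
    voted = sum(rep_weight(rep) for rep in votes_for_block)
    return voted > quorum_fraction * total_weight

print(confirmed({"rep1"}))   # rep1 carries 65 of 100 weight -> True
print(confirmed({"rep2"}))   # rep2 carries only 35 of 100 -> False
```

Reassigning your representative is just another block on your own chain updating the representative field, which is why it can take effect as quickly as any other transaction.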
So why would someone do this if there are no fees in the network? Like, what's their incentive? Yeah, so I get this question a lot, but you kind of have to back it up one more step: what is someone's incentive to use the network whatsoever? And the incentive to use a cryptocurrency network is that it solves your currency problem better than any other thing. So people want to use Nano because it is fast, and there are no fees, and you can send money to anyone anywhere in the world at that speed; that is eliminating expense on a company's balance sheet. Wire transfers cost money, bank accounts cost money and time, and they're slow; there's float risk. So this is all a savings to them. So when you count the amount of money that it takes to run a node, which is on the order of forty to sixty dollars a month, that's a trivial amount of money to run a validating node. And plus they get advertising out of this. Not advertising revenue, they get advertisement out of it: there's a top list of, you know, the top validators, the top-weight representatives, and if you're on that list, you know, when people go and look at the representative list, your name is going to be shown on there. So you kind of get some brand recognition as a contributor to the, you know, community, and, you know, people click and go to your site, you get some click-through traffic.

So that's why people have done this so far, and for forty bucks a month, to be on a list that the entire community will see whenever somebody wants to look at it, is a very, very low cost for the ROI. And the last main point to this, or, I guess, hole that I see, that hole being my ignorance of the perspective of Nano, is the digital scarcity part. What are the token economics of Nano? Is it just, like, a flat inflation rate over a period of time? Is it, like, a geometrically decaying series like Bitcoin? No, it's actually fully distributed right now. So we distributed our coins differently: we sent them out through a free faucet, and the reason we did that is because we kind of wanted to move away from, you know, the rich-get-richer mentality, where if you have a lot of money to spend on mining hardware, then you're just going to get more money out of the network, which eliminates a lot of people. They just don't have the capital to run a mining rig, or, I don't know, they don't have the expertise to do it. So we distributed it over a free faucet for two years, and what people would do was solve Google captchas. They would just solve Google captchas, and it would tally how many you got every hour, and every hour, for the two years that we ran this, it would send a distribution out to the top people that got clicks. We used the Google captcha so it wasn't bottable; we made sure we got a human on the other end. The captcha Mechanical Turk. Yeah, yeah, exactly. You know, some people did. I imagine that's a valid use case too; you know, you want to work and get paid in something else. It's as legitimate as anything else. It was funny when we were running it; it was interesting to keep the site up because it had a lot of traffic. We got a ton of traffic out of Southeast Asia, South America, basically any place where people aren't making a lot of money per hour, and this was a way that people made money. People, like, quit their jobs and were feeding their families on this, and they're like, oh, thank you, I made fifteen dollars this week off of clicking Google captchas; this is going to set me up for another two weeks. And, like, well, don't get carpal tunnel. That's crazy. The total supply, like, that distribution: is that set forever, or is there inflation? Yeah, it's set right now. So we turned that off in October 2017. Internally there was a number that was, like, the total supply, but we just decided we wanted to stop earlier than that and cut off the inflation. One third is in circulation; the remaining two thirds we sent to a burn address, and then we just eliminated the inflation. So it's a fixed supply right now. I mean, technically it's actually slightly deflationary, just because people lose keys and whatnot. But yeah, it's fully out there. So we're never going to have, like, a governance question of, you know, what's the inflation rate now, what's an appropriate fee structure to have; just eliminating all these points of contention is our primary focus. Cool, cool. So, okay, back to the technicals. I'm still kind of curious about your memory-hard proof-of-work algorithm.
What are you doing there? First off, can you describe to the audience what memory-hard proof of work is, what the drawbacks are to some of the existing algorithms, and what you guys are doing differently? Yeah, yeah. So, the memory-hard proof of work: with a standard proof of work, it's just a difficult problem that needs to be solved, and generally what we want with all these proofs is to show that time has elapsed. It's less concerning in most areas; you know, in some areas it is still interesting to have other properties, like an amount of hardware was dedicated to this, or an amount of energy was expended during this process, but usually what people want is that time has elapsed. So that's generally what we want with this too. But what happens with the existing proofs of work is that you can parallelize them very, very easily. It doesn't take a lot of hardware gates to do a hash function, and you can stamp out millions and millions of gates onto chips in order to solve this problem in parallel.

So memory-hard is a different approach to this, where it's trying to prove not only that time has elapsed but also that a certain number of gates, like logic gates, transistors, were dedicated to solving this problem. So you need to design the question that it's solving in that way. And there's a couple reasons to use memory gates. The top ones are that memory is extremely commoditized; it's one of the densest and least-cost types of chips that are made. They're pretty much universal across, you know, different processors, whatever. So they're very, very commoditized, and already they're highly optimized by Samsung and all the other people that make memory. So there's a reason to do that. And then also, just from a physics standpoint, when you have logic gates, you know, go from zero to one, there's an enormous amount of power consumption in that transition, going from zero to one or one to zero; simply changing those state values is where most of your power goes inside of your computer. So memory, if it's just sitting there at a specific value, is consuming less power per transistor. So you can hook all these things together: time has elapsed, a certain number of transistors were here, and the power consumption is, you know, lower than using pure logic gates. So that's the reason we're doing all of the memory-hard stuff. The issue is that it's pretty hard to actually make one of these things; computer science guys are pretty smart at breaking these things down and, you know, kind of getting an edge on it. So that's the research I started at the beginning of this year. I looked at the other ones that are out there, and there are some pretty good memory-hard proofs of work that have been made, but the main issue that we ran into is that the proof size is too big. Our transactions, because we send them out individually and process them individually, are around two hundred bytes apiece, and a lot of these proofs were, like, five hundred bytes, or a K, or two K. Some of your advertising basically says that each transaction fits within the size of a UDP packet; that's something you seem to be constrained by. Yeah, I mean, we're not constrained by that anymore. We did use UDP originally, and then we changed to TCP for the flow control. Okay. We'll probably actually move to QUIC in the future, for the multi-channel-per-connection-endpoint property, which is just interesting and nice to have. But yeah, we're not technically constrained by that, but we do want to keep them small, because that decreases the latency. It's kind of the same tradeoff as some of the other coins with the whole big-blocks/small-blocks thing: you know, big blocks have throughput, but it takes longer to send the block, so it increases the latency. So we wanted to keep the latency as low as possible. So, yeah, these proofs would just be too big; they would at least double the size of our transactions, and in other cases multiply it several times over. So I did research on this and, yeah, came up with another way to do it. I've been working on this this year and am putting up a technical article on it, but it is also a random-searching algorithm, where it's trying to find an equation with a specific property, which is extremely hard to do if you're just using a CPU to kind of plug and chug and make attempts on it. So, kind of like Hashcash, where you'd do a random nonce: that would be very, very slow. But if you store parts of this in memory, it speeds it up significantly. So there's a very big advantage to using memory in this.
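As a toy flavor of that memory/time tradeoff (this is not NanoPoW itself, whose details weren't public at recording time), consider searching for two nonces whose short hashes collide: keeping the values you've already seen in a table makes the search fast, using less table space forces re-hashing and slows it down, and the finished proof is just two small nonces that anyone can verify with two hashes.

```python
# Toy memory-assisted search: find two nonces whose truncated hashes collide.
from hashlib import blake2b

def h(seed: bytes, nonce: int, bits: int) -> int:
    digest = blake2b(seed + nonce.to_bytes(8, "little"), digest_size=8).digest()
    return int.from_bytes(digest, "little") % (1 << bits)

def find_collision(seed: bytes, bits: int = 20) -> tuple[int, int]:
    seen: dict[int, int] = {}      # value -> nonce; this table is the "memory"
    nonce = 0
    while True:
        v = h(seed, nonce, bits)
        if v in seen:
            return seen[v], nonce  # the whole proof is just these two nonces
        seen[v] = nonce
        nonce += 1

def verify(seed: bytes, a: int, b: int, bits: int = 20) -> bool:
    # validation is only two hashes: cheap, unlike Argon2-style functions
    return a != b and h(seed, a, bits) == h(seed, b, bits)

a, b = find_collision(b"transaction-hash-stand-in")
print(verify(b"transaction-hash-stand-in", a, b))   # True
```

Note how the proof stays tiny (two 8-byte nonces) even though generating it took a large table, which matches the two properties discussed here: small proof size and very cheap validation.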
I see. And do you do that once per transaction, or is this a thing you have to do repeated times to get the memory proof handled? Like, what is the model for the proof? Yeah, it's just done once per transaction. So when you create a transaction, you know, you have your transaction hash, and based on that, you generate this proof of work, and you present the proof, along with the block, to the validators on the network, and then they either accept it or drop it. Yeah, another property, just to go back, that we really, really needed was very, very efficient validation.

There are very good, like, work-proving and memory-proving algorithms out there, things like Argon2, the key derivation functions, but they're not fast to validate. They're almost as slow to validate as they are to generate, and it's just not feasible for two hundred bytes to be validated by something that takes, you know, fifty or a hundred milliseconds to validate. That's ten to twenty validations per second, and that can easily be overwhelmed with network traffic. Yeah, Argon2 is good for things like passwords, storing and salting your password and blah blah blah, and it's great for the side-channel protection, yada yada yada. But yeah, it's slow, intentionally so, and that's just to prevent that kind of stuff. It's also memory-hard too, which is, you know, interesting as well. But I think what I'm kind of curious about is, like you said, each account has its own sort of chain, is that correct? Do I understand that properly? And your DAG is kind of, like, validating all these chains, and that's where the validators do that kind of work, I would assume. I'm kind of having a hard time visualizing the model at the moment, but maybe you can explain that in a second. But first I have this question: if you have this chain, your chain and only your chain, and you have to do this proof of work in order to send a transaction across the network, and only you can add blocks to your chain, correct, then can you precalculate the proof of work and just have it there, and then just immediately send whenever you want? Is that how you get your speed? Oh yeah. So before, when I said that you generate the proof of work based on your transaction hash: we actually have one little trick inside of it, where you generate the proof of work based on your previous transaction hash. So you can precalculate one proof of work in advance and latency-hide that generation time, and usually people don't do a lot of transactions; like, I'm not doing multiple transactions per minute or second, I do maybe one per day or week or something. So after I send my transaction, my wallet will start precalculating the next one, for the next transaction that I'm doing. Interesting. So that's only for spam prevention, correct? Right, yeah, it doesn't affect the consensus; the consensus is purely a vote-based system.
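A sketch of that precalculation trick: because the work is ground over the previous block's hash rather than the new block's contents, a wallet can start computing the work for the next transaction the moment the current one is sent, hiding the generation latency. The threshold below is an illustrative stand-in, loosely shaped like Nano's Hashcash-style work but not its real parameters.

```python
# Hashcash-style work over the PREVIOUS block hash, so it can be precomputed.
from hashlib import blake2b
from itertools import count

THRESHOLD = 0xFFFF_0000_0000_0000   # illustrative; roughly 1 in 65,536 nonces pass

def work_value(previous_hash: bytes, nonce: int) -> int:
    digest = blake2b(nonce.to_bytes(8, "little") + previous_hash,
                     digest_size=8).digest()
    return int.from_bytes(digest, "little")

def generate_work(previous_hash: bytes) -> int:
    # grind nonces until the hash clears the threshold
    for nonce in count():
        if work_value(previous_hash, nonce) >= THRESHOLD:
            return nonce

def valid_work(previous_hash: bytes, nonce: int) -> bool:
    # a node validates with a single hash, so checking is very cheap
    return work_value(previous_hash, nonce) >= THRESHOLD

# After publishing block N, the wallet already knows block N's hash, so it
# can run generate_work() for block N+1 long before the user needs it.
```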
And then my next question off of that is going to be: when designing a specific type of proof of work, you're usually doing that to map it onto a specific type of hardware that's optimized for that particular algorithm. Does that mean that, in order to use Nano in the future, it's going to be most efficient to use a specific type of hardware to do it? Well, that's kind of why we use the commodity hardware that's already out there. So, yeah, DDR4 memory is the hardware that you use, and you just need a computer that has enough of that to do the generation. Right now we have it tuned to about four to eight gigabytes of memory consumption, and if you use less than that, it slows down pretty heavily. I don't know where it's going to land; part of our, like, release thing in the next week or so is going to be figuring out what a good amount is. That fits within, like, the newer Raspberry Pis. So, like, yeah, cheap devices are going to be able to do this reasonably, but, I mean, they'll be dedicated to doing this. But still, like, thirty bucks or something like that. Yeah, exactly; the hardware investment is really, really cheap. And we designed this for Nano, but I think other people will be interested in this also, so it's going to be an open-source library that, you know, I hope other people use, because I think our goals are the same in a lot of ways. So, wait a minute. It takes, like, four to eight gigabytes to produce the proof, correct? And this happens, like, fast, is that what you're saying? It happens so fast you basically don't realize. What is the time to actually produce your proof of work? Like, what is your target there? Well, right now, with the eight gigs of memory, our target is, with a GPU, like, a couple seconds, so, like, one, two, three seconds or so. But I assume other people want, like, longer times, and longer times mean either you're going to apply more memory to it or wait a lot longer. So here's the thing: it's statistical, so you don't absolutely have to have the most optimal amount of memory applied to this; that's just going to be the fastest way to do it. So if you halve the memory, you're going to slightly more than double the amount of time it's going to take to generate it. Yeah, I'm just thinking, like, what would the side effects of this be? I mean, like, I guess, for instance, if you're building this for a gaming system and you're trying to, you know, do a micro-transaction mid-game, it could cause a stutter in the system that you're on.

But, like, that's minor compared to the fact that you can get this through pretty quickly. So, I mean, it's interesting to kind of think how that would impact user experience. The question I kind of have is: do you see advancements in hardware being a big impact on what you're doing? For instance, like memristors, where you have compute and memory, like, bound together, almost. Do you feel like those kinds of things might impact your proof-of-work algorithm? I don't think so. So the reason that we looked at this was... oh, wait, I had one thing that this last sentence reminded me of, just as a side note: our proof of work is outside of the, like, signed transaction payload that you send. So the person that generates the transaction doesn't need to be the one that applies the work to the transaction, and we use that in a lot of ways for low-power units to generate transactions. So your Raspberry Pi can sign it and send it off to a high-powered machine, like one you have in your house or your company, that doesn't have the signing key, and it can still apply the proof of work and then send it up to the network. That's really cool; that's useful, definitely, for sure. Yeah, yeah, we did that tethering; we designed it to be outside the transaction payload. Alexa, send my transaction. Yeah, exactly. Because, like, all these low-power devices, they're not going to put anything in there that consumes power. They're designed to sip power, not to consume it; they're definitely not going to pay extra money to put something in that consumes more. So... sorry, what was your question? What was my question? I got distracted by how cool that was. Yeah. So I guess the only thing I really want answered, since, you know, we've got, like, twenty minutes left, is: how are you doing the voting-based consensus? I mean, when a transaction comes out, let's just say that one of the representatives observes the transaction. It looks at the block; it appears to fit, it's signed correctly; and what it does is it sends out a vote for that block's hash, and it puts it into memory, kind of like a mempool, and it sits there and waits for the confirmation quorum to come in. So it will announce its vote out to everyone else, and it'll watch votes coming in from the other representatives, and it'll tally them up, and if it sees, you know, somebody else winning, as in a different block winning, it will remove the transaction that it had picked, switch to the winner, and then reissue its vote for the new winner. It's kind of bandwagon voting, and then it collapses to one of these solutions.
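A toy model of that bandwagon behavior: a representative votes for the first valid block it sees for a slot, tallies incoming weighted votes, and if a conflicting block pulls ahead it re-votes for the leader, so the election collapses onto a single winner. The class, weights, and block names are invented for illustration.

```python
# Toy bandwagon-voting election among representatives.
class Election:
    def __init__(self, my_rep: str, weights: dict[str, float]) -> None:
        self.weights = weights           # representative -> voting weight
        self.my_rep = my_rep
        self.votes: dict[str, str] = {}  # representative -> block hash voted for

    def observe_vote(self, rep: str, block_hash: str) -> None:
        self.votes[rep] = block_hash     # a rep's newer vote replaces its older one

    def tally(self) -> dict[str, float]:
        totals: dict[str, float] = {}
        for rep, block_hash in self.votes.items():
            totals[block_hash] = totals.get(block_hash, 0.0) + self.weights[rep]
        return totals

    def maybe_switch(self) -> str:
        # if a different block is winning, jump on the bandwagon and re-vote
        totals = self.tally()
        leader = max(totals, key=totals.get)
        if self.votes.get(self.my_rep) != leader:
            self.votes[self.my_rep] = leader
        return leader

e = Election(my_rep="rep1", weights={"rep1": 10.0, "rep2": 30.0, "rep3": 25.0})
e.observe_vote("rep1", "block-A")      # our first pick
e.observe_vote("rep2", "block-B")
e.observe_vote("rep3", "block-B")
print(e.maybe_switch())                # 'block-B': rep1 switches to the leader
```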
Is this quorum responsible for the entire throughput of the network, or is it somewhat sharded? No, there's no sharding. So the computational effort of the quorum grows with the amount of transactions that are happening across the network? Right. Yeah, I mean, people have talked about, like, sharding a cryptocurrency, but it's not a possible thing to do. It goes back to the CAP problem: in order for a currency to work, you need to have correct accounting, so you need the C and the A properties to be there, and the one you have to omit is partition tolerance. You cannot validate transactions completely isolated from the network. If you want to have a single unit of account... like, yeah, if you want to hold a balance across the network properly, then you need to have kind of global knowledge there. Yeah, exactly. Yes, yep: your state change needs to be in sync with everyone else. But yeah, partitioning, even in, you know, proof-of-work coins and everything: you can't validate the transactions in complete isolation. You need to have a little bit of information from, you know, your friends or something else in the network, in some way, shape, or form. Now, the proof-of-work coins did it really, really cleverly and efficiently: you only need to know, like, a recently confirmed block and the approximate current hash rate, and, you know, you can validate with that. The amount of bandwidth used for our validation process is higher than, like, proof-of-work coins, but the latency is much, much lower. That's kind of the tradeoff that we have.

It seems like, if you hold the latency constant, you're going to have increasing computational and network demand on the validators to be able to do their job, if you want to hold the rates there at the same time. Right. Yeah, so that's the quality-of-service mechanism. Actually, the dynamic proof of work that we were talking about earlier is our quality-of-service mechanism. The overall limiter to our network's TPS isn't, like, a fixed number; it's the bandwidth dedicated to the network given the conditions. So what the validators do is have, like, a bandwidth cap, or just kind of a limit at which they process transactions, so that will kind of implicitly set the TPS that the network has, and the validators don't need to do this in sync; they can kind of just move it forward if they want to. But the whole validation process is only as fast as, you know, some of the slower actors. So, yeah, once you have that rate limit set, what you want to do is: given a set of transactions that validators are presented with that is larger than the number they can fit in this bandwidth constraint, who do you pick to go to the front of the line? Like, what is fair to do? So what we do is actually just look at this proof of work and put the most difficult solutions at the front of the line. And actually, this is another benefit of our proof of work being outside the signed transaction payload: if you don't like the priority that your transaction is getting in the network, you can just reapply a higher-difficulty proof of work and put yourself at the front of the line in order to get it validated. It's like paying a larger fee. You know, it literally is, in a way; it's a kind of computational effort for scarce physical resources. Yes, yep.
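A sketch of that difficulty-based prioritization: when more blocks arrive than the bandwidth cap allows per round, validators drain the backlog hardest-work-first, so republishing a block with higher-difficulty work behaves like paying a bigger fee. The queue structure, cap, and difficulty numbers are illustrative.

```python
# Prioritize pending blocks by attached proof-of-work difficulty.
import heapq

class WorkQueue:
    def __init__(self, per_round_cap: int) -> None:
        self.cap = per_round_cap
        self.heap: list[tuple[float, str]] = []   # (-difficulty, block hash)

    def submit(self, block_hash: str, difficulty: float) -> None:
        heapq.heappush(self.heap, (-difficulty, block_hash))

    def next_round(self) -> list[str]:
        # process only the `cap` most difficult blocks this round
        return [heapq.heappop(self.heap)[1]
                for _ in range(min(self.cap, len(self.heap)))]

q = WorkQueue(per_round_cap=2)
q.submit("tx-a", 1.0)
q.submit("tx-b", 4.0)    # e.g. regenerated at 4x difficulty to jump the line
q.submit("tx-c", 2.0)
print(q.next_round())    # ['tx-b', 'tx-c']; tx-a waits for a later round
```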
So, back to more of the consensus side of things. You have this voting-based system, and you say it collapses so that everybody agrees, but are you agreeing on the state of the DAG? Does everybody have the same view on that, or does every node keep its own sort of, like, idea of what is true, and then you can kind of query the network? Like, what does a node actually store about the state of things, and what does the voting do to the node's state? Yeah, so when we describe the DAG and say there's one chain per account, people sometimes think that means my account is on my own node and no one else knows it. That's not the case; that goes back to the partitioning thing. Every validating node, and everyone that, you know, runs just a full node, has every single account chain that's in the ledger; they're all tracked by everyone. So the voting process moves these individual transactions from an unconfirmed state to a confirmed state. So, your question about whether everyone has a uniform view of the network at a given point in time: not entirely. They don't have a uniform view of unconfirmed transactions, but they do have a uniform view of confirmed transactions. So this isn't like some other ones where nodes never, like, come to consensus; we do come to an agreement on state continuously, but at the fringes, you know, the last couple seconds of who published transactions, there's a little bit of disagreement as to which ones have actually been confirmed, until the votes come in and they're counted, and then we know what the actual state change agreed by the network was. Got you. So all the chains are being synced, and then the DAG is just the method for deciding, you know, what's going on in your voting for individual transactions. Yeah. So what are the storage problems with that? And how do you prevent people from just, like... I know that the proof of work that you have is kind of a Sybil cost; it's actually, you know, denial-of-service prevention, it seems like. How do you prevent people from just creating a ton of accounts and making it really difficult for people to sync your network? Well, that is less a property of the proof-of-work mechanism and more a property of the transactions per second that get put through it, because even on, you know, the Litecoin or Bitcoin networks, people can send out dust transactions and they just sit in the UTXO set forever, growing indefinitely. So, I mean, spammability... the faster you accept transactions, the faster you accept state changes, the faster that set can grow.

So... sorry, you want to say something? Yeah. So the fact that you could precompute doesn't, like, concern you at all? So, like, one at a time, sure, but I could have, like, hundreds of thousands of accounts precomputed, and I did it at a high-value proof of work, and all of a sudden I just dump them all into the network. Do you have any mechanisms to prevent that kind of attack? Right now we don't have a mechanism on the precomputed thing. There's a couple of, you know, ideas that we have on it, but there are also some important things to look at for that type of thing to happen. So, when there's one attacker, like, trying to saturate your network intentionally: if we divide up the number of transactions that get confirmed into segments, we can only confirm a certain number, let's say a hundred, per second in the network. So what an attacker has to do is fill all of those one hundred slots with the highest proof-of-work difficulty at that time, and after that one second elapses and they get confirmed, they need to do the exact same thing again. So they need to be filling every single slot in perpetuity in order to continue to execute this thing. And anybody else who's only trying to do just one transaction, their wallet says, oh, it looks like the average difficulty of things that are getting confirmed is four times higher than what I have currently, so I'll just spend four times as much time generating a more difficult solution, and then publish that one out, and then they'll get put at the front of the line. So it is a quality of service; it's an anti-denial-of-service mechanism where people can kind of just put themselves at the front. We do have some other ideas on how to make it so you can't deeply precompute things; you know, if I have a section of, let's say, my account chains that's offline, I just built them and I haven't published them, and then, yeah, like you said, dump them out there. That takes time to generate, and, yeah, there have been proposals to, like, rotate little extra nonces that have to get mixed into your proof-of-work generation, so it can't be more than, like, an hour old, or a day old, or something. We haven't implemented that yet, but, yeah, those ideas are out there. So, with the fees thing, fees and stuff, generally the idea is, the economic incentive is that if somebody decides to do that, then the whole network is still benefiting. Everybody validating gets some sort of, like... hey, either the fees get burned or they get redistributed, but the whole network, like, makes money off of it, and so somebody doing this is not actually hurting the network. The nodes are like, fine, that's what we want: we want you to pay more, we want you to do, you know, more difficult things. But in this case, because there's no such incentive model baked in, it feels like you're actually making it, you know, so that the validators might not even want to do their job. Well, yeah, I mean, to a certain extent the network wants traffic to come through, but also, if it's just useless traffic coming through, you're kind of killing public usage of the thing. So, like, there's a couple guys handing money in a circle to each other, you know, the validators and the person trying to spam the network, but everyone else is kind of left out.
And if I'm a business that wants to accept the currency and I can't accept it because of this thing, I mean, it's equally as useless as if no one was getting paid, and you don't want to devalue your own coin. I mean, what's the point of that? So I understand that argument; that makes sense. So what kind of throughput are you seeing? Right now we can sustain, I think, twenty to thirty with version nineteen. We made some very significant improvements in version twenty, especially with replacing our database back end with RocksDB, so that's gotten us a lot more on beta. On our beta network, we try to make it look as close to the public network as possible, and, I mean, we can push a couple hundred through that at this time. I don't know how that quite translates to the live network, but we haven't done any sort of, like, public network test so far, because we're new, we're changing things like dynamic proof of work, which was put in in version nineteen, and then this new proof-of-work algorithm. It's kind of a scientific thing, where there's no point in testing something that's going to be instantaneously invalidated. We already knew, by just looking at the profiling stats during high load, where our bottleneck was, and right now the bottleneck is either bandwidth or IO, and the IO problem is going to be a much smaller problem in the near future.

I was curious: so in the whitepaper earlier, you have a send transaction and a receive transaction that are being made from each account to each account. What's being validated, only receive transactions? Well, I guess, like, a receive is the culmination of a total transaction; you can have multiple sends and then one receive, I'd imagine. Do things only get validated once a receive has been broadcast, or is it all of them? Yeah, all of them are validated. So everything that's validated has a dependency chain that's in there. Let's say we go back to this precomputed thing: I precompute, like, a soft fork or double spend into this, and I send them both out. We need to roll back all dependent transactions on those things. So, from a risk standpoint: where's the risk in this type of thing? If I'm generating transactions and sending to myself in a forking manner, you know, I'm risking my own accounts. But from a receiver's standpoint, what they don't want is their transaction chain to be rolled back by somebody that sent to them. So they have a reason to wait for a sender to get their transaction confirmed before they receive it and link their chain with that other chain, essentially, at that point, by receiving it. Okay, cool.
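A sketch of that receiver-side rule: a transfer is a send block on the sender's chain plus a matching receive block on the receiver's chain, and a cautious receiver only links the receive once the send is confirmed, so a rollback on the sender's side can't drag the receiver's chain down with it. Field names are invented for the example.

```python
# Send/receive pairing with a wait-for-confirmation rule on the receive side.
from dataclasses import dataclass

@dataclass
class Send:
    source_account: str
    destination: str
    amount: float
    confirmed: bool = False    # flipped once the vote quorum is reached

@dataclass
class Receive:
    account: str
    source_send: Send          # the dependency link between the two chains

def try_receive(account: str, send: Send) -> Receive:
    if send.destination != account:
        raise ValueError("send is not addressed to this account")
    if not send.confirmed:
        # receiving an unconfirmed send risks being rolled back along with it
        raise ValueError("wait for the send to be confirmed first")
    return Receive(account=account, source_send=send)
```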
Where do you see Nano going in the near future? What do you hope, what do you see on the horizon? You have no smart contracts, but how do people interact with this? Is it purely payments, or is there some other logic in the future you're thinking about? Yeah, I mean, the only logic that I would look at in the future would be privacy, if that can be done efficiently and quickly. But yeah, we are focused mainly on being a payments coin. Well, not mainly: we're exclusively focused on being a currency. So we're trying to be a global digital currency. I think that has an enormous use case. There are billions of people in the world that don't have access to bank accounts because of the way the banking system works; they can do this as long as they have internet access. There are a lot of fees being essentially rent-seeked out of people by institutional players, by the central banks; there's inflation. So, I mean, there are tons and tons of advantages to just the very simple concept of having currency out there, and when we focus on this one concept, we can dedicate all of our effort to it. It kind of goes back to the adage that if you follow two rabbits, you're going to lose them both. So we follow one goal, and that's to be the most efficient form of money for people to use. And, yeah, smart contracts have some other issues, just, like, difficulties with them. It's a much more complex problem, and if we divert our time to trying to do that, it's just an opportunity cost that we can't spend on the currency part, which has a very large use case. That's what we're doing. Yeah, that makes sense. So, yeah, we're trying to find beachhead industries. I think a lot of people in cryptocurrency are trying to do that, but I think that we can actually revisit a lot of these industries that weren't able to make use of the first-generation cryptocurrencies, because they technically didn't solve the problem, or they were too expensive, or slow, or something. But now that, you know, those problems have kind of been alleviated, we can go back and revisit those. The reason we've been kind of waiting on that is because, as you guys probably assumed, we are not a fork of anything. So we have a unique codebase, and that needs to be run through standard software-engineering rigor: it needs to be examined, it needs to have a lot of time associated with it to make sure that there aren't issues. So we've been playing catch-up on that. But, you know, we're in a pretty good state now, and we're going to be pushing this out to as many of these beachhead industries as we can. What questions should we have asked you that we haven't? Normally we ask: what is the most exciting thing that you're looking forward to in the space? What have you got cooking? I mean, I think the biggest thing that I like is going to be what it can do to help people in a lot of these countries that have their own currency supply and it's just not managed very well. It could greatly impact a lot of people's lives. They have a really hard time; they can't connect to the rest of the world, like, economically, and it's just a bunch of hassle for no reason. It's because people wanted their own currency.

It's like, well, countries issue currency, so we're going to issue a currency ourselves. But currency is not a national thing; it's just currency. Currency is best out there when it has maximum liquidity, when the most people can earn it, the most people can spend it, and they can spend it anywhere. No one's interest is served in a currency when you have to convert it to another type of currency, and the only reason they do that is so they can all enact their own monetary policy, which countries will never agree on. So I think getting this out there, getting this into people's hands and used... purely, like, not from a self-serving standpoint; it's purely from, like, I think this is going to legitimately solve a lot of problems for a lot of people in the world. That is really exciting for me, and actually for a lot of our team; we all share kind of the same goal on that, which is what we're working for. Where can people go to find out more and get in touch with you? Yeah, well, the easiest one is nano.org; that's our website, and we're doing a little bit of a refresh on that in the next month, but it's there. And then our Twitter is @nano. That's where we put out all the updates, and that's where we'll put out information on what we're developing. And, like, great handle, by the way. Yeah, exactly. I mean, it took us a little bit to do that. We had @nanocurrency before; I think after about a month of discussion we got @nano, so that was fun. Congratulations on that one; that's a hell of a handle to have. All right, well, thanks for coming on the show. I really enjoyed that; I liked, like, kind of the differentiating architecture, how you solve the problem. I enjoyed seeing how people try to solve this problem differently and seeing what works. Yeah, absolutely. Yeah, thanks; talking with you guys was fun. Thanks.
