Hashing It Out

Episode 10 · 4 years ago

Hashing It Out #10: TrueBit - Harley Swick

ABOUT THIS EPISODE

TrueBit is one of the more impressive decentralized projects, and we're really excited to have had the chance to interview Harley Swick, a core developer on the project. We talk to him about TrueBit: how it works, its significance, and how it differentiates itself from other off-chain computation projects. We go over some of the many, many applications of decentralized off-chain computation and how it can improve world resource utilization. Amazing and really exciting stuff!

https://truebit.io/

https://medium.com/truebit

https://twitter.com/truebitprotocol

https://twitter.com/hdswick

Welcome to Hashing It Out, a podcast where we talk to the tech innovators behind blockchain infrastructure and decentralized networks. We dive into the weeds to get at why and how people build this technology and the problems they face along the way. Come listen and learn from the best in the business so you can join their ranks. All right, next episode, episode ten, of Hashing It Out. As always, Corey and Colin here. Say what's up, Colin. What's up? And today we have Harley Swick from TrueBit coming on the show to talk about how TrueBit is going to help bring computation off the blockchain and scale what we're trying to do here. Harley, you want to give us a quick introduction as to who you are, what your role is with TrueBit, what TrueBit is, and how you got introduced to the space as a whole? Yeah, so I'm Harley. I'm a core developer on the TrueBit project. I'm normally based out of Dallas, Texas, but right now I'm in Berlin, which is pretty fun. I originally got started as an open source contributor on the TrueBit project and then they hired me on to do full-time development. I'm a developer by background; I heard about Ethereum and then got really sucked into the rabbit hole. I also really like Dogecoin, and that's an important point, because through Dogecoin I heard about the Doge-Ethereum bridge, which is what led me to TrueBit, and so that's kind of how I started working on the TrueBit project. Wait, wait, wait. What is that? I actually hadn't heard of that. And why does that relate to TrueBit? I don't think I followed that thread there. Yeah, so it actually does relate. TrueBit was actually originally designed to solve that problem. A couple of years back there was a big bounty to bridge the Dogecoin blockchain to Ethereum, so you could move dogecoins over to Ethereum and the doge would be represented as, like, ERC-20 tokens.
So it's not really an atomic-swap-type bridge; it's more of a two-way peg, and so it was technically hard, and the bounty sat around for a while. Then different people — Vitalik actually proposed a solution, but it was pretty expensive. And then one of the TrueBit paper co-authors proposed a TrueBit-like solution to this problem, where you only do one piece of the computation on chain and the rest of it off chain, and that was the first idea of TrueBit. The idea is that you're locking dogecoin tokens in this multisig, and you have this proof of the transaction, this serialized data, and then there's a scrypt hash along with that, and you have to prove to this doge relay smart contract that you actually locked the tokens. One part of that proof is checking scrypt, and obviously scrypt is how the Dogecoin proof of work works — it's never going to fit in an Ethereum smart contract — and so it offloads the verification to TrueBit. That was actually the first use case. Yeah, okay. I think a good way to pose this, or at least to start this conversation, is to first define what the problem is that TrueBit is fixing, and that's basically: doing computation on the Ethereum Virtual Machine is expensive, and in order to scale into real-world applications, we need to find a way to do off-blockchain computation in a trustless manner. Now, how is TrueBit enabling this? So yeah, the cost is prohibitive, and then you also have the gas limit. So even if you had all the money in the world, you're actually limited by the blockchain itself in how much computation you can do in smart contracts. So that's what TrueBit is used for.
So the way it works is that someone submits what's called a task, and that includes the code — or, like with the Doge bridge, you already knew what the task was, so there were some optimizations: the task was going to be scrypt. So someone sort of...

...asks someone to solve this task. We have what's called the solver, and they're the ones who run the computation. They run the computation off chain and they submit the solution on chain, and then there's this number of blocks you have to wait before that solution is considered final. However, there are also other people that we call verifiers kind of watching, and they will see the task, they'll see your solution, they'll go and check it, and if they disagree, they'll challenge you. And then it gets into the secret sauce of TrueBit, which is this verification game. At this point, nothing's been computed on chain. In the verification game, the verifier queries for intermediate steps from the solver, and the way it works is a binary search. It uses binary search — there's a query-response mechanism — and it narrows it down to one step in the computation, and that one step is loaded on chain and run on chain. Because we all trust the chain's computation — that's sort of the consensus mechanism — it takes that one step and sees whether the final output was the actual answer, and if it isn't, then the solver loses. But if it is, the verifier loses, so you want to make sure you don't challenge unless you're really sure. I can go into more detail if you want, but that's the gist. Can I try and see what I got from that? Yeah. Basically what you're saying is that there's off-chain computation which results in some sort of proof or result set, and there's a group of solvers who go out and solve those off-chain computations and then submit their solutions to the blockchain.
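To make the verification game concrete, here is a minimal off-chain sketch under stated assumptions — all names are hypothetical and this is not the actual TrueBit protocol code. The verifier binary-searches the solver's claimed intermediate state hashes to find the single step where they first diverge; only that one step would then need to be re-executed on chain.

```python
import hashlib

def state_hash(state: int) -> str:
    # Commitment to an intermediate machine state (toy stand-in).
    return hashlib.sha256(str(state).encode()).hexdigest()

def run_step(state: int) -> int:
    # One deterministic step of the computation (toy example: +1).
    return state + 1

def honest_trace(start: int, steps: int) -> list:
    # Hashes of every intermediate state an honest party would produce.
    trace, state = [state_hash(start)], start
    for _ in range(steps):
        state = run_step(state)
        trace.append(state_hash(state))
    return trace

def verification_game(solver_trace, verifier_trace):
    """Binary search for the first step where the two traces diverge.

    Returns the index of the single disputed step -- the only step that
    would have to be re-executed on chain to settle the dispute.
    """
    lo, hi = 0, len(solver_trace) - 1  # agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if solver_trace[mid] == verifier_trace[mid]:
            lo = mid   # agreement so far; divergence is later
        else:
            hi = mid   # divergence already happened; look earlier
    return hi          # first disputed step

# A solver cheats starting at step 6: states differ from there onward.
honest = honest_trace(0, 10)
cheat = honest[:6] + [state_hash(999 + i) for i in range(5)]
disputed = verification_game(cheat, honest)  # -> 6
```

The point of the logarithmic search is that an 11-step (or billion-step) computation never runs on chain in full; the chain only re-executes the one step at the returned index and rules for whichever party's hash matches.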
Those solutions are then validated by another group of people who are basically validators, who are looking at these solutions and then mining them for — I guess mining is not the right word — validating them to make sure that they are correct. In the event that they're not correct, there's a bounty posted of some type. Is that correct? Yeah, it's not quite a bounty. So incentivizing verifiers is sort of a whole other story, and I haven't gotten to that yet. But the way you prohibit bad behavior is that before anyone can participate in the network, they have to submit a deposit. And in the case that the verifier challenges and proves that the solver didn't compute correctly, the solver loses their deposit. But if a verifier challenges a solution and they go through the whole verification game and it turns out the solver was honest, the verifier will actually lose their deposit — so as to combat frivolous challenges. This brings up an issue for me. Maybe not an issue, but it's a problem you're going to have to solve, or it limits the scope of how far this can go: this seems like a lot of potentially redundant work to make sure people are doing things trustlessly. Someone does the computation, they submit the result; other people redo the computation to see if they got the right result; if not, they verify it to make sure they don't lose their own deposit, and this continues. So I'd imagine, if this were to scale completely, there's not going to be enough verifiers to do all of the computation that's being done on the network.
So you just hope that the incentives are aligned properly, or you hope that if you are cheating the system, you're not in the sample size that gets verified. Or I guess you just have subsets of people who care about that result who will spend the time to redo the computation — but if that's the case, why didn't they just do the computation themselves in the first place? Well, there is one other way: you can make it so that you can't be a solver until you've verified to some degree, which means you'd have to weight the probability that you can verify something. But I'm not really sure — how does that balancing act work? Yeah, so maybe there's some confusion on where the verification comes in. Verifiers are never forced to challenge anything; they only challenge if they want to. With the Doge bridge we assumed we had altruistic verifiers, and the Doge bridge probably wouldn't have that many transactions going through, so you can kind of assume you can get away with that. But yeah, you're right: you need enough verifiers to be checking all the solvers, because if there are too many solvers...

...and not enough verifiers, you have a good chance of people submitting fraudulent solutions. However, you can toggle this: the task givers — the people submitting these tasks — can set, as a parameter, the number of blocks they want to wait before a solution is finalized. So if they know there's a heavy load on the network and they want to be really certain about the security, they can always extend that number. Another solution is that you can have pools of people: some people only looking to solve low-compute tasks, filtered by how big your deposit is, and then pools for the bigger tasks. So there are a number of ways of doing this. Then, going into the incentivization: there is a probabilistic way you can make sure you get enough incentive for verifiers, and this is in the TrueBit white paper. We call it the incentive layer. It uses this forced-error mechanism, because most of the time people won't be cheating, so verifiers don't have a huge incentive to go and check everyone. What we do is probabilistically enforce a thing called a forced error: the solver is required to submit an incorrect solution. They don't get slashed, but the task gets done again. This is to make sure that the verifiers are actually checking. The verifiers will see this forced incorrect solution, but they'll just think it's an ordinary incorrect solution and challenge it, and what they get out of this is what we call a jackpot: if they challenge this forced error, they get a piece of this jackpot that's held in reserve.
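The deposit-and-jackpot bookkeeping just described can be sketched as a toy settlement function — the numbers, names, and payout rule here are illustrative assumptions, not the real incentive-layer parameters from the white paper:

```python
def resolve_challenge(solver, verifier, solver_was_wrong, forced_error,
                      jackpot, jackpot_share=0.1):
    """Settle one challenge. Parties are dicts: {'deposit': ..., 'balance': ...}.

    Returns the remaining jackpot. Losing deposits are simply zeroed here;
    in practice they might be burned or recycled into the jackpot.
    """
    if forced_error:
        # Verifier caught the deliberately wrong solution: jackpot payout.
        reward = jackpot * jackpot_share
        verifier['balance'] += reward
        return jackpot - reward
    if solver_was_wrong:
        solver['deposit'] = 0.0        # solver slashed
    else:
        verifier['deposit'] = 0.0      # frivolous challenge: verifier slashed
    return jackpot

solver = {'deposit': 100.0, 'balance': 0.0}
verifier = {'deposit': 100.0, 'balance': 0.0}
jackpot = resolve_challenge(solver, verifier,
                            solver_was_wrong=False, forced_error=True,
                            jackpot=1000.0)   # verifier hit a forced error
```

The asymmetry is the whole design: an honest verifier never loses a deposit, a careless one always does, and the occasional forced-error payout is what makes routinely checking every solution worthwhile.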
So there's this probabilistic mechanism that makes sure people are verifying as many tasks as they can. I see. So basically, if a verifier verifies something is correct and the solver is correct, neither of them loses any money; there's no bounty, no gain to be had in that. In the event that a verifier finds something which happens to be an intentionally false solution — meaning that the system actually has a built-in falseness to it — and they flag that false solution, it's essentially similar to mining, in that they've discovered something that was randomly selected, and as a result of finding that randomness they hit the jackpot. They get incentivized for finding this kind of random behavior, which is what keeps the verifiers working on verification and incentivizes them to go forward. Is that correct? Yeah, yeah, exactly. And if they don't see any wrongdoing, they won't challenge. That's sort of how it works. Yes, it's basically: if they submit a bad challenge, they get penalized, so they're only going to challenge things that they know are incorrect, which makes them verify all kinds of things to check for incorrectness. That's interesting, that the forced falseness is what truly makes that work. Where does the falseness come from? Because you can't execute that falseness on the blockchain, can you? You can't create a false solution automatically. Doesn't there have to be some solver which decides they're going to inject falseness into their solution? And as a person who is putting computation on this network, how would I differentiate my answer from an intentionally false answer versus a non-intentional one, if that makes sense? Yeah, that's a really great question.
So the way we do it is that the solver doesn't know beforehand if they're doing a forced error or not. Every time they compute, they submit two solutions, and it's kind of a commit-reveal scheme. Whenever they submit the solutions, they don't reveal which one is the incorrect one or the correct one. Then there's a period where people go and challenge, and afterwards the solver reveals which one they said was the correct one. And if it happens to be a forced error and the verifiers did challenge the forced error, then they hit the jackpot prize. Who's paying for this jackpot? So yeah, the jackpot initially has to be funded — there have to be funds raised for the initial jackpot — but there are also taxes: whenever task givers sort of...

...submit tasks, there's a little bit of a tax. And also, we don't actually give bounties to verifiers out of the deposits of people who get slashed; we just move that money into the jackpot. Okay, that's good. That brings up the next question: what data set sizes are we limited to here? What computation limits are there on this kind of system? How are timeouts determined? Say I put a very computationally complex problem out there — how can I verify that it goes to somebody who's going to be able to solve it in time? Let's say I'm a user of TrueBit and I publish a really computationally complex problem onto the system, with a very large data set, and I want to give somebody two days to churn through this information. How do I know that it's going to go to somebody who can accomplish it in that time period? Yeah, so we actually have our own quote-unquote gas and gas limits. The task giver submits a gas limit — the sum of gas they're willing to pay — and that gives you a bound on how big the task is, and then they also submit the timeout. We're still kind of working on this, but the idea would be that all those parameters are public and shared, so solvers view a task and take into consideration whether or not they feel those parameters are fair. In terms of data set size, that gets into a whole separate set of problems. You can't submit large data sets on chain, so then you have to submit them off chain, but then you get into the data availability problem, which is really crazy and not solved. So data sets are sort of a whole separate realm.
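The public task parameters just described — a gas-style bound plus a finalization timeout — and the kind of sanity check a solver might run before picking up a task could look roughly like this (the field names and the pricing rule are hypothetical, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class Task:
    code_hash: str       # commitment to the program to run
    max_gas: int         # bound on how much computation the giver will pay for
    reward: int          # payment offered to the solver
    timeout_blocks: int  # blocks to wait before the solution is final

def worth_solving(task: Task, my_cost_per_gas: float) -> bool:
    # A solver only takes tasks whose reward covers its expected cost.
    return task.reward >= task.max_gas * my_cost_per_gas

t = Task(code_hash="0xabc", max_gas=1_000_000, reward=500, timeout_blocks=100)
```

Because every parameter is public, a task giver worried about heavy network load can simply raise `timeout_blocks` to give verifiers more time, and solvers can decline tasks whose bounds look unfair.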
But with the Doge bridge, you don't need a very large data set; there's still a high compute load because it's running a proof-of-work mechanism. So at the moment we're limited in how big the tasks we can securely do are, because of the data availability problem. Well, you can't have everything at the same time, and solving off-chain computation in an at least probabilistically fair way is a step forward which allows you to solve some types of problems until other solutions come forward for things like data availability. Yeah. Because I see one way I could attack this: let's just say you require an IPFS address to pull down the data set. I could pipe in any amount of data at that address, because there's no sizing necessarily associated with it. I could literally just say: oh, you're looking for this IPFS address? Well, guess what, I'm sharing it, and here's an infinite amount of data — basically sucking up their bandwidth. Which doesn't matter in certain cases, depending on your system, but in other cases, where there's metered bandwidth, or in a country like Australia where bandwidth is limited, that would be kind of problematic. So I guess you'd also have to put bandwidth limits and those kinds of constraints into the TrueBit system. But at the same time, I look at this and go: this is a step in the direction that we need to go — gamifying the act of computation in such a way that it really should be able to grow into an incentive model that could scale.
One of the things we mentioned in previous episodes — and in some conversations where this system was initially described to me — is a comparison to how map makers used to inject fake streets into their maps, so that if somebody were to try and copy their map, they could point to the fake street on the copy and prove that the map was their original creation. Does the TrueBit system know what that fake street is, or do you have to actually go through the act of proving it every time? Um, yeah, so the map-making thing — I guess it goes back to being in the same classification of interactive proofs, because with the map maker you would need some kind of verification game over whose map it is. I'm not sure the map solution maps onto exactly the same problem, but it is still in the general classification of interactive proofs,...

...which I'm super interested in. TrueBit is only one drop in the interactive-proof bucket, and there are a lot of really cool interactive proofs that use verification-game-like constructions. There's a whole body of literature on this that people haven't really tapped into. Everyone's really into zk-SNARKs and non-interactive proofs, but working on TrueBit and reading papers on these interactive proofs — I think they're really simple and yet really powerful, and the map maker thing that you described, I feel like you could describe it as an interactive proof. Yeah. That's interesting, because when I was first told about TrueBit, that was kind of the analogy that was used. It doesn't sound like it exactly applies, but it's kind of interesting in that it does open a whole new way of thinking about how we can approach some of these scaling problems going forward. Like you said, everybody's focusing on zero-knowledge proofs and such. I think there are two different things here: the map analogy works well with the injected falsification, but the interactive-proof part doesn't quite apply there. Can we talk a little bit more about how someone actually verifies that the solution you gave is the correct solution, by looking at the binary search and doing that one final computation? I'm a little fuzzy on that; I'd like a little more clarification. So the reason the map analogy didn't quite apply is that TrueBit is very much centered on computation: you can verify that someone got the right answer by going and downloading the code, running it, and seeing if you get the same solution. So that limits what kind of tasks we can do: they have to be deterministic tasks.
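The rerun-and-compare check described here fits in a few lines — a toy sketch with a made-up task function, not TrueBit's actual client code. The verifier re-executes the same deterministic code on the same input and challenges only if the hash of its own result differs from the hash the solver committed:

```python
import hashlib

def solution_hash(result: bytes) -> str:
    # Commitment to a solution: hash of the raw result bytes.
    return hashlib.sha256(result).hexdigest()

def task(x: int) -> bytes:
    # Any deterministic computation: same input always gives same bytes.
    return str(x * x).encode()

def should_challenge(task_input: int, claimed_hash: str) -> bool:
    # Rerun the computation locally; challenge only if hashes differ.
    return solution_hash(task(task_input)) != claimed_hash

honest = solution_hash(task(12))
```

Determinism is what makes the single hash comparison a sufficient correctness criterion; the moment the task can legitimately produce two different outputs, this check breaks down, which is exactly the floating-point problem discussed next.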
They can't be non-deterministic tasks, obviously. You can literally just go run the computation, take a hash of the solution, and check it against the hash the other person submitted, and then, you know, challenge. So we leverage deterministic computation as our sort of consensus anchor. What about statistical errors in computation — like floating-point error or something like that? There has to be some threshold of accuracy that you're looking for, because, say, if I'm doing scientific calculations where I end up with some number plus or minus some number based on convergence criteria, someone else doing that and taking a hash of the result isn't going to get the same thing. So how do you deal with something like that? What is the criterion for correctness? Is it a hash of the actual result, depending on what kind of computation it is, or is it this plus-or-minus something, or how does that work? Yeah, that's a really good question. So we just use hashes, so everything's deterministic. However, our back end is WebAssembly, and this leads back into that: the WebAssembly spec doesn't specify how computers are supposed to do floating point. They leave it up to the machine for optimization, and that's a problem for us — and for any blockchain system that wants to use WebAssembly — because it's a source of non-determinism. Floating points, like you mentioned, with plus-or-minus differences, are consensus-breaking. So we recently published a TrueBit spec with a couple of solutions to this. One way is to emulate floating points as integers; another way is to canonicalize the floating point. So those are the different solutions.
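A minimal sketch of the "emulate floating points as integers" approach mentioned here: values are stored as scaled integers (fixed point), so every node gets bit-identical results regardless of its floating-point hardware. The scale factor and function names are arbitrary choices for illustration:

```python
SCALE = 10**6  # fixed-point scale: six decimal digits of precision

def to_fixed(x: str) -> int:
    # Parse a decimal string into a scaled integer (no floats anywhere).
    whole, _, frac = x.partition('.')
    frac = (frac + '0' * 6)[:6]          # pad/truncate to six digits
    sign = -1 if whole.startswith('-') else 1
    return sign * (abs(int(whole)) * SCALE + int(frac))

def fx_mul(a: int, b: int) -> int:
    # Multiply two fixed-point numbers; integer math only, so deterministic.
    return a * b // SCALE

a = to_fixed("1.5")
b = to_fixed("2.25")
product = fx_mul(a, b)   # represents 1.5 * 2.25 = 3.375
```

Because only integer arithmetic is involved, hashing `product` gives the same commitment on every machine, at the cost of fixed precision and slower emulated math.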
I mean, the most basic way is just to emulate floating points as integers, and then you get deterministic compute. Yeah, blockchain solutions and floating-point numbers don't exactly go very well together, I think. The other thing is that with floating-point stuff like scientific computing, you can kind of get away with just running it again. So maybe, if TrueBit isn't the system you want, you can run it three or four times and get an average. You could have that running in a more contained environment than just WebAssembly — so you've got kind of a VM wrapper and it sandboxes the computation around WebAssembly. Does that sound reasonable? So we actually use a deterministic WebAssembly interpreter — that's, I guess, sort of a sandbox. We actually modified the WebAssembly spec a little bit for that. What type of problems are you looking to solve, at least...

...initially, that can be handled appropriately? What is the problem set that TrueBit wants to go after, or thinks there's a market for, in the realm of something like scientific computing or floating-point operations? Yeah, so for now we're just using floating-point emulation. I guess if people did want to do scientific compute, they could just accept that, and I think it should work. Our next milestone is to actually integrate with Livepeer, who are doing video transcoding, and we're actually using the floating-point emulation to solve this, because video transcoding involves floating-point numbers and things like that. That's a problem, because you obviously can't check video transcoding in a smart contract. The Doge bridge is another example, and you could use that for other types of bridges as well — we just verify scrypt — so if you wanted to bridge Litecoin or something like that, there's really nothing stopping you. Other things: we're sort of hanging out here in the Ocean Protocol office, and they're really interested in big data and compute and token-curated registries around data sets, and they have some interesting applications. They want to do some type of compute around bonding curves, and they use what's called trapezoidal integration, and they're having issues — the trapezoidal integration formula gets around this floating-point issue, but it's prohibitively expensive to do on chain, so they're looking at using TrueBit. Also, the Plasma paper actually mentions TrueBit: you can use it to verify fraud proofs. So yeah, it's a scaling solution.
And actually, someone we were talking to today wanted to build, like, a Ready Player One game or something like that, and they wanted to be able to check that a user actually unlocked some achievements, using TrueBit. Those are kind of interesting: you extract a subset of their computation and publish it to your system to verify that a user did some specific thing, but this doesn't necessarily need to go on a blockchain. I'm sorry, I think I may have missed the first part. That's okay. If I was making a game and I wanted to make sure that somebody did something, but I didn't want to overload a blockchain with all the material — maybe all I wanted on the blockchain was, say, token transfers of value — I could offload that computation to a system like TrueBit to verify that what happened with that particular user is good, while not overloading a blockchain, and then all the other rendering and such can be local, or who cares? Right, yeah, exactly, exactly. Oh, and that also reminded me of batching — batching really large sets of transactions. The Gnosis team approached us about solving that problem: they need to sum up votes. Their voting was more complex than just adding things up, but it's related to cost. So yeah, there's a handful of different things you could do — I guess that's the main thing with TrueBit. We get a lot of "how are you different from Golem?" or "how are you different from iExec?" Yeah, so the real big difference — and sometimes even our team will forget this — is that Golem and iExec aren't solving the hard problem. It's fairly easy to pay someone to do a computation: on the blockchain, you send them money and you just get the result back.
But what's difficult is being able to trustlessly check whether or not that computation was correct, and that checking aspect is what TrueBit was designed to do. I feel like that was definitely the initial push of TrueBit: starting with interactive verification and seeing what you can do with it, which happened to be off-chain computation, because that was necessary at the time and still is. Whereas Golem, I want to say, was: we just want to somehow find a way to incentivize off-chain computation, and we may worry about verification later. Maybe I got that wrong, but it seems as though that's what's going on. Yeah, yeah. So I think, like with rendering, you can sort of just run it a couple of times, and if you deviate from the average, you — you know, don't quote me too much on that, but...

...with rendering, you can get away with less deterministic solutions. But with TrueBit, we want off-chain computation with blockchain-like properties. The Doge bridge is a good example, because you don't want to probabilistically mint all these new dogecoin tokens and break the Dogecoin system. Is this a system that can be automated from a blockchain? For instance, if you had some type of zero-knowledge SNARK voting system that needed to add up all the votes — which could potentially be a computationally expensive task you don't want to do on a blockchain — could some smart contract system automate the submission of this computation to TrueBit and then react to the answer, or does this all need to be user-driven? Because I could see Golem being good for doing map-reduce-type problems submitted by a user, because they clearly just don't have access to cheap compute resources. But if you can automate the process and have a verification game along with it, I could see that being a completely different use case, well outside of what they're capable of doing. Yeah, so the way you submit tasks is via a smart contract, and we don't really specify who's calling it. As long as the other smart contract has the correct data and is able to access it — obviously a smart contract, to use the vending machine metaphor, doesn't really do anything until you put some coins in, so someone has to poke it to get it to do something — but yeah, I don't see any reason why not. In terms of automation, we don't really distinguish between people and smart contracts using the system. So I have a question here from Orie Steele, since you mentioned this earlier. Orie? So, do you know — we were just talking to him, actually. He's into our project.
Yeah, he's a friend of mine from Austin. He's with a company called Transmute Industries, who are doing some great work right now. He actually asked me to ask you: when do you plan on getting the WebAssembly demo into the TrueBit OS? Yeah, so I actually just started on a WebAssembly client in TrueBit OS. I'm hoping to get it into testing mode in a couple of weeks, so sort of ready to demo. There are obviously a lot of kinks to work out, and our WebAssembly VM isn't a hundred percent complete, but in terms of integrating it into TrueBit OS, I'm hoping to get it done in a couple of weeks, actually. What do you hope that WebAssembly work will enable? Will we be able to have a web browser interface to TrueBit? Well, it's not really a web browser interface. It's more that we only want to build one TrueBit — we don't want to build a whole ton of TrueBits — so we want to target a standard, and WebAssembly is a really good standard for computing. WebAssembly can be used outside of the browser, so it's really more of an instruction set architecture to target — a platform for more generalized compute. And with WebAssembly you can use a toolchain like Emscripten: right now you can compile C, C++ and Rust — non-garbage-collected languages — to WebAssembly, so people will be able to write their tasks in all sorts of languages. And I guess as WebAssembly adds garbage collection, you can maybe even get other programming languages. So that was really the idea behind using WebAssembly. And it's kind of — I'm kind of going into idea mode here.
So a major problem that a lot of, let's say, web publishers have is that people are blocking their ads, and one of the solutions they've come up with is that if the ads are blocked, they mine Monero in the background. Could they do something similar, since you're using WebAssembly: monetize their websites by lending the unused JavaScript cycles to do off-chain compute calculations, and actually make that a method of distributing content, where people browsing content are also lending their computing power to the publisher of that content? Yeah, so I guess you could even potentially solve the tasks in the browser; a lot of people have brought this up. Because we sort of have our own specialized interpreter, you would...

...if you got challenged, you would have to offload to our interpreter anyway, and you wouldn't be doing that in the browser. But it would be a good way of scaling validation tasks, meaning that solvers don't seem to be where the big incentive is. Is it actually in validating, where the jackpots come in? Yeah, well, solvers are getting paid by the task givers, so they're going to face competition, right. Yeah, so it would probably be more advantageous to focus validators onto a system like that, where you're actually taking advantage of scale to find the larger jackpot. Yeah, I guess you could be using other people's browsers as, like, zombie computers. Is that sort of what you're referring to? Yes. For instance, there are actual libraries out there where, if somebody has an ad blocker on your site, let's say you're USA Today (I don't think they're doing this, necessarily), this library kicks in, and what it does is mine Monero in the background. So it's a way of building monetization into websites (it doesn't even have to require an ad blocker, to be frank with you) where you're actually doing mining on people's machines while they're browsing your site. It's pretty transparent and actually isn't terribly computationally demanding; most people don't know it's happening. And I'm thinking, okay, we've got this TrueBit thing and it's great, but how do we make sure it has the scale that we need? There are a couple of ways I think about that, and the best way I thought of was to get it in the browser. And then it was like, oh well, you're using WebAssembly already. We can also use this to solve another problem: content publishers aren't getting paid for their published works.
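The jackpot discussion above can be put in rough expected-value terms. This is only a back-of-the-envelope sketch; the function and all numbers are invented for illustration and are not TrueBit's actual incentive parameters. The idea is that if solvers are rational and honest, real errors are almost never found, so unsubsidized checking is a pure loss; injecting forced errors with a jackpot makes checking profitable, which is what keeps verifiers around and solvers honest.

```python
def verifier_expected_profit(cost_per_check, error_rate, jackpot):
    """Expected profit per task checked by a verifier: the chance of
    catching an error times the jackpot, minus the cost of checking."""
    return error_rate * jackpot - cost_per_check

# Without forced errors, rational solvers are honest, so errors are
# (nearly) never found and every check is a pure loss for the verifier:
no_forced = verifier_expected_profit(cost_per_check=1.0,
                                     error_rate=0.0,
                                     jackpot=10_000)

# With forced errors injected at a small known rate, checking pays for
# itself in expectation, so verifiers keep participating:
with_forced = verifier_expected_profit(cost_per_check=1.0,
                                       error_rate=0.001,
                                       jackpot=10_000)
```

Under these made-up numbers, checking loses 1.0 per task without forced errors and earns about 9.0 per task with them; the protocol designer's job is picking an error rate and jackpot size that keep this expectation positive without making forced errors too cheap to exploit.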
So you see sites like Wired and The New York Times doing all these different ways of forcing people to pay, or otherwise monetizing their content. Okay, well, here's a system that has a monetization model built in, similar to the Basic Attention Token: you can decide to lend resources that are pretty much unused on systems already, through the web browser, and use them to validate things that already exist on the TrueBit network. Yeah, that's not a bad idea. I don't see any reason why someone couldn't build that on top of TrueBit. Yeah, I guess that makes sense. You sort of have all these different browsers running computations, and they don't even have to check whether a result is right or not; they just need to run it and send the results back to some, for lack of a better word, mothership, and the mothership checks the results and can challenge if it wants to. Yeah, that makes sense to me. Ideas like that are what we're trying to get out of Hashing It Out. When you see a technology like this, it's easy to focus on, oh well, how are we going to get to scale? But something like this solves a lot of problem sets. You don't need to focus just on scientific data alone. There are people who need scientific computations solved, but there are also problems on the other end, meaning people who want to monetize their extra computing resources, or even monetize other people's extra computing resources in the form of a browser.
And when you have a system that is decentralized and you have the ability to lend your otherwise-unused resources to another cause, you have a way of distributing load, and of monetizing that distribution of load, that didn't previously exist. That's why projects like TrueBit and even Golem are really, really interesting, and that's why I'm excited about the work you guys are doing. Yeah, that's a really good point; I hadn't really thought about it that way. I had been thinking that even with the transition to proof of stake, there will still be the need for large compute, because of things like TrueBit, things like AI and stuff like that, so sort of along the same lines, but I hadn't really thought about people swapping out the in-browser miners for TrueBit validators. And I feel like the better hardware you have, the better the system gets in general. Take Verizon, for instance: when they sell you phones, those phones have an upgrade plan. Well, you could also lend some computing resources on your phone and improve your plan if, for instance,...

...Verizon was able to capitalize on the fact that you're not using your phone all the time, you're not using your data all the time. So this is another way to meter wireless data plans while also providing a benefit to the customer: on demand, when they need it, they can stream Netflix, but when they're not streaming Netflix, Verizon can be doing things like sending them validator work, so they can actually monetize some of those compute resources. And in addition, since that system improves with better compute resources, Verizon would be incentivized as a business to improve their customers' devices, meaning they would want cheaper, more accessible devices at scale across the globe. This also lends itself to IoT. IoT devices are small, thin, distributed devices that can do some extra calculation, but a lot of them are not going to be working 24/7. When you have something like TrueBit in the world and you're able to run validation computations on these devices, you're able to incentivize an IoT network in a way that never existed before. In fact, you can even use it to validate the computation going across an IoT network. So I'm really excited about a project like TrueBit. This is kind of one of the keystones, one of the first-principles projects that we'll need in order to build a truly decentralized economy. Yeah, totally. That's why I joined the project, because I thought it would push the space forward. But you're totally right: there are always unused compute resources, and this project contributes to using them. That's really cool, a really good idea. So you've described some of the projects that are currently out there, you mentioned Aragon and stuff, but what kind of internal testing have you done? What progress is being made? How's funding going?
What's the project momentum looking like? Yeah, so we have a really small development team at the moment, and we haven't even been working on it for more than a year, so we're in pretty early stages. That being said, we have made a lot of progress, and I'm very proud of it. Funding stuff is ongoing, and our incentive layer is still in a research phase, which is related, obviously, so I can't really say too much on that. But yeah, we're just driving along, and right now we're sort of just looking for more people to contribute. Hopefully, once TrueBit OS is ready to go and we do the live demo, I think it will show people what it is and make it easier to understand, and I'm really hoping to get more contributors to the project. Yeah, so what kind of contributors are you looking for, and how could somebody get involved? Where's your point of entry for getting involved with the TrueBit project? Yeah, so we have our GitHub; if you're developing, that's obviously where to go. We have a wiki, we have some other repos, and the wiki is an overview of all the pieces. We also have our Gitter, which is a really great place to ask any questions. So we're just an open source project, and that's how I got started. On one hand we're looking for engineers, people who know about WebAssembly and know how to build these systems. Unfortunately, WebAssembly is very new, so there's not a lot of people; pretty much Parity and, I think, Mozilla have most of the WebAssembly experts in the world right now. And then also just people who are interested in Solidity and stuff like that. I would say those are the two things.
But we have one guy named Sammy, and he's the one who developed most of our WebAssembly infrastructure. One piece of our system is a WebAssembly interpreter written in Solidity, and he actually made that, so it's a pretty big achievement. But yeah, we need more Sammys in the world, and people like me to make this stuff easier to use and more user friendly; that's kind of my job. Yeah, and education is still a big part of this ecosystem, so I'm glad to have people like you on the show, because people want to learn about this stuff, but discovery is still a big problem, and learning how to break in is a big problem. So I'm really glad you had this opportunity to talk about TrueBit. So overall, what's your take on the decentralized ecosystem at the moment?...

Are you happy with it? What projects are you interested in, and what are you excited about, other than TrueBit, going forward? Yeah, so I'm really excited about the governance stuff. I was pretty hardcore into that kind of thing in high school, and now this stuff is sort of reigniting it. In terms of other projects, obviously scaling; I've been saying that 2018 is the year of scaling, so that's the biggest thing for me. Other than that, I've been pretty heads down. People sort of say that when you start developing you stop looking at prices and at other projects. But there are a lot of really cool projects out there. I guess OpenMined is pretty interesting; I have a background in machine learning, so the machine learning projects are very interesting to me. Kind of scary, though, because then they sort of have their own autonomy. I just want the space to move forward, and I think scaling is the biggest thing holding it back, like data availability. Sharding, actually; I'd say sharding is one of my favorites. My prediction is that once we have sharding on Ethereum, people will be able to experiment with experimental features on these shards, and all these new EVM chains that are coming out will get swallowed up into the shards. I really believe the winners win, and we'll see this whole ecosystem getting kind of swallowed up into these different shards. On one hand it's, for lack of a better word, centralized, because it's one system; on the other hand, because all these shards are doing different things, it's sort of fragmented, I guess, for lack of a better word, but in a good way.
So yeah, and sharding is also solving some problems around data availability. Being a developer on TrueBit, data availability is the biggest barrier for us, so I'm excited about anything trying to solve data availability; Filecoin, I guess, is looking to solve it. All right, that's a great addition to what we've created so far on Hashing It Out and the conversation we're trying to build about what people are doing to enable the next innovations in the decentralization space. Off-chain computation will play a large role in that, and doing it in a trustless way is difficult, but I think TrueBit has a novel way of approaching it that could work out and scale, in a lot of the ways we talked about and more that we didn't. So, once again, thanks for coming on. Is there any way you'd like people to reach out to you, or anywhere they can go to learn what they can do to help contribute? Yeah, so, first off, thanks for having me on the show. I love spreading the word about TrueBit and telling people about it. There aren't really too many places, I guess; just the GitHub. Check out our code and post issues, and I try to be very responsive. Pull requests are always welcome, as are any questions. We do have a Slack, but it's not quite open, so in terms of development I think it's better to go to our Gitter. Cool. Well, thanks, man, appreciate you coming on. Yeah, thanks for having me. Have a great rest of your day. And for our listeners, you can always catch us on Spotify, iTunes, and the other podcasting apps. You can find me at corpetty on Twitter, Collin at Collin Cusce on Twitter, and the podcast at hashingitoutpod. If you want to get a hold of us through text, you can just join The Bitcoin Podcast Network's Slack: go to thebitcoinpodcast.com and click on the Slack button to get an invite. Other than that, talk to us.
We'll talk back. Thanks for coming on. Thanks, guys.
