Hashing It Out

Episode 7

Hashing It Out #7: MIMIR Blockchain Solutions

ABOUT THIS EPISODE

On this stupendous episode of Hashing It Out, we touch base with Hunter Prendergast and Forrest Marshall of MIMIR Blockchain Solutions about the technology behind their multi-chain blockchain service provider system. They discuss some of the economics behind developing their staking mechanism for ensuring trust in their network. We get into some of the challenges their system is tackling and its unique approach, which sets it apart from the Layer 2 scaling solutions that exist today.

Welcome to Hashing It Out, a podcast where we talk to the tech innovators behind blockchain infrastructure and decentralized networks. We dive into the weeds to get at why and how people build this technology and the problems they face along the way. Come listen and learn from the best in the business so you can join their ranks. All right, guys, episode seven of Hashing It Out. As always, I'm here with Collin. What's up, Colin? Oh, Colin. Oh, I see what you did there, a good old Home Alone reference. Home Alone? I don't know that one. You don't know that one? It's like, "Say good night, Kevin." "Good night, Kevin." Yeah, there you go. Anyway, the episode today is with MIMIR Blockchain Solutions, with Forrest and Hunter. Why don't you say hello, give us a quick introduction as to who you guys are and what MIMIR Solutions is. So my name's Hunter Prendergast, and thank you to everyone who's listening. I'm the CTO of MIMIR Blockchain Solutions, and I am a computer engineer, formerly a naval nuclear reactor operator. And hey, I'm Forrest Marshall. I'm the software architect at MIMIR Blockchain Solutions. Formerly I did work with smart buildings and green energy research, and I've been working in the blockchain space for a few years now. Happy to be here. Cool. Well, I've gone over your site a little bit and I've actually talked to Forrest in the past, I think it was on Telegram, about what you guys are doing and what you're trying to bring to the space. But I figured maybe I'd just let you guys do that in your own terms. What is MIMIR? What is this bridge you're building, how does it work, what does it do, and what will it enable people to build on top of it? So the fundamental premise of why we set out to build what we're building is simple. As blockchains and blockchain services proliferate and reach broader and broader audiences, we are accruing more and more non-technical users, and in the process, people are trying to simplify and make it easier, which is exactly what we should be doing. Unfortunately, the shortcut that a lot of people are taking is that they're taking a normal set of nodes and putting them behind a RESTful API. So essentially, if I want to ask about something in a blockchain, I'd normally ask my local node; in this instance I'm asking someone else's node on a web server somewhere. And unfortunately, when you do that, especially with things like the Ethereum blockchain, where we have complex smart contracts and complex stateful information, there's no compact proof that you can use to verify all of that information, or at least no one is using them right now, which means that all of that information being served effectively comes with substantially less security than what was directly on the blockchain. And that's really the problem: we needed to increase the security when communicating between edge-connected devices and a blockchain-based infrastructure. Very cool. So what are you doing to actually prove the security on this?
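To ground what Hunter is describing, here is a minimal sketch (Python, using web3.py) of the pattern being criticized: a client asking someone else's node for state and simply trusting the answer. The endpoint URL and address below are placeholders, not a real service.

```python
# A minimal sketch of the "node behind a RESTful/JSON-RPC API" pattern:
# the client asks a hosted node it does not control and trusts the reply.
from web3 import Web3

# Placeholder endpoint; stands in for any hosted node provider.
w3 = Web3(Web3.HTTPProvider("https://hosted-node.example.com"))

# No compact proof comes back with this response. If the hosted node lies,
# a client with no local chain state has nothing to check the answer against.
balance = w3.eth.get_balance("0x0000000000000000000000000000000000000000")
print(balance)
```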

Like, how? All right, let me just, before we get deep into that question, because I think that goes down the technical rabbit hole, probably. Let's just talk about what these edge devices look like, how people would actually interface with your system, and what kinds of users you're actually reaching out towards. You said less technical users, but we're still talking about a technical audience that would be integrating with your system. What kinds of devices, what kinds of applications do you see this being used with? Well, I think it can be used for a number of different things. Embedded devices are absolutely something you could construct a very justified argument for benefiting from these types of systems, and honestly, any time someone is using a mobile device, if they're interfacing with blockchain technology or a blockchain service under the hood. It's substantially more expensive to use blockchains, and so if you're going to use them, you may as well inherit their security benefits, and if you're using a mobile device, there's not a good mechanism currently to meet that demand. So whether you're a technical user or a non-technical user, we're not necessarily trying to solve the UI/UX; there are a lot of companies doing that very well. We're trying to solve the more fundamental problem of creating a secure communication channel between these devices. We can definitely get into the technical rabbit hole, but I think an interesting way of looking at it, and this is at least what I gleaned from when you came on The Bitcoin Podcast flagship show, is that because you're not necessarily serving the end user directly, what you're doing, at least the way I saw it, is incentivizing people to run full nodes that then also run your service and serve the secure information to people who need information directly from a blockchain. Is that kind of where you see yourself sitting in the tech stack of going from data on the blockchain to data on someone's phone from a decentralized application? Yeah, we oftentimes refer to ourselves as a decentralized blockchain service provider, and you're absolutely right. Fundamentally, what our system is doing is incentivizing people who are already running nodes out in the world, or are interested in running nodes, to provide a very specific service using our software clients. That allows people who want to interact with the blockchain from an edge-connected device, or who just don't want to synchronize the entire blockchain state, like many technical people who will still simply rely on MetaMask, for example, to actually interact with blockchain state on a daily basis. And so we would like to incentivize people to offer a secure alternative to centralized APIs. Yeah, because right now, if I connect through MetaMask, I'm going through, I think it's, well, I typically build for my local private network or something like that, but with MetaMask I believe you're going to an API that they provide, guaranteed by third parties, and there are about three or four different APIs that they'll auto-connect into for various things. But you're absolutely right, it's such a trusted oracle. It's a centralized system that's coming in between you and the chain. So how do you guys decentralize that, and not just decentralize it, how do you do it in a reliable and scalable way? Well, would you like to take that one? Yeah, sure.
In short, I'm sure most of your listeners are familiar with the ideas of proof of stake. Fundamentally, you ensure that a set of entities have something to lose, and that if they act maliciously, you have a mechanism by which you can ensure that they will be caught and punished. What we essentially developed is a second-layer protocol on top of, well, we're building out our initial prototype on Ethereum, but any smart-contract-enabled blockchain should be...

...sufficient. We developed a second-layer consensus protocol which, rather than coming to consensus around the state of the virtual machine, comes to consensus around the set of entities who currently hold stake and can be held accountable for serving API data. And, more importantly, we use a system for proving to the requester that when API data is served to them, not only is it verified, but it is verified by trustlessly selected entities who could not have been known by the party that initially served the data. Meaning that if someone serves data to me, even if they really would like to lie to me, they have no way of predicting who will double-check their work or how many entities will double-check their work, and there are strong incentives to catch liars. Okay, let me try and get that straight. Sorry. Yeah, it's fun to try and do these podcasts over pure audio, so people have to use their imaginations. I hope you're not driving while listening. Let's see: you're sitting, as I see it, almost like a mesh layer on top of the nodes that are syncing a given blockchain. We'll call it Ethereum for now, though like you said, any blockchain can do this. This network on top of the people running nodes is serving data to people who want information from the blockchains. Now, when someone sends a request, meaning I say I want this thing from this smart contract on the blockchain, they interact with your API and there is a way for them to get that information, as well as a type of guarantee that the information is correct, because it's checked by the person they got it from as well as someone else. Right, multiple individuals, in fact arbitrarily many, and so this is another fundamental assumption. Without getting really technical again, you know, we can talk about centralized web APIs; that's a single way to facilitate it. We also have light clients, but what we've seen with light clients is that as you get these sprawling smart contract systems that have more and more complex infrastructure behind them, light clients don't scale very effectively in those circumstances, because you're continuously modifying state somewhere in the system and you always have to be able to aggregate all of the necessary proofs to know that those interactions were valid. And so this is kind of the worst of both worlds, and stealing one of Forrest's great lines, I love this one: you're downloading a lot of data and doing a lot of work and you're still not getting all of the security of directly interacting with the blockchain. So we set out with the idea: wouldn't it be great if you could construct a system in which the amount of data that needs to be exchanged and the amount of computation that needs to be performed could be arbitrarily set with reference to the value of what you're requesting? I.e., if you're requesting the address, through ENS, of the local pizza shop down the street so that you can buy a pizza, you don't care all that much, but if you're trying to, you know, buy a house, you very much care, and I wouldn't personally buy a house unless I had a real node in front of me. Don't take that the wrong way, but this is just a frame of reference. So yes, there are multiple people double-checking, and it's arbitrarily many people double-checking through a recursive operation. Okay, and so obviously you don't want it to be going on forever, so you're incentivizing it. So is it like a bounty system? Is that what I'm not understanding?
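The "arbitrarily many double-checkers" idea can be made concrete with some back-of-the-envelope math. This is an illustration of the general argument, not MIMIR's actual parameters: if verifiers are chosen unpredictably, the chance that a lie is never seen by an honest checker shrinks exponentially with the number of checks, so the number of checks can be scaled to the value of the request.

```python
# Illustrative only: probability that a false answer escapes detection when
# k verifiers are drawn unpredictably and a fraction p_honest of them are honest.

def undetected_lie_probability(p_honest: float, k: int) -> float:
    """Chance that none of k independently chosen verifiers is honest."""
    return (1.0 - p_honest) ** k

# Cheap request (pizza shop address) vs. high-value request (house purchase):
for k in (1, 2, 4, 8, 16):
    print(f"{k:2d} verifiers -> {undetected_lie_probability(0.7, k):.6f}")
```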
So you have this token, that's what it's called, correct? So is that what the incentive model is based around, that token? The incentive model, in terms of what's your incentive to actually do...

...work in the system, is just to get paid, like anything else, for providing a service. The token in our system is actually simply a reputation encoding, and so it is only something held by the entities working in the system, i.e. serving data. And that's an important part of proof of stake protocols: you don't want someone to be able to come in and lock stake where the failure of the system that they're working in is independent from the value of the stake. Like, if I have a proof of stake protocol for a blockchain, I want people to lock the currency or the token of that blockchain, because if they lock a lot of stake and they use it to attack the system, they should devalue their own asset. Fundamentally, this is kind of how some of the security around mining works anyway. If the mining pools that were large enough to actually cause meaningful damage to the major chains colluded to do that, they would immediately devalue all of their mining rigs significantly, and so even if there were some incentive to do that, it ends up being sort of shooting yourself in the foot. So that's the place where the token comes in: binding your stake to the success of the system. Got you. What incentive do people have, like, how does the token itself gain value in an economic sense? Well, in this system, the analysis that we performed, the way we based all of our fundamental assumptions, was on the idea that in a world in which we succeed at what we're trying to accomplish, and in which we are generating revenue through the act of providing the service to end consumers, we would be paying the people who are running our infrastructure for us. That means you can use different forms of standard economic analysis to say: we have a payment in perpetuity of this value, so here's what the value of this thing is. And so we actually based a lot of those economics around becoming analogous in structure and form to traditional Ethereum mining, because the people we view as the most likely candidates to be willing to operate on a business model where they're paid to run a computer somewhere, either locally or in the cloud, are the miners. Granted, the system we've described does not require a mining rig to have any valid interaction; you can participate with much more lightweight hardware, or you can use much bigger hardware and do many of these things on one machine. So that's kind of how we're assessing the value: we're going to actually pay you. So it has an intrinsic value based on what you're generating in revenue. That's kind of the way I see it; the reason why proof of work is so valuable is that you set the incentives such that the only thing you really want to do is just be honest and follow the rules, because you're getting paid to do it. And what you're getting paid to do in the scenario of MIMIR is just serve data from the blockchain to someone requesting it. So you're kind of using what we've learned from how mining has ended up becoming the only, at least as of right now, proper consensus method in existence for large-scale networks, and using the incentive mechanisms around that to fill the gap between the centralized services that we have now and trying to decentralize them. So right now, basically what we have is people like MyCrypto running nodes, Etherscan.io running nodes, and you're just saying, I hope you're doing it right.
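As a loose illustration of the "lock stake, lose it if you lie" idea just described, here is a toy, in-memory sketch. The real mechanism lives in on-chain smart contracts; the class, names, and numbers here are made up for illustration only.

```python
# Toy sketch of stake-backed reputation: entities lock stake to serve,
# and a proven accusation of serving false data wipes that stake out.
class StakeRegistry:
    def __init__(self, minimum_stake: int):
        self.minimum_stake = minimum_stake
        self.stakes: dict[str, int] = {}

    def join(self, entity: str, stake: int) -> None:
        if stake < self.minimum_stake:
            raise ValueError("stake below minimum; cannot serve in the system")
        self.stakes[entity] = stake

    def in_good_standing(self, entity: str) -> bool:
        return self.stakes.get(entity, 0) >= self.minimum_stake

    def slash(self, entity: str) -> None:
        # Invoked once an accusation is proven by the on-chain arbitration logic.
        self.stakes[entity] = 0
```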
I mean, and we've seen problems around this, at least in...

...terms of people using, was it the CoinMarketCap API, for price data, and their ability to manipulate those things to then make big money on their own behalf. This is a way of decentralizing that scenario without necessarily all of the work. Right, the work is just serving data over an API. It also incentivizes people to actually run full nodes, which is something that is desperately needed in a proof of work scenario, because right now the only way to be incentivized to run a full node is to mine, which is not an easy thing to get into; it requires a lot of capital. So I see a lot of benefits in why you did this. I guess the question in all of this is, you had to have seen a problem in the way this entire space was moving forward. Is this the massive gap you saw, and is this the solution you came up with to fix it? Honestly, where this all started was we were talking about building a piece of ad tech that needed to interface with mobile phones, and as we were building this ad tech out, we were talking about the really cool stuff you could do with it. Essentially, what we were describing was based on some research I had done long ago in university, where you could use cryptographic primitives inside of things like QR codes and other visually identifiable objects. The idea was you could determine a unique position in three-dimensional space referenced to these little things, and you could basically sell any flat surface in an augmented reality world. Cool idea, way out there on the end of, you know, future stuff that has so many barriers to entry it's insane. But in the process of looking through all of that cool stuff, we realized, man, there's no real way to get all of this data out of the repository of the blockchain and into the end device such that there's no centralized trusted entity in the pipe. And then what's the point of all of this extra effort we're doing with the blockchain? We could just use a SQL database if that's the case, you know. And so we really sat down and started trying to figure this problem out, and I can distinctly remember the conversation Forrest and I were having. After about an hour of arguing, Forrest finally looked at me and said, this is ridiculous, this is too hard to solve as a side issue. We should solve this problem by itself and then we should solve the other problems. And that's where we are today. We're solving the first problem first. I don't know if we'll get back to the other stuff, but the first problem was the hard problem, and it was the one that we realized everyone kind of had. So, yeah, that's why we ended up where we are today. So I want to get a little more at some point into how you prove that the information we're receiving is correct, but I really want to hone in on the cryptoeconomic side of things, and I hate to do this to you so early on, because we haven't had a guest on here yet who's really got a cryptoeconomic model built into their system that isn't at the protocol layer. So I really want to know how you guys went about the process of designing that model. And it sounds like you're using, correct me if I'm wrong, a DPoS kind of system? Sort of. I guess it's not exactly that, but you could draw an analogy to it. Yes, but how did you design this and how did you prove it?
Like, for instance, how do I know on your system, what kind of restrictions and variables did you put in to basically block it so that somebody can't create a little OPEC for MIMIR? You know what I mean? Well, there are a few things that go into this. The first thing that you have to be able to know about is set membership, i.e. you need a trivial...

...mechanism by which to know that the people you're talking with are the people that are supposed to be there. Two, you need to know that even in the event of collusion, even collusion on a massive scale, the OPEC as you're calling it, the probability of success is sufficiently small that you can call it negligible, or at least you can demonstrate that the system converges to stability even under an attack on the scale of something like a fifty-one percent attack. And then, in order to pull those things off, the other sets of assumptions that stem thereafter are that it should always be worth more to be honest than to lie, that it should be structured such that you can never know who's going to be double-checking your work, and that you know the information necessarily came through one of the communication channels that we've constructed. And so that's really what MIMIR's servers do in the world: we construct a transparent communication pipe in which we are a passive observer. We don't feed information into the system, but we enforce that all participants in the system are performing valid cryptographic operations. So there's a lot in what I just stated, which we may want to pull apart; what do you want to really go into first? Yeah, so I guess the first part is, you said that you're monitoring. What does that mean? So most of what our system is built around is trying to get the best of both worlds: trying to get the kind of speed and efficiency you get with centralized systems but maintain strong decentralized security guarantees. So rather than having a fully meshed network where you are purely making direct peer-to-peer connections, we engineered the system such that you would use one or more centralized, essentially high-performance message queues or content delivery systems, but the cryptography done at all the edges would treat that central messaging system as an untrusted party. For the initial rollout of the system, we'll be running that message system. But we've taken great care to ensure that the community will be able to take back all infrastructure and continue to run it if we become a malicious actor, because if you're in the blockchain space you've got to assume that you may become a malicious actor. And so when Hunter says that we're a passive observer, essentially we see the messages going in through the pipe and we have our own job in the system, which is essentially that if people start misbehaving, it's our job to attempt to kill their connection as quickly as possible, to essentially mitigate the damage that they do. And so, in addition to the cryptographic commitments made by the entities that are actually serving data, about the truth of the data, we make cryptographic commitments about the fact that an entity is still in good standing with the system, so that we can be held accountable if we allow any entity that falls out of good standing to continue to serve and interact. So there are sort of layers of checks and balances going on, but that's that particular passive observation: we provide commitments that, at a given time, an entity was still in good standing in the system.
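For a sense of what a "this entity was in good standing at this point in time" commitment could look like, here is a hedged sketch using an Ethereum-style signed message via the eth_account library. The attestation format and field values are assumptions for illustration, not MIMIR's actual wire format.

```python
# Sketch: a passive observer signs a statement vouching that an entity is in
# good standing at a given block, so the observer can be held accountable later.
from eth_account import Account
from eth_account.messages import encode_defunct

observer = Account.create()  # stand-in for the observer's signing key

# Hypothetical attestation contents: entity id, current set root, block height.
attestation = "good-standing|entity=0xabc...|set_root=0xdef...|block=1234567"
signed = Account.sign_message(encode_defunct(text=attestation),
                              private_key=observer.key)

# Anyone who knows the observer's address can verify who vouched for the entity.
recovered = Account.recover_message(encode_defunct(text=attestation),
                                    signature=signed.signature)
assert recovered == observer.address
```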
How does the end user verify that you're upholding your commitments on all of this? So this goes back to the trust issue that I keep kind of pushing down, kicking down the line. I really want to know how you're establishing trust end to end on the entire system when you yourselves are kind of sitting in the middle as just this passive observer, with these rules that you have the capability and responsibility to,...

...quote-unquote, kick people out of the system? Just to be clear about something: we can close a connection. All of the operations around the proof of stake protocol take place inside of the blockchain. So effectively we're limiting harm: if you're a malicious party and you're lying, our job is to make sure that you don't keep lying to people as soon as we've figured out that you have lied. So we close your connection, and we're effectively acting as the PAM, the permissioned access management system. And then the arbitration sequence that describes how revocation of stake occurs in the malicious-party event, we let all of that unfold inside the blockchain, defined by smart contract logic, such that we are not capable of manipulating that piece of the system. And so we can go back into all of the other pieces of the trust, but I wanted to make sure that was clear in what we had said a second ago. Okay, so you have these smart contracts that you've built that basically maintain this trust mechanism, so anybody could technically go through the audit trail and basically verify that your work is correct at any point. It's not just about retrospective auditing. Us kicking entities out of the system has nothing to do with the fundamental security; that's essentially for reducing wasted work, because we're trying to be as efficient as possible, computationally speaking. The process that is actually ensuring these entities cannot serve in the system is entirely on-chain smart contract systems, which add and remove entities from the sets which are considered to be in different reputation states and allow for arbitration and accusation of different entities. Now, one of the really important things in this system is the ability to verify set membership and to do it trivially. We have a number of layers of incentive structures, but one of them is an incredibly strong incentive to catch people lying about it. You all and your listeners are probably familiar with the basic idea of a Merkle tree, right? Well, I'm sure they definitely are, but I think it's okay if you want to explain it; essentially most of our listeners are front-end developers and friends. Okay. One of the core technologies that makes blockchains useful, and is used in most blockchains, is a Merkle tree, which is essentially a way of using a very small amount of data to prove whether something is a member of a set. So if I have a very small piece of information like a block hash, I can construct a proof, verifiable using only that block hash, about whether or not some piece of information is in that block, without needing to show you the whole block. So it's compact proofs that something is in a set. And so one of the strongest incentive levels in the system is ensuring that cryptographic certificates are regularly created identifying the Merkle root hashes, or other proofs of set membership, for the entities in different states in the system.
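Here is a minimal sketch of the Merkle-proof idea Forrest just described: with only a small root hash, you can check that one item belongs to a much larger set without downloading the set. The hash choice and proof layout are generic, not the exact encoding used by any particular chain or by MIMIR.

```python
# Compact proof that something is in a set, using a tiny four-leaf Merkle tree.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_membership(leaf: bytes, proof: list[tuple[bytes, str]], root: bytes) -> bool:
    """proof is a list of (sibling_hash, side) pairs walking from leaf to root."""
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == root

# Build the tree once (the "set" of four entities).
l0, l1, l2, l3 = (h(x) for x in (b"alice", b"bob", b"carol", b"dave"))
n01, n23 = h(l0 + l1), h(l2 + l3)
root = h(n01 + n23)

# Prove "carol" is a member using two sibling hashes instead of the whole set.
proof_for_carol = [(l3, "right"), (n01, "left")]
print(verify_membership(b"carol", proof_for_carol, root))  # True
```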
That is, whether you are allowed to serve in this kind of role, such as serving requests directly or verifying other people's work. The idea is that you can very trivially create proofs based on these about whether or not an entity from a very large set of entities is in a group, and you can greatly simplify the process of arbitrating lying about set membership, because you simply use the small root of the proof as your point of arbitration around set membership, and the requesting user who's trying to get data out will not accept any...

...information from someone they haven't seen a proof of membership for. And so, at the end of the day, we specified these algorithms, one, such that they could be trivially encoded in a small amount of data; two, such that we could share them and be held culpable if we were to share them with a cryptographic signature for a set that was not valid; and also such that independent third parties could transmit this data for effectively almost no cost. That means we're constructing a system that is intentionally engineered so that other people can reinforce or reassert what we've stated. We're trying not to be a centralized entity in any place, but we've accepted the small amount of centralization that we have to in order to pull off this protocol, with the understanding that there is a long-term play where we can pull ourselves out of more and more positions of authority as the system scales and becomes more self-sufficient and more secure on its own. It's kind of the idea with normal proof of stake systems: these are systems that don't play nice when you first turn them on, when there are people in the world who have large concentrations of assets or anything else of that nature. Kind of like when you first turn on a blockchain and your hash rate is almost nothing, you can be subject to manipulation. So we built the system such that early on we can make sure we're guiding it in the direction we want it to go and then progressively step back from it further and further, which ultimately is what we want to do: minimize both our own liability as well as our own operational expense in maintaining this infrastructure. So that makes sense to me. I actually think it's interesting. The current kind of solutions for scaling, I mean, there's, you know, sharding, and then you've got these layer 2 solutions, you've got your Plasma and your generalized state channels. You've kind of got your own brand of a layer 2 solution. It sounds like what you're working on hasn't really been described in a very generalized way to this point. You know, light clients are great and all, but you're actually serving a truth mechanism that talks about the main net without necessarily having to have the entire main net at your disposal. That's kind of a scaling solution when you think about it. I mean, it's not like this enables scaling of the number of transactions, though you could do some sort of consolidation mechanisms in front of things; it's a scaling of just general access and data size. Big problems being that blockchains are huge, you know, and they continuously grow. Mobile devices can't handle that, IoT devices can't handle that. This is a lightweight way of executing trustless mechanisms without actually having to have full participation in the network, and that's kind of interesting to me. As a side note, the more I learn about this entire space, the main overarching theme seems to be: what are some new interesting ways to Merkle-ize shit, and then how do we vote on it? Right? That's basically the blockchain space summed up in total. It really is. But what I'm curious about now is, what does this look like? How does this change how people currently do things? What does it mean for the developer who's building decentralized applications?
Now that people have a better way of getting information more quickly, with a trust mechanism, from the blockchain, do they need to change the way they operate and rethink the way they retrieve data? Well, our intention from the get-go was that, if you want to, you could treat us like a transparent web3 provider object, speaking in Ethereum terms here. For your listeners who aren't familiar, web3 is the standard library that most developers use to interact with the blockchain, and when you initially set up...

...the library, you choose how you're going to connect to the blockchain, via WebSockets or a remote API or something like that, and there are one to two lines of code that you specify which define what kind of connection you're going to use. After that your API is the same no matter what connection you're using underneath. In its simplest form, we would like our system to work just like that: from the point of view of a dapp developer, I add a couple of lines of code and now I have a secure connection. In the same way that if you want to set up SSL to ensure that when you're talking to a server, you know, HTTPS, if you want to ensure you're using HTTPS in your application, for most applications these days that's a few extra lines of code to make sure you did it right. And I kind of feel like what you're doing is almost literally that: decentralized SSL for blockchains. There is an analogy there. I mean, it's all around cryptographic proofs, and we're essentially trying to pull out the point of centralization, or really the single point of failure, in the proof of identity of the individual as well as the information being provided. And there are some other really cool things; if we have time, I'd love to come back and talk about things in this realm even more. But yeah, the base idea is that we're trying to construct systems where we minimize the expense and increase the security of interaction with the blockchain, whether you're a developer, an end consumer, or even a device. Because really, blockchains are great for storing all kinds of kernels of truth, like sets of cryptographic keys that you want to use for encryption, or sets of cryptographic keys that necessarily need to sign messages, hashes of code blocks for software updates, you know, things like this. Blockchains aren't good at high throughput; they probably never will be compared to a centralized system. But there are a lot of places in the world where you don't need high throughput; what you need is a small piece of information that you know is up to date with absolute certainty, and that gets you everywhere else. If you look at software update systems, you need the hash of the code to ensure that when you install a software update, you install the correct one. When you're sending encrypted messages to people all over the world, the most important thing you need to do is ask, do I have the right public key for this person? If you do, you can take care of everything else securely. The message doesn't have to go on the blockchain, but you need a place where you have absolute certainty that you're getting true information out, and I think that's by far the strongest role blockchain has to play: giving us a place where that little kernel of truth at the center of the system can be used to make sure that the rest of the system works right. Yep, fully agreed. Yeah. So to go back to the coin for a second: you say you're paying people. Are you paying with the coin or are you paying with Ethereum? Like, how does this work, in order to maintain that the proof of stake system has intrinsic value? Ideally, we want to have something that's not a speculative vehicle, i.e. we want it to have stable value. We don't want it to be something that's just traded around all the time. Literally, I want people to take this thing and actually use it for its intended purpose, because then I can handle more bandwidth, right?
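Circling back to the web3 provider point from a moment ago, here is roughly what that "one or two lines of code" looks like today in web3.py, and what a drop-in verified provider could look like. MimirProvider is hypothetical, a stand-in for the kind of proof-checking provider described here, not a real package.

```python
from web3 import Web3

# How most dapps connect today: a hosted endpoint that simply has to be trusted.
w3 = Web3(Web3.HTTPProvider("https://hosted-node.example.com"))

# The aspiration described above: swap the provider line, keep the same API,
# and let verification of responses happen underneath.
# from mimir import MimirProvider   # hypothetical package
# w3 = Web3(MimirProvider())

latest = w3.eth.block_number  # identical call regardless of provider
```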
We quite literally set up a business model in which we can contract on a B2B basis, on a B2C basis, you know, and literally sell our service on a per-request basis. The expectation is that on a B2C basis...

...you're going to have to eat loss leaders because all of the providers do it. But if you're talking about the major providers and you're talking about setting up infrastructure for an actual decentralized application, nobody does that for free for anyone else in the world right now, because running nodes in the cloud is expensive. I know what my AWS bill is, and I mostly just run a few nodes here and there for different projects, some of them in a test stage, some of them production. It's not cheap. So that's really the idea: we'll collect revenue for providing this sort of set of services, and we will happily share that revenue back with the people who are providing our infrastructure, because we don't have to set up a massive, autoscaling, complex infrastructure system. We literally just take a whole bunch of messages and fan them out across thousands of potential recipients, and then they can all do small amounts of work, by comparison to a few centralized nodes running really heavy all day long. What was that, Colin? I was just kind of curious, like, if a node is in the middle of committing to something, I guess it wouldn't even have to commit, I'm not really sure I understand the flow. If a node happens to drop off the network, for instance, while it was in the middle of something, does this impact you at all? Are you resilient to this? Have you had any problems with it? From the perspective of our system, that's a dropped packet. Dropping off the network is not going to hit the reputation. In order to ensure that you don't have essentially a denial-of-service attack by constantly coming on and off the network, the blockchain itself enforces a small timeout to ensure that when you drop out of activity you cannot rejoin for a small amount of time. But from the point of view of the communication protocol, that's just a dropped packet. What gets you loss of reputation in the system in the long run, i.e. loss of the token, is actively serving incorrect information, because we can't punish people because their internet went out, you know; that's not a functional way to build the system. However, an individual commitment, if you lose that, from the point of view of any communication protocol, dropped packets happen; it just has to be resilient to that. Yeah, and so essentially how we handle this is that our communication protocol for the first implementation is set up over WebSockets. This enables us to do a lot of persistent, bidirectional communication, which has some cool features for blockchain, obviously. But what that means is that you can request a chunk of information and you can get back the commitment to a piece of information such that the user is unaware that anything is still populating in the background. They get the data to load their dapp, effectively, and then verification sequences can occur in the background. But if any of the pending verification sequences fail, all outbound transactions are disallowed and the user is informed that there has been an error: reload the page, right? So the idea here is that we can give you a nice, clean graphical user experience, a good UI/UX, and yet we can create a mechanism to enforce the security behind the scenes, asynchronously, since we're running persistent connections, and if any of them fail, they time out. So it's default.
Everything defaults to the state of failure, and only on a successful verification response does it flip to, hey, we're actually in a good state. So do you depend on a single node as like a point of entry? How do I access these nodes? Can I broadcast to multiple nodes and ensure that I get the correct response back from both? Like, how does this work? You could choose to make the same request multiple times. However, the way the economics of the system are set up, there's a fairly strong incentive to not be dropping your...

...connections, and so the dropped packet analogy is one that should happen very rarely at scale. Well, let's say, as a dapp developer, I want to interact with the MIMIR system. I don't have the full blockchain and I just want to read. Okay, I want to read this field on this smart contract at this address and pull down the IPFS addresses in this field. But I also know that I need responsiveness, and there's a possibility that nodes can drop off the network at a frequency of, say, ten percent of your nodes per hour. That's unacceptable to me. So I'm going to diversify my request and send it to three different nodes and pull down that IPFS address, so my odds of actually losing it are low. Is that going to cause me problems on your system, or is it something where you can make as many requests as you want? I think your ten percent example would be a scenario where something had probably gone horribly wrong. But yeah, you could absolutely make multiple requests for the same information. And when you talk about the snappiness, what Hunter was describing about essentially confirming in the background is one of the very important things that we came up with in the system, because you want good user experience, which means, like you said, snappiness. So rather than waiting for full confirmation before you update your UI, your user interface, you update when you get the initial commitment, and the software clients that you're using from us underneath automatically disallow writes of state unless all preceding reads of state have been confirmed. And that, in essence, is what you need for the security to persist: I cannot send a write of state and change some blockchain state on bad assumptions, but I can have my application function with a snappy user interface and simply guarantee that the user will never accidentally send a transaction on bad assumptions. And just like if you're browsing a website and your browser says, you know, your connection's not private, throw up a big red flag, you need to refresh; you can do the same thing with this sort of system, because at scale the economics assess that you should almost never see an attempt at malicious activity, and if you do, you simply reset to the last known good state. So you would say that the reads have to go through before the writes. That's actually a really good point and something I hadn't considered. Let's say you've got three reads and then two writes, and then three reads and two writes, all within a fourteen-second time span. By the way, I'm assuming this is on a per-app-user basis, or per wallet, so I would have to have some way of identifying that these particular calls and transactions are basically by the same person. So if I send them all within a fourteen-second time span, is it just a stack kind of scenario where you have to execute these reads, send that data back, and then you can send those writes? And how do I know that writes are even dependent on the reads? So there are a few different ways to approach it. One, we are looking at ways to batch, which is what you were asking about a second ago, and yes, this is something that we think would be a really great idea, to batch requests.
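To make the "no writes until every preceding read has verified" rule concrete, here is a hedged client-side sketch: data is shown as soon as a commitment arrives, verification runs in the background, and any state-changing transaction is refused while a prior read is unverified or has failed. All names are illustrative, not MIMIR's client API.

```python
import asyncio

class VerifiedSession:
    """Tracks background verifications for reads and gates writes on them."""

    def __init__(self) -> None:
        self.pending: set[asyncio.Task] = set()
        self.failed = False

    def track_read(self, verify_coro) -> None:
        # Register a background verification for data already shown to the user.
        task = asyncio.ensure_future(verify_coro)
        self.pending.add(task)
        task.add_done_callback(self._settle)

    def _settle(self, task: asyncio.Task) -> None:
        self.pending.discard(task)
        if task.exception() is not None or task.result() is False:
            self.failed = True  # the "big red flag": reset to last known good state

    async def send_write(self, send_tx_coro):
        # Writes wait for every preceding read to verify, and are refused
        # outright if any verification failed.
        if self.pending:
            await asyncio.gather(*self.pending, return_exceptions=True)
        if self.failed:
            raise RuntimeError("a background verification failed; refusing to send")
        return await send_tx_coro
```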
It's not something that we put into the first version. Insofar as, you know, you're asking about putting messages in the correct place: all messages can be mutexed based on a set of precommittals of randomness that the user submits. This is part of the ungameable randomized routing thing that we were talking about earlier. But there's effectively...

...a built-in mutexing method based on 256-bit numbers, so collisions are pretty improbable. And insofar as the instance where you've designed a piece of software that, say, was doing a firmware update on an IoT device based on something that it was reading in blockchain state, obviously in this instance you would wait programmatically at the top level, on the main loop, for the asynchronous verification to complete on the back side. Or if you were running something like, say, MQTT or some other lightweight messaging protocol, you could design a mechanism by which you could use callbacks or webhooks or any other number of methods where you could effectively inject the initial request and then poll on an endpoint until you receive the completed compact proof at the back side of it. So it depends very much on what the setting is. But these are just engineering problems; these are things we know how to deal with. It's just that you necessarily should think about these things when using these types of systems. So ultimately we're going to have to write a whole bunch of docs and really talk about this, because we don't want our assumption sets to transfer poorly to the people who ultimately use it, because that's how you end up with horrible security flaws. So the good thing is developers understand the concept of dependencies, so I think if you build some sort of dependency system into whatever library... and that's actually another thing. So you have this library that's not web3, and you mentioned that you can support multichain, which is not as relevant right now because Ethereum is pretty much the only game in town in a serious manner. But you built this thing in such a way that you don't depend... You know, really, you're killing me, that was my question. Well, great minds think alike, Corey. Once you ask the question, Corey, you basically nail it on the head, right? I'm curious, so you could do multichain. I'm curious about what it takes to do multichain transactions, because if you're staking these coins on a specific blockchain, what does that mean across blockchains? Is it a token that works on multiple blockchains? There are very smart people working on atomic swaps, so that's not what we're doing exactly. However, for a given blockchain, so long as it supports smart contracts and meets a very minimal set of criteria, and basically all of them will if they're trying to compete with Ethereum, then you'll be able to run this system on top of it. Now, in terms of the actual interaction from the point of view of the developer, you probably wouldn't use web3; you'd use that blockchain's equivalent, because every blockchain is going to do things a little bit differently. I'm sure at some point some very handsome, intelligent people will make the library that abstracts over contract interaction across different blockchain back ends. But in general, most of the ideas behind our system are very general to blockchains.
As long as you have smart contracts that can enforce a couple of very simple rules in their code, are Turing complete and have decent data structures, and as long as you have signature verification, hashing, and all these sorts of fundamental things, everything ports right over. The fundamental assumption sets are that you have convergent state, that you have at least limited introspection of state inside of the blockchain, i.e. it can know about its own state, and that it knows what cryptography and hashing are. And I mean, that's about the basis of how fancy we tried to get underneath the hood. We tried to take really simple, well-founded concepts and construct a system that was resilient and didn't do any groundbreaking cryptography. You know, just don't reinvent the wheel where you don't have to, because we know we have these really sound, strong constructs; let's utilize them to the best of our abilities in unique ways to do cool things. I think he nailed it on the head on that one. I...

...have other questions before we move along to different types of conversation. Colin, do you have anything else on this line? No, please go. It's kind of like, I mean, because you've developed this particular general framework that solves a problem of centralization that we found as we've grown out, it shows you've had some foresight, or at least understanding, of where problems exist in the stack, or at least the problems you want to work on in order to make an actual end user have a good experience that has all of the security guarantees that the blockchain has. Where do you see problems existing now? Like, what holes do you see on the horizon of this entire space that need to be filled? Well, I guess I'll start off with one that's not necessarily inside of the space, but we talked about it very briefly earlier, and it's more that if I could point at something and say, man, you should be using the blockchain for this: using the blockchain for things like DNS, and using the blockchain for things like PKI. Essentially, if you look at the internet and the structure of how these systems run, they're huge data stores that need to be controlled and read by thousands or millions of individuals all over the globe simultaneously, and everyone who's participating in the system needs to know that nobody else is trying to screw up the record sets. And we can describe a lot of different systems where this technology is plug and play, and solves things where we've done layers and layers of band-aids that don't actually quite solve the problem. So if I had to point out where maybe the tech can go, that's a place. Now, if we're talking about internal problems, things that just have to be solved, I think that really the transition in the Ethereum network into things like WASM would be ideal for a number of reasons. This would increase the security around execution, and potentially it would also increase the throughput of the system. eWASM, i.e. WebAssembly for the Ethereum virtual machine, is what I'm referencing. And then, if we're talking about the Solidity compiler, there are a lot of very gross inefficiencies in terms of gas cost and gas structure; these are things that have to be addressed. When you create a dynamic array in Solidity you're actually constructing a multilayer mapping that has all this recursive hashing stuff going on. It's a huge waste of computation. So there are better ways to do just fundamental data structures. I could go on about boring things, but maybe there are some other cool ones. I mean, yeah, it's a very difficult question to answer, in part because I think there are a lot of smart people who've already very much identified what the major issues are in the short run. I do think the problems that people are working on that strike me as the most important are the ones around infrastructure and fundamental purposes: things like state, things like, we mentioned sharding and Plasma before, but anything in the area of, you know, let's reduce transaction costs and increase throughput. Some solutions have more problematic tradeoffs than others, so I'm not going to speak much about the details of that particular area, but in general, people who are working to make the technology more practical, that's where I think the interesting problem solving is going...

...on, because right now, I think sometimes there's so much excitement in the space because we all see how powerful blockchain has the potential to be. Lambos! Sorry, and we move on. Walk before we run. Yeah. A big part of this show, in my opinion, is not only talking about what solutions we're coming up with to solve the problems; it's even just asking the question and getting perspective on where the hell we are now, so that the people who are listening, who might have fantastic domain expertise in a certain area, understand where to apply that effort. Because the more we understand what the problems are, where we currently are and how to get to the thing we want, the better we can focus people of great talent on those problems. And there are no better people to give that perspective than the people who are trying to solve problems, because they've run into the issues. Just like you said: we wanted to do this thing that was really cool, but there was this other thing that didn't exist yet that was in the way, so now we're doing that thing because we have to. And so the more we talk about these gaping holes in user experience or, you know, technical debt or just the underlying mechanisms of how blockchain works, the more we can try and get an idea of where we should be spending our time. Instead of maybe making pretty dapps or focusing on things that the end user has to care about, maybe we should be spending more time focusing on solving the problems that make those things possible at all. Well, I mean, there's a trade-off here, and I fundamentally agree. If I could do what I wanted to do all day, I would lock myself in a dark room and I would be tapping away at a black screen with nothing but code on it, writing something that only other programmers would ever see, because that's the type of stuff I like to write and those are the things that I think are interesting: solving the hard problems. But even within what we've been doing for our own company, you know, we talk about all these things and they're so abstract. We feel that a lot of users, a lot of the people who ultimately will benefit the most from this, don't really get it. They don't see it, and if they don't see it, then it's going to be really hard to convince people that this isn't just some scam mechanism like, you know, all of the bad press you hear about. It's literally a piece of technology that, at an infrastructure level, can change the way that pieces of society and the internet and a lot of other things behave, and that is awesome and exciting. But it's kind of like if you were to describe electricity to someone: okay, that's really cool. But now you put a light bulb in front of them at a time when there are only candles around, and they get it. So I think that there's a balance here. We have to build things that are incredibly useful, but if we actually want to see adoption and success of this stuff, we're going to have to figure out a way to get it into the hands of users, because users are the ones who are going to effectively pay for the services we're building that allow us to continue to develop the next wave of cool things. So it's always a balance, and it's hard to strike that balance, but we try to. We try to at least put some things in the public domain that are easier to play with, and I know that a lot of other major projects are trying to do the same.
But it's a diversion of resources, in my honest opinion, that, at the same time, I wish I could be spending just developing the primary codebase. So, something I ask pretty much everybody that comes on the show at this point: you guys are building what is essentially, to me, kind of a scaling solution for blockchain.

In a sense it's also a framework for app development, but I feel like at its core it's also a scaling solution. So when we talk about blockchain scalability, there are often kind of two major camps: is it going to be like an internet of blockchains, where we have all these different blockchains, different protocols, all in this mishmash of communication with each other, where they can all communicate in some manner through some mechanism and pass value around? Or is it going to be this one base-layer blockchain to kind of rule them all and be the central source of value that everything communicates with? So are you guys internet-of-blockchains folks, or are you kind of central-source-of-truth folks? It's a combinatorial problem. Yeah, that's a great way to say it. This sounds like a more general issue of, well, both scenarios are kind of bad. I mean, if we only have one giant blockchain, then there's no pressure to improve and everything is built on a single assumption set. But if it's all chaos and we're all running around, I have difficulty imagining either scenario being the best-case scenario. I think that blockchains are a technology stack, and technology stacks need to have a decent amount of agreement, but they also need to be able to change. I think if everything's built on the same technology stack, change doesn't come easily. I guess, like Hunter was saying, everything's a balance, but then there's only one point of change: you just have to change this central system and then everything propagates. So if we're talking about the central blockchain as basically being the one that stitches them together, not that it's a universal blockchain and there are no others, but more that there's a microcosm of sub-chains and you're using a singular interface, more Plasma-style: from a programmatic standpoint, this is way easier. When I was saying combinatorial problem earlier, if you think about having all of these different chains try to talk to each other, as a programmer I have to do a lot of work. But if I create a universal language that everyone can talk in, then I only have to translate from each individual language into some universal thing, and I get a lot of really cool benefits. This is why you see languages that compile into intermediate representations of the code and then into the final assembly language out of that. It's just easier if you have a universal translator in the middle. Yeah, I'm inclined to agree at this point. I think most people, especially since the Plasma white paper came out, have definitely taken more note of the idea that maybe it's possible to have more of a central source of truth. But then again, I actually see the benefit of being able to create your own protocol in a free way, one that might completely buck the way that the central source of truth operates on a fundamental level. But then wouldn't that be a better central source of truth, a better main net, for the rest of the world? So I don't know. We're all, at least to some extent, market people. Competition is good, diversity is very good. If you have a winner but you can still challenge that winner, that's good, absolutely.
So I guess my answer is: build a really strong backbone framework that lets you hot-plug modular components. Maybe that...

...makes a lot of sense. Although, I'd put it to you that there's a trade-off here in these kinds of things, and that's specificity. Basically, what you said earlier is that you need different assumption sets, and if you have this central source of truth, the base layer has to be agnostic in every single way. Otherwise you confine everything built on top of it to abide by the assumption set that that thing has, and if it makes certain decisions at that layer, everything else is beholden to it. So you have the same situation that we've created with the internet: you make applications that work the way the internet works, which ends up with applications that centralize information, which makes honeypots. So you have social implications of the assumption sets of the base layer, and when you allow people to build things based on different assumption sets, based on what they want to do with them... At the end of the day, we're all humans just trying to talk to each other through various means of assumptions or relationships. You need to be able to build a technology that focuses on optimizing that relationship, versus folding the relationship into the technology. Yeah, so that's definitely isolation of components. I absolutely agree, but there's another set of trade-offs and another counterargument even here, which is, if we look at things like hardware design, going back to probably the eighties, I'm honestly terrible with dates, we look at things like SPI. You're familiar with Serial Peripheral Interface, like a communication protocol on silicon, right? There was no standard about what this communication protocol should be when it was created, and it was implemented differently by all of these different chip manufacturers. So now if you go back and you want to, say, interface two pieces of silicon together, it sometimes takes a substantial amount of work to figure out how to make the two sets of assumptions play with each other, because they both implemented a slightly different variant of the exact same thing. And so this is really the balance. I agree, if you could construct a protocol that was highly generalizable and allowed for clean interfaces that were independent of the underlying logic, I think that would be the ideal circumstance. But figuring out an interface that's independent of the logic, that doesn't really bake any assumptions into it, and that still lets them all talk and play nicely, that'd be the cool thing, because then you can talk to everybody but you don't have to do the same thing as everybody else. I would argue that almost what you need is the most basic staking mechanism possible, and then any sub-chains can stick into that. I'd argue that Hyperledger said they did that; that was their whole goal, to create that base layer that everything could work on top of, but we kind of all know how that worked out. Yeah, that's everything in an ideal world, right? It would always behave ideally. So I don't know, I'm not going to say that it's easy. Yeah, and really what I meant to say is that making something that is agnostic to everything, that is completely generalizable, is really hard, because we all want to do a lot of different things that work in different ways, and making something that's, you know, trustless and fair and scalable and so on and so forth all at the same time is a hard problem. No, it really is.
I mean, I guess we're describing that we want something that's Turing complete so we can generalize it. But, you know, the problem is we give everyone Turing completeness and now they do a bunch of other stuff, and now we have to deal with everyone else's technical debt. Yeah, it's the irony of the circumstance, right? I think that's kind of a great way to wrap this up. Are there any questions that we should have asked...

...you that we didn't get around to? Man, what is the meaning of life, the universe, and, you know, everything? Forty-two, absolutely. A better thing to ask is, what's the question? What does that mean? That's the question: what's the question? So I provided the question and you gave the answer. Well, yeah, Deep Thought. So how can people get a hold of you? Where can I find out more? What can we expect from you, and where do we go to learn stuff? You can find us at mimirblockchain.solutions. That's M-I-M-I-R-B-L-O-C-K-C-H-A-I-N. You never realize how long that is until you go to spell it. Dot solutions. So our website's there, our GitHub's there, and there are white papers that describe what we're doing in a technical manner. We're building out some very technical documents regarding the economics that we were discussing today; those are discussed lightly inside of the white paper, and we're going to eventually start digging in on the full yellow paper, which is the full enabling document. But there's a decent amount of alpha codebase available. I promise I'll be commenting more things soon. So says every developer ever. They'll need an intern to do that. All right, guys. Well, thanks for coming on the show; we loved having you. For the listeners who have been here, if you haven't yet, go on iTunes and subscribe. You can find us through Spotify, any podcasting app probably; if it doesn't exist, tweet at us and we'll fix that. And yeah, thanks for having us on the show. Yeah, thanks, guys, you were great. Thanks.
