Hashing It Out

Episode 70 · 2 years ago

Hashing It Out #70 - Akash Network - Greg Osuri

ABOUT THIS EPISODE

Akash is a cloud for serverless computing. Using the incentivization schemes enabled by the Cosmos Network, Akash empowers architects and builders of the internet with the ability to use underutilized computing resources. Conversely, those with excess computing power can open their systems for auction, enabling them to recoup value from those resources. Corey and I have the pleasure of speaking with CEO Greg Osuri about their engineering effort and how they built this service.

LINKS

Welcome to Hashing It Out, a podcast where we talk to the tech innovators behind blockchain infrastructure and decentralized networks. We dive into the weeds to get at why and how people build this technology and the problems they face along the way. Come listen and learn from the best in the business so you can join their ranks. All right, welcome back, everybody, to Hashing It Out. As always, I'm your host, Dr. Corey Petty, with my trusty co-host, as always, Collin Cusce. Say what's up, everybody, Collin. What's up, everybody. And in today's episode we're talking to akash.network, or Akash Network, and we have on the CEO, Greg Osuri. Greg, the normal thing: tell us where you come from, how you got into the space, and then we'll just start talking about Akash Network. Sure. Hi, everybody. Hi, Corey and Collin. Great to be here. I am an applied economist, a computer scientist, and a visual storyteller. I began my computing career programming when I was twelve; in what, twenty-five years, I can't really remember a time I did not program. And the origins of Akash really go back all the way to the time I was a kid. For me, growing up on a farm in India, affording a computer, getting access to a computer, was kind of a big deal; I learned how to program without using a computer, using books. So, fast forward: what we're doing at Akash is making computing more available to more people. So I think the journey really began there. But to give more specifics on my background: I moved to San Francisco about fourteen years ago and began my career building data centers for IBM, for big telcos and big pharma, and, you know, decided to move to the high-growth, high-pace environment of Silicon Valley about thirteen years ago and started a company called AngelHack. AngelHack was a hackathon-based accelerator. The idea was to bring people together in a room.
Basically, bring people together who are not very extroverted, not very outgoing, put them in a room for thirty-six hours, give them all the food and all the help they need, and see what comes out of it. So we kind of made this idea of a hackathon more mainstream, and by the time I left the company we had about 150,000 developers in the ecosystem, across about fifty cities around the world. And the best thing that I learned was to understand developers' interests, and I had incredible opportunities there to launch several companies from hackathons; the biggest one would be Firebase. If you know what Firebase is, Firebase is Google's app development platform, and it is what I consider to be one of the...

...most loved developer tools, you know, kind of a top ten; I think it will clearly be out there. Tools like that, which really help people get things done in the least amount of time, are what I had experience launching. And I stumbled upon what I realized was a big challenge: after you develop your code, what happens then? What happens from the time you have your code on your GitHub to the time it goes to your users? It turns out that very piece, which is called deployment, was extremely challenging. This was around 2013. I decided to focus on that part of the development of software and stumbled upon this technology called Linux containers, very early in its phase, and found a lot of promise when it comes to solving the deployment problem. I started focusing on it and got myself involved in projects that had just started back then; Docker is a big one, and Kubernetes. And really, you know, I was just lucky to be at the right time and the right place. I met a guy called Joe Beda, who created Kubernetes, and once I saw the demos of the project right before the launch, I decided to get involved much deeper. There was a lot of promise, and that's how I started working more seriously in the cloud automation space. Today Kubernetes is used by eighty percent of the cloud. So that's really how my journey began, with making software much simpler to deploy, and then we started Akash Network. You know, the team that started Akash Network comes from Overclock Labs. Our mission was really to take commoditized compute to a level where it can be deployed at the edge at high performance and low latency, and what we discovered was that there is a ton of capacity sitting in data centers. About 85 percent of the capacity sitting in 8.4 million data centers is not used.
And surprisingly, there is this new sort of industry that was growing rapidly, called cloud service providers, dominated by Amazon, Google, and Microsoft, that's gaining market share and capitalizing on, or arbitraging, access, for lack of a better word. So there's this incredible capacity that's not being used, and there are these few companies that are capitalizing on that, and we felt that was fundamentally broken. So we started experimenting with unlocking the capacity by creating a marketplace, and we wanted to design this marketplace so that it's not going to die if we stopped working on the project. We come from a big open-source background, and we wanted to design the software in a manner that is peer-to-peer. So that looks something like Git. You know, if you use Git, Git nodes are essentially not centralized; they have this pull-or-push model. We wanted to do something like Git, but with trustability features, and that's how we ended up building on a blockchain. So the whole history really comes back to this: we have this massive unused capacity, there's an opportunity for a marketplace there, and when we decided to do...

...a marketplace, we wanted to do it in a decentralized, peer-to-peer manner. Git was a major inspiration, and if you add trustability to Git, blockchain happens. So that's really the journey of Akash Network. First question on that one. You have this massive unused capacity, and I come from an HPC background; even scientific clusters have a good portion of their capacity sitting idle in terms of overall utilization, and I'd imagine for these massive data centers it's even more so, because the hardware being used is a lot more commoditized. What's the problem? Why is it underutilized? Is it lack of access? Is it affordability? Is it that there aren't enough people who need to run jobs, and people built data centers that are too large? What is it? So I think it comes down to two core problems, right. The first problem is deploying for peak. If you are a compute provider, say you're Walmart, or say you're TurboTax, for example: during tax season, TurboTax's utilization is somewhere around ninety-seven percent; those are the real numbers. And during the non-tax season, which is nine months of the year, their utilization is like three percent. But you still need that capacity three months of the year, right? So peak planning is the number one cause of unused capacity. And the second cause of unused capacity is suboptimal distribution of workloads. So you hear these terms, you know: SQL servers, DNS servers, app servers. What that really means is they're dedicating an entire machine to one function. They usually run these machines in a very homogeneous manner, on homogeneous architectures. So that causes significant, you know, inefficiency when it comes to distribution of workloads.
So if you can somehow capitalize on unused capacity, and somehow capitalize on the distribution of jobs or workloads by scheduling them in a manner that is bound by a good performance envelope, you can solve the efficiency problems. So yeah, it's peak planning and inefficient usage, or inefficient architectures for scheduling. So your, I guess, desired people to provide this compute resource are not going to be the big-box players who already have cloud services, because they have containerization and they've costed it out; even in your recent white paper you kind of figure out how they cost out and price their services. It's people who have planned for a certain amount of hardware infrastructure, and they have it but don't utilize it for a certain part of the year, so they can provide the same resources for a much smaller amount. Is that about right, or is it broader than that? So, if you're asking who provides on Akash Network, it's a wide, wide range, right? All the way from home deployments, to Raspberry Pis, to, you know, mega data centers, to cloud providers. And, you know, it's incredible: some of the tier-two providers that we've been talking to still have massive underutilization problems. Any time there's a physical box somewhere, it is underutilized,...
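The deploy-for-peak math quoted above can be sketched quickly. The figures are the TurboTax-style numbers mentioned in the conversation, used here purely for illustration:

```python
# Toy illustration of "deploy for peak": average utilization when capacity
# is sized for a three-month peak and mostly idles the rest of the year.
peak_months, peak_util = 3, 0.97    # tax season: ~97% utilization (figure quoted above)
idle_months, idle_util = 9, 0.03    # off season: ~3% utilization

avg_util = (peak_months * peak_util + idle_months * idle_util) / 12
print(f"average yearly utilization: {avg_util:.1%}")  # → 26.5%
```

In other words, hardware provisioned for the peak sits roughly three-quarters idle over the year, which is the capacity a marketplace could resell.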

...no matter what, right? So it's really a wide, wide range of people. Some of the biggest providers so far, we saw on the testnet, are machine learning companies that deployed hardware in their facilities on site, and these companies use the hardware maybe one to two times a day, maybe a couple of hours, and most of the time they're not using it. And they want to, you know, recapitalize, or recover some of the capital, because this hardware is a bit expensive; you know, I have a Supermicro right behind me, which is my home computer that I barely use, but that's going to be on the network in an hour. So who do you see the end user being, though? I think that's what Corey might have been getting at. Okay, sorry, I must have misunderstood. Apologies. But who do you see consuming these resources? What do you see their user experience being? What kind of availability guarantees do they have? You know, there's obviously no, it doesn't seem like there's any one-to-one service-level agreement. So who do you see the target audience for using this unused compute power being? Right. So our biggest demand comes from machine learning and data-intensive workload verticals, because the cost to run these jobs is increasing exponentially, because the size of the data being processed is growing, especially if you're a deep learning or machine learning company. So some of our biggest users today spend about twelve million dollars a year on cloud, and some of the smaller-sized users spend about five hundred thousand on cloud. These are large-scale, data-intensive applications, run by growth-stage companies that are extremely sensitive to cost. So any cost advantage we provide them is extremely attractive. Using Akash's model, the cost advantage, you know, is about four to five times lower than the current cloud. So that's an extremely attractive incentive for machine learning companies.
And from a user-experience standpoint, I believe the Akash command line is perhaps the best command-line experience when it comes to deploying, period, not just cloud or blockchain. And the reason for that, really: you know, if you look at my background, most of my work, any library that I have with over a thousand stars, was about making, you know, the developer experience on distributed systems more delightful. So my background with making command-line interfaces and designing developer interfaces, when combined with cloud deployment, meant we ended up creating an incredibly usable tool. And I believe one of our partners ran a survey about it; they asked about a hundred developers, "What tool in blockchain or crypto do you think has the best developer experience?", and guess what people said: Akash. So that's really a testament; that really speaks to our experience. And we do have a wall of love: if you go to akash.network/love, you'll see what people say about their experiences when it comes to deploying on Akash. So: extremely productive interfaces, and very delightful interfaces, by design. From an SLA-guarantee standpoint, this is a decentralized cloud, so there are no guarantees, right; there are no contracts. Although...

...we do have mechanisms for fault tolerance. And really, the design of the system goes back to designing for hyperscale. When you design for hyperscale, you do not assume trust, and you do not assume performance at the edge. So you design systems that are highly versatile, highly interoperable, and flexible. So the containerization technology we've been working on is designed in a manner that you can switch over from machine to machine in case of a node failure; in a manner that is fault tolerant, but also in a manner that you do not lose a session. Right, so someone experiencing a fault-tolerance event will have no idea that there was a node switchover while using Akash. So, thanks to the hyperscale architecture, designed very similarly to how Google or most hyperscale architectures are deployed, a fault-tolerance event doesn't disrupt the actual end-user experience. On top of that, I'm curious: for people who have these larger jobs that require a lot of resources, does that mean they require specific subsets of the network that are all basically in the same physical location, like all in the same data center? Or can those types of things be distributed across whatever devices are capable of running them? How does network latency come into play? Because my background is in scientific computing, so most of the time you had to use clusters that had very fast interconnects, and I'm curious, because there's no way you could provide that type of service in a distributed network. The jobs that are going to be run are things like machine learning, but even those sometimes, based on the size of the job, need to be in the same physical location. Is there a way you can handle that? I know that's something...
...Golem has had troubles with in the past, in terms of this kind of blockchain-computer model. Right. So, local residency versus remote residency, right. It turns out, when we did a lot more research with the users, it really comes down to price-performance. So when you have a price-performance metric where the cost is insignificant, latency becomes insignificant as well. And if you categorize the kinds of workloads in a very high-level way, there are two types of workloads: latency-sensitive workloads and batch-optimal workloads. Right. So when you do batch-optimal work, residency, even though it looks like you require it, the requirement is really driven by cost. So our thesis is: when the cost is exponentially lower, like eight times, nine times lower, latency becomes less important for batch-optimal workloads. Now, latency becomes extremely important when you have long-running jobs, which essentially need to respond to a user event within hundreds of milliseconds. Now, what Akash provides is an extremely flexible deployment architecture where you can choose the node that you want to deploy on, right. So when you have a job that requires local residency, local clusters, you can choose to deploy on a cluster that meets X amount of node capacity, and only then...

...deploy the job on the cluster; if not, don't. So what Akash does, really, is open up the market, from these massive clusters to small little deployments all over the world, and give the sovereignty to the user to choose what they want. And of course, the multi-cloud architecture of Akash also means that cloud companies can plug into Akash directly, which they're doing right now. We have some of the tier-two providers plugging into Akash that have a lot of local cluster availability. So what Akash provides is this extremely flexible model where, in case you want to choose a locally resident cluster with, you know, a minimum node requirement, you can choose to do that, and once that job is complete, you can choose to switch over to a more cost-optimal deployment structure. So it's not limiting, but rather additive on top of the existing infrastructure. And if you look at Akash from another lens, you can look at it as a multi-cloud deployment platform as well, using the big clouds, right. So it's really a technology that is designed to run workloads in a performance and cost envelope defined by the user, and we provide this extremely simple, usable contract language called SDL, or Stack Definition Language, that lets you orchestrate workloads, from simple cloud functions to very advanced deep learning to multi-regional, multi-zonal clusters all over the world, in one single file. I guess it seems as though the way you've built it gets around, or fixes, some of the issues that I think earlier distributed blockchain-computer platforms were having: how do you prove a job was performed appropriately? You're not necessarily creating a marketplace for jobs, where you send off a job and get an answer back. You're creating a marketplace for the resources themselves, and the containers, I guess, associated with them. Right?
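The placement model described above, filtering providers by user-required attributes and then choosing on cost, can be sketched as a toy. The data structures, field names, and provider entries here are invented for illustration; this is not Akash's actual SDL or bid engine:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    region: str       # hypothetical attribute; real deployments declare their own
    nodes: int
    price: float      # price per unit of compute, arbitrary units

def place(providers, region=None, min_nodes=1):
    """Keep providers matching the user's constraints, cheapest first."""
    matches = [p for p in providers
               if (region is None or p.region == region) and p.nodes >= min_nodes]
    return sorted(matches, key=lambda p: p.price)

providers = [
    Provider("home-rpi", "eu", 1, 0.5),
    Provider("ml-lab", "us-west", 16, 2.0),
    Provider("mega-dc", "us-west", 400, 1.2),
]

# Latency-sensitive job: require a local us-west cluster with >= 8 nodes.
local = place(providers, region="us-west", min_nodes=8)
# Batch-optimal job: no residency requirement, just take the cheapest capacity.
batch = place(providers)
print(local[0].name, batch[0].name)  # → mega-dc home-rpi
```

The point being made in the conversation is exactly this sovereignty: the same pool answers both queries, and the user's constraints decide whether locality or cost wins.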
So, verification in compute is an extremely hard problem, especially when you're doing general-purpose compute, right. Verification is possible when you have access to the language or the framework where you can plug in, but when you're doing general-purpose compute, it's next to impossible, almost impractical. So instead of taking a verification model, we took a web-of-trust model. Like, today, when you deploy a workload on Amazon or Google, you just assume that they're telling the truth and they're going to execute it. There is no guarantee that they're going to provide you the workloads they said they're going to provide. So we emulated that kind of pattern, but we went one step further in making sure we could create a system that is flexible in terms of, you know, arbitrating trust. Right. So a web-of-trust model was a lot more optimal than a verification model. And when you have dishonest nodes, the reputation of a dishonest node should give enough information to the user to make a decision whether to deploy on that dishonest node or not. And so the protocol does not limit you in terms of choosing what node you want to deploy on based on honesty or dishonesty; it's really pushing the sovereignty to the user and letting the user decide. Kind of. Okay. Collin, I know you had...

...a few questions in terms of hardware, I noticed. Well, I did notice on the, you know, landing page of your website, you listed SGX under your stack in one of the graphics. Kind of curious if you're actually using SGX yet, or if it's something in your pipeline, and if so, what does that look like? Yeah, so we are using SGX. We had a demo from one of our partners where, you know, on Amazon, where you deploy something on Amazon and it doesn't have SGX, it can dump your keys; literally a noisy-neighbor problem, you could call it, right. And they deployed with an SGX runtime on top of Akash, completely giving you this super amazing enclave. That's impossible on Amazon; it turns out Amazon doesn't support SGX yet, and I don't know if they're going to. And yeah, so it works. It does require a certain level of expertise from a user standpoint to understand how to use SGX, but it works. The only limitation, or rather, there's a tradeoff between security and performance. So when you have high-performing systems, if you're going to encrypt everything, especially at runtime, it's going to perform very slowly, and you need to understand those tradeoffs as a developer to use the SGX feature. And since Akash is a diverse marketplace with any type of compute, whether it's SGX or TrustZone or, you know, the processor's equivalent, secure enclaves really depend on the compute. And if you go to Akash right now and query for providers, you'll know which providers have SGX and which providers don't. So yeah, SGX is a primitive on Akash. Yeah, I saw. I'm actually curious; I'd like to dig through what you're doing there in the source. Have you released that yet, or did I miss it? Where could I take a look at some of that? Yeah, it's available right now. You can go use it. It's SGX, but you've got to bring your own runtime. So it's up to you.
Oh, I see. You can literally, okay. So if somebody brings their own SGX runtime, you can deploy it using the Akash network, and then the providers themselves will lend their SGX, I think, actually, to the signing, right, like that, securely for you. Okay, I got it. Cool. So, on runtimes: I mean, there are a few runtimes. We would recommend Anjuna's runtime; they're an amazing company, that's what they do. I believe some of our friends are working on their own runtimes; MobileCoin is working on a runtime, last I heard. Lots and lots of runtimes are being developed that have different, you know, attestation and different mechanisms, all using SGX. We haven't seen any major open-source implementation for SGX yet. So I think the technology is still early and under development, but I'm pretty certain that's going to change, like, in the next year. I think so too. So, yeah, that's really interesting. And so, what kind of incentivization model does Akash Network have? Like, what do you, Overclock Labs, have for putting this together? How are you tied to this network? Are you minting coins? Like, how are you guys actually making money off of this? How do we make money? The billion-dollar question. How does any open-source company make money? We don't make money from Akash Network per se, but we make money if Akash Network succeeds. So our current model is to...

...sell hardware to support this ecosystem, and we are doing that using Supermicro. And our second business model is to provide managed services for large providers, and for large customers that want to deploy without touching the command line; we do it for them. And when we began Overclock Labs, our thesis was that we want to unlock the edge by creating software that makes edge deployment as easy as, you know, a single deployment on Heroku, right. And so we needed this network in order to succeed with our original business. Now the network is deployed, and as the network grows, we expect to see a lot more usage of services, especially from a managed-services standpoint. So it's a very interesting model that a lot of others are experimenting with when it comes to open-source software. So far the business models for open-source software have been primarily subscription-driven and open-core-driven, but now we're experiencing a third business model, which we call tokenized open-source software, where you have an incredible incentive layer, the token layer, on top of the open-source, you know, platform, where the source code as well as the data is open source. So how do you monetize? A lot of us are still trying to figure that out, to be honest with you, and our solution is selling hardware. Yeah, that's definitely, I don't think anyone's figured that out yet. I think there's quite a bit of potential there, coming from working for a company trying to do something similar in terms of a tokenized open-source project. Now, I think that's a great time to get into the economics of the network, because how you lay out the incentives and the key players of the network, and how that token flows across it, what the life cycle of the token is, will say a lot about your potential to succeed because of that.
If the equilibrium isn't appropriate, the network can never really scale, or it caps the performance of what the network could be. Right, right. So our thesis on this whole decentralized-cloud token economy really began with a paper I published called "Bootstrapping a Free Market by Borrowing from the Future." So essentially my thesis was: in a two-sided marketplace, the first challenge the market has to solve is the demand-supply paradox, the question of what comes first, right. And the market has to solve this problem in order to create equilibrium between demand and supply, so that the market can unlock the network effects, which is what drives the go-to-market. And the question now is: how do you do that? So our thesis was: when you bootstrap supply to a point where it's extremely attractive to the demand side, on cost, for example, the demand side will catch up. And we can reduce the cost by subsidizing supply. So we created a model where, as a supplier, someone that has compute essentially has zero risk in dedicating the compute to the network, and the...

...risk is mitigated through a subsidy the network provides, and that incentivizes the providers to reduce cost, to keep costs as low as possible, right. So really, you're borrowing from the future to bootstrap the present, and really what you're doing is creating the effects of liquidity at an agreeable price point, given what the inflation is going to be like. So the paper I wrote really addresses: how do we create inflation in a manner that is acceptable for a price point, and what does that price point look like, right? And one of the things we did: the system uses a proof-of-stake mechanism for consensus, and the token here is used to provide economic security to the blockchain. And what's interesting with this token, when it comes to proof-of-stake systems or inflation systems, is that the rewards are essentially dependent on, or driven by, the amount of stake, but also the time you stake, right. So that creates much better price performance when it comes to the token price. In a bear market, we expect people to lock up tokens for a much shorter period, whereas in a bull market we expect people to lock up tokens for a longer period, which translates into the inflation rate. So in a bull market, inflation is going to be higher, and because there's less price pressure, we assume that's going to be okay; but in a bear market, where we have a lot of price pressure, the token lockups shorten, so the inflation is going to be a lot lower. So the token model is designed to be adaptive when it comes to performance from a market standpoint. And from a token capitalization-flow standpoint: structurally, the capital comes in, the capital goes out.
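The adaptive-inflation idea in the answer above, rewards driven by both the amount staked and how long it is locked, can be modeled as a toy function. The base rate and duration bonus here are invented for illustration; the conversation does not state Akash's actual parameters:

```python
def staking_reward(amount, lock_months, base_rate=0.05, bonus_per_month=0.002):
    """Toy reward: a base rate on the staked amount plus a duration bonus,
    so longer lockups earn proportionally more."""
    return amount * (base_rate + bonus_per_month * lock_months)

# In a bull market people lock longer, so aggregate rewards (i.e. issuance) rise;
# in a bear market lockups shorten and issuance falls with them.
bull = staking_reward(1_000, lock_months=12)   # 1,000 tokens locked for a year
bear = staking_reward(1_000, lock_months=1)    # same stake, locked for a month
print(bull, bear)
```

Under any parameters of this shape, shorter average lock times mechanically produce lower total issuance, which is the adaptive behavior being described.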
The capital comes in, really, from the users using the network, and it gets distributed to the providers, to the stakers, and to a whole lot of stakeholders in the ecosystem. And, I believe, the inflation rewards are designed to, first of all, create incentives for providers, but also create incentives for the stakers to provide economic security for the chain, right. So that's the very, very high level. And, very importantly, the token really functions in the background. It doesn't mandate that you use the token in order to use the network. And Akash supports a multi-currency settlement mechanism, where you can settle using Bitcoin, or USDC via Ethereum; a lot of folks are moving to stablecoins, so there's stablecoin utilization. So Akash supports a wide range of tokens for settlement, and of course there are a lot of benefits to using Akash's own token when it comes to cost and other things. But providing a multi-currency settlement mechanism, we believe, solves quite a few problems. It's very important for us, because the Akash network, you know, being cloud infrastructure, is, as you know, like any cloud system where the solution comes from various different systems working together, right. So, for...

...example, you have NuCypher, which does key management; or Storj, which does archival storage; or Helium, which does, you know, long-range wireless networking. There are tons of incredible projects right now in the cloud ecosystem that perform critical functions that you, as a solution developer, require to interoperate with. So with Akash, you can literally pay using any token and use all these services without having to hold their tokens, and have that level of flexibility and that level of comfort, which we believe is super important for driving adoption of the decentralized cloud. That's a huge problem today, because if we cannot interoperate, we cannot automate in an organized manner. How do you price a specific unit of compute, and does the, I guess, variation in the price of the token itself come into play with that? So, it's a very challenging problem. You don't price the compute; Akash uses a reverse-auction model, where the providers and the payers, the tenants, price the compute using an auction. Because, you know, every attempt to price compute has ended up in utter failure, AWS being, I think, a big example, in how they tried to price RAM, or commoditize RAM. Every time you commoditize something, it turns out, you know, it's a classical game-theoretical problem, right: you have the Nash equilibrium problem, as you'd call it in game theory. Every time you create a commodity, there's always going to be room for people to cheat, right, to provide you this same commodity at a lesser quality. So compute is very, very hard to commoditize. Instead of trying to commoditize it, we created a free market driven by auction. That's been our thesis. Because the variability, how many types of RAM you have, or how many types of, like, clock speeds for CPUs, is extreme, right. There's a lot of variability in what kind of devices you're going to be using. So it's like diamonds; it's not like gold.
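The reverse-auction pricing described above can be sketched in a few lines: the tenant names a maximum price, providers bid underneath it, and the lowest bid wins. This is a toy model with made-up names and prices, not the chain's actual order-matching logic:

```python
def reverse_auction(max_price, bids):
    """Tenant sets a ceiling; providers compete downward; cheapest valid bid wins.
    `bids` is a list of (provider_name, price) tuples."""
    valid = [(name, price) for name, price in bids if price <= max_price]
    if not valid:
        return None  # no provider willing to meet the tenant's price
    return min(valid, key=lambda b: b[1])

bids = [("mega-dc", 8.0), ("ml-lab", 6.5), ("home-rig", 9.9)]
winner = reverse_auction(max_price=9.0, bids=bids)
print(winner)  # → ('ml-lab', 6.5)
```

Note that no one fixes a price for "a unit of compute" anywhere in this flow; the price is discovered per order, which is the point Greg is making about commoditization.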
Okay, great. How do you prove the commodity that's being supplied is equal to what was asked for? Like, I know that proving that a computer executed a program, you know, in good faith is something you can't do, but I'm not really sure you spoke to this. Do you make any attempts to make sure that the person who is putting their resources up for auction actually has those resources available? If they said, yeah, it's four GPUs, can you verify that that's actually the case? How do you do that? So, to answer your question: you can't, and stay practical, right. There are mechanisms you can deploy to prove it, but if the provider has to prove, every time they provide something, that they're telling the truth, it's just not going to happen. So, you know, the best plausible proof that you have is something like an asymmetric memory-hard function, if you want to test memory, right: essentially run a job, like an encryption task, that requires X amount of memory, and see whether it meets it or not. So instead of proving, we provide a trust-based model, right. If I say that this provider is legit, and I'm someone with a high reputation, then by association this provider has a reputation, just like PGP, essentially. And if this provider is lying somehow, and I run a challenge on the provider and I prove that the provider is lying, I take my trust points out...

...and if more and more people run the challenge on the providers and are continuously distrusting them, that means you, as a user, should probably distrust the provider too. It's a reputation-style mechanism, maybe similar to how proof of stake works. You know, there's no incentive involved in proving that someone doesn't have a specific resource; it doesn't net anybody any money. Can you prove it? You can't even prove that at all. So here's the problem, then. Say I'm a provider, okay? I provide a particular type of resource. Let's say I happen to have a boatload of SGX-enabled systems for some reason, and I know there's a sudden spike in demand for SGX for whatever reason. Okay, cool. I'm picking on SGX; it could be graphics cards, I don't care. And, like, I notice a competitor pops up. So what do I do? I go ahead and submit a bunch of requests under my other accounts, you know, my fake accounts, and then I say, nope, you didn't do it right, ding your trust. Nope, you didn't do it right, ding your trust. And then I knock out my competitor. What's stopping this kind of scenario from happening? So that's why reputations are important. If you are a node that has very low reputation, you're not going to knock out your competitor; your knocking them out doesn't really matter, right? But if you are someone with high reputation that's knocking them out, that means there's some weight to it, right? And... sorry to interrupt. I think there's a disconnect here, because I'm not understanding what the word node means there. Very sorry to interrupt, but when you said node, I thought you meant a compute node. I think you mean somebody as a provider. So there are just two sides: there are providers and there are consumers. You know, producers and consumers: people consuming compute power and people producing compute power.
The people producing compute power need a trust mechanism to rate whether they are actually providing the correct availability and the correct compute power, the memory and all that stuff they need to actually take care of the job. All you care about is that the job gets done right and is run in good faith. Then there's the consumer side, but I don't see a trust model necessary for them, and there's no way to actually verify that they're applying the trust model correctly. Do you see what I'm saying? So, like, if I'm a producer... I'm a provider, I provide my compute power, and I see a competitor who's also providing compute power. I could easily have a myriad of consumers who have actually used the service every so often, just dumping my money back into myself, kind of thing, or used other services just so that, for whatever reason, they have a small reputation or history on them, and then they dump a job onto my competitor and say, no, you didn't do the job right, this is wrong. How do you deal with that kind of thing? Yeah, so we're talking about a Sybil problem, right? So, essentially, every time you deploy something, there's a cost. There is a cost that goes directly to the provider, and there's a portion, a take fee, you could call it, that goes to validators; I think it's about a twenty percent take fee right now. So your cost of just, sort of, reducing your competitor's reputation gets increasingly higher the more jobs you do. Essentially, look at the cloud today, right? You have Amazon, Google, Microsoft: three companies essentially providing compute. Now, as a user, what I would do is run a benchmark on Amazon, Google, and Microsoft, right, and based on the benchmark I establish my own reputation gauge, and I'll be like, okay, you know, Amazon,...

...for the same instance, you know, the same configuration, Amazon just performs better; I'm not going to use Google. Now imagine me taking my reputation gauge and putting it online so other users can see my data and what I experienced, right? Now it's up to the user to make the decision whether they want to deploy on Google, Microsoft, or Amazon and run their own benchmarks, or just trust that I'm telling the truth and go with my data. Right? So that's really the model that we took with Akash. The reputation we provide does not limit scheduling of the job, but it gives enough information to the tenant that's deploying the job to make the decision themselves on whether they want to deploy the job or not. So, in a case of Sybils, where provider A is trying to discredit provider B, and I'm a tenant, it's up to me whether I want to take that reputation and run with it, or whether I run these jobs myself and use my own reputation to challenge the reputation created by provider A's Sybils. So that's really why I said we have a web-of-trust model. It's a trust-based system. It's not a, you know, permissionless, verifiable system that's going to limit your job deployment based on some factor. I see. Okay, yes. As I listened to that, I thought you were describing a trust-point system attached to particular nodes, but you're actually just having people report feedback, like Yelp. Correct, so, like, reviews, essentially. All right, I've got a few questions from here as we start to wrap up. What the hell is a Supermini, and why do I need one? So the Supermini is a home appliance that is essentially a mini supercomputer, and it is, I think, as powerful as a Cray from the seventies, which was the fastest supercomputer in the US, or more powerful than that.
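The web-of-trust mechanics discussed over the last few exchanges, ratings weighted by the rater's own reputation plus the take fee that makes Sybil reviews expensive, can be sketched as a toy model. All names, the 0.1 starting reputation, and the prices are illustrative; the 20% take fee is the figure Greg quotes.

```python
from collections import defaultdict

class WebOfTrust:
    """Toy web of trust: every rating of a provider is weighted by the
    rater's own reputation, so fresh Sybil accounts carry little weight."""

    def __init__(self):
        self.reputation = defaultdict(lambda: 0.1)  # everyone starts low
        self.ratings = defaultdict(list)            # provider -> [(rater, score)]

    def rate(self, rater, provider, score):
        # score in [0, 1]: 1.0 = job done well, 0.0 = provider lied
        self.ratings[provider].append((rater, score))

    def provider_score(self, provider):
        entries = self.ratings[provider]
        weight = sum(self.reputation[r] for r, _ in entries)
        if weight == 0:
            return None  # no information; the tenant decides blind
        return sum(self.reputation[r] * s for r, s in entries) / weight

def sybil_spend(jobs, price_per_job, take_fee=0.20):
    """Each fake deployment costs real tokens: the take fee goes to
    validators, and the remainder actually pays the competitor."""
    total = jobs * price_per_job
    return total * take_fee, total * (1 - take_fee)

wot = WebOfTrust()
wot.reputation["veteran-tenant"] = 5.0
wot.rate("veteran-tenant", "provider-b", 1.0)
for i in range(20):  # twenty fresh Sybil accounts pile on zeros
    wot.rate(f"sybil-{i}", "provider-b", 0.0)
print(round(wot.provider_score("provider-b"), 2))  # 0.71: the pile-on barely dents it
```

The two defenses Greg names both show up here: low-reputation Sybils barely move a provider's weighted score, and every fake job the attacker posts burns real tokens, most of which end up paying the very competitor being attacked.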
For me, if you're a user, in simple terms, it makes you more money than you spent purchasing the device in the first year. The ROI is expected to be somewhere around a hundred and ten percent in the first year. And the cool thing about the Supermini for me, from a use-case standpoint, is that it gives you a mechanism to bring the cloud to your house, right, and it has an app-store model that lets you deploy your favorite decentralized applications. We have, out of the gate, support for a lot of proof-of-stake chains that you can run from the comfort of your couch, at significantly lower cost than what you would otherwise pay for a cloud deployment today. And some of the cool applications we're launching the Supermini with are Orchid, the VPN node, and Sentinel, another incredible VPN server, so you can run a VPN node directly from your house and totally skip the big, you know, big cloud. And another cool application we're seeing for the Supermini is providing an inference layer for the long-range network that Helium has created. For those of you that don't know what Helium is: Helium is a beautiful, you know, project that introduces a hotspot, a long-range hotspot with about a ten-mile radius, and I have one sitting in my house.

What this hotspot does is create a sort of mesh network for low-powered IoT devices to connect directly and wirelessly, using this hotspot. And the Supermini, when it's connected to a Helium device, provides the inference layer, because a lot of these devices are super low-powered, and they're designed to be; they're designed to just do the sensing and acting, and they don't have the kind of intelligence baked in at the edge to do any meaningful inference. So the inference layer is what the Supermini solves for. So imagine: now you're essentially creating an edge infrastructure that skips the big telcos and the big clouds altogether. That significantly reduces the cost envelope when it comes to deploying these edge IoT devices. That's just one of the super cool use cases for the Supermini. So, depending on what angle you want to look at it from: if you're a hardware hacker, you want a Supermini at home, because this is your app store, essentially, for your house, and it's extremely modular; you can literally plug out and plug in different boards that we'll be releasing. Or if you're someone that cares about sovereignty and privacy, you know, having Superminis at home gives you this opportunity to control your own data and have the data where you want it deployed. I don't know if you saw the docs, but we have this deployment guide for deploying a Matrix chat server. So imagine having Superminis, you know, in your house and my house, and we're both running Matrix on them, and we can connect directly in a peer-to-peer manner and have voice over IP, fully secure, end-to-end encrypted communications that skip the cloud altogether. Right.
So there are a lot of interesting use cases that we're unlocking with the Supermini that focus on sovereignty of both your compute and your data, and in doing so you're contributing to, or building, the next layer of the web, the web three for the edge, right? So, yeah, that's really what the Supermini in the box looks like. I'd actually really like to get my hands on one of those. How do I do that? We are shipping fifteen of these devices in five weeks. If you want to get one of these devices, you can go to akash.network, the Supermini reserve page, and put your name on it. The first fifteen we're shipping are to do a few tests, and in order to qualify for these boxes you need to have some level of Linux skills. So our goal with the initial shipment is really to get that feedback and, you know, work with you to see how it's going to function in the field. And we're shipping about a hundred of these devices in May, and more in the coming months. So we're doing a phased release, and yeah, really, get in touch and we'll figure out how to ship you one. Great. The next question is just something I'm kind of curious about at this point. I'm one of those weird people that just has random computing resources sitting in...

...their house not doing anything. I have, I guess, a sixteen-core compute node waiting to do something. What's the process for setting it up and getting it on the Akash network, and what can I expect from that process? The process is extremely simple. If you go to docs.akash.network, there's a guide on how to become a provider on the network. And as for what users say on Twitter: if you go to akash.network you'll see a lot of feedback on Twitter as to what the experience is like, and almost all of them found deploying, or unlocking the compute that you have at home, way simpler than deploying on the cloud. So it's a really simple process, and we've created tons of tooling around being a provider and operating efficiently. We have a tool called DISCO. DISCO stands for Decentralized Infrastructure for Serverless Computing Operations. It's essentially a framework for providers to manage their nodes. It gives you amazing tooling, all the way from Kubernetes to observability tooling, Grafana and Prometheus, and all kinds of tools you need to, like, maintain your compute and be informed when things happen. And in that process, you know, you can install the Akash provider node on your machine and just, you know, unlock it. As simple as that. Perfect. First off, thank you for coming on the show. You were a recommendation from one of our fans, Can, from Cipher Core. He said he'd like to hear from y'all and recommended we look into it, and I'm glad we did. Yeah, I love Can. I met Can, I believe, through Twitter, and he's one of the providers on the Akash network, and he did exactly what you described. You asked me earlier how easy it is to get on the Akash network; once he experienced the software, it was obvious that this is something he wants to spend his time on, and I believe that's how we got introduced. All right.
Well, thanks again for coming on the show. Colin, go ahead. Sorry, just one more thing: have there already been user stories that have come out of this that are really compelling, where somebody actually used the system to do some cool scientific operation, for instance? What kind of experiences are people having so far, besides, you know, lending their resources; actually consuming the resources? Is there anything on that? Oh yeah, yeah, tons of them. Again, on the Akash network Twitter there's a special love that really describes user experiences. I think my favorite one would be Alex Ellis, who created OpenFaaS, and OpenFaaS is the most widely adopted function-as-a-service platform. His cool story was that he created this cloud load balancer, I think that's how you'd describe it. It's called inlets. What it does is expose an IP address from your house using a load balancer sitting somewhere on the cloud, so that you can share a service sitting inside the NAT in your house, right? And he did that using Akash,...

...and his experience was that he was blown away, because now, all of a sudden, he has this use case of just exposing your local computer to the globe, right? And that was fairly hard to do without Akash, because of the permissionless nature of it and the extreme programmability that a permissionless mechanism provides, making it so that he can bake this, like, decentralized load-balancer model, I think you'd call it, right into his core software. Next thing I know, he's running all kinds of things, all kinds of machine learning inferences. He sent me a picture that colorized my photograph from a real black-and-white picture. People are doing all kinds of cool things, and the thing people are most excited about is the privacy protection, the privacy guarantees, of running Matrix servers. Right? So we ran a challenge where we wanted people to get their own Matrix domain. If you use Riot or Matrix, when you sign up, your username usually lives on the default home server. So instead of that, get your own name, right, and create enough nodes in Matrix that you're communicating privately from one server to another in a peer-to-peer manner. That's incredible; we got a lot of excitement from the public for that kind of deployment. So a lot of the people that are excited about Akash are machine learning folks and, you know, privacy-seeking folks. Awesome. I actually like the fact that people are being incentivized and coming up with new ways to build around Matrix servers. I like that project a lot and connected with them through work on the idea. Yeah, yeah. Because, I don't know if you know this, but sixty percent of Ethereum is also running on AWS right now, which is kind of sad.
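The inlets-style pattern mentioned above, a service behind NAT made reachable through a relay that has a public IP, boils down to TCP forwarding. Here is a minimal, illustrative relay (not inlets' actual code; a real tunnel adds an outbound control connection from the NAT'd host so no inbound ports are needed, plus TLS and authentication):

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source closes, then close the sink."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

def forward(listen_port: int, target_host: str, target_port: int) -> int:
    """Accept connections on listen_port and relay them to the target,
    the way an exit node relays public traffic to a hidden service.
    Returns the bound port (pass 0 for an ephemeral port)."""
    server = socket.socket()
    server.bind(("127.0.0.1", listen_port))
    server.listen()
    port = server.getsockname()[1]

    def accept_loop():
        while True:
            client, _ = server.accept()
            upstream = socket.create_connection((target_host, target_port))
            # One thread per direction: client -> upstream, upstream -> client.
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return port
```

Deployed on a provider with a public address, a relay like this is what lets a box in your living room serve traffic to the world.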
That's one of the reasons why we want to fix this. The big goal for DISCO is... well, people are trading progress for convenience, and of course AWS and Google are super convenient for deploying your workloads, and they can offer that convenience because we don't have those capabilities in a decentralized manner. One of the big reasons we started DISCO was to solve that problem, and, you know, in that journey we ended up creating a node that you can deploy in your house, extremely convenient, extremely cost effective, more so than the cloud, in the hope that we can decentralize the supposedly decentralized world of today. So, you know, it doesn't really matter how decentralized the network is; it really comes down to your physical deployment. If you're running on Amazon, I don't consider that decentralized at all, especially when the majority of the network is running on a single cloud provider. I would absolutely agree with that. And the worst part is, no one is using SGX. So in case some adversary wants to take over a given network, whoever has physical access can simply dump the keys. I'm not saying that Jeff Bezos is going to do that, or that our current administration has any intention to shut down Ethereum, but if they intend to do so, it can be done; it's a matter of math, right? You don't have that problem on Akash, because even if someone has physical access, SGX is going to make sure that that's impossible to pull off.

So where do people go? You've said it a few times, but just to recap at the end of the episode: where do people go to learn more and get ahold of you guys? akash.network is where you find us. We're very active in our chat; go to akash.network/chat. If you want to get started, docs.akash.network is your best place. People really like our docs; we work very hard on them, and there's a getting-started guide linked there, so start using it. The best way to learn about Akash is to use it, and once you use it, everybody that uses it has their own description of what Akash is. That's always exciting for me to hear. So instead of me describing what it is, you use it and you tell me what you think it is. Outstanding. I'll go and start setting up my random nodes today and let you know. And do set up your Matrix servers. We do have an incentivized testnet coming out, by the way, in about two weeks. So if you're someone that likes to operate, or has aspirations to, you know, gain some prosperity in this new proof-of-stake world, please join our incentivized testnet. It's a fun exercise: it's a set of challenges that walk you through how to become a provider, how to start using Akash, and how to start validating and whatnot. And of course, incentivization means you get tokens. So, you know, it's a fun game. In the last challenge we had, the founder challenge, people had so much fun. At peak we were the fastest-growing decentralized cloud, with about forty-five or so providers, fully decentralized, and we had about three hundred applications actively deployed on Akash. Right now we have something close to that, but a lot of that came through the incentivized testnet program, so it's a lot of fun. Awesome. Thanks for coming on again; a good episode. Thanks. Yeah, thanks so much. This was fun.
