I have spent the past few weeks cleaning up the hashpool codebase. While sprinting toward ecash proofs, I basically extracted only the pieces I needed into the SRI codebase. This is a messy arrangement, and I hate working in a messy environment: it puts a drag on everything you try to build. Before I start working toward redemptions, I want to clean up issuance as much as possible. That means putting the pieces back into the cashu wallet in a way that makes sense for my use case. When I have the wallet working the way I want, I will open a PR to cdk. I'm still a ways off, though.
I added wallet functions to perform a bunch of tasks without making any network calls (see the sketch after this list):
- add and retrieve keysets
- generate blinded secrets
- keep count of generated tokens
- store blinded secrets in the wallet
  - db support not yet implemented
- generate proofs
  - combine blinded secrets with blinded sigs
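To make the shape of this concrete, here is a minimal sketch of that network-free surface. Every name and type here is an illustrative stand-in rather than cdk's actual API, and the blinding math is elided:

```rust
use std::collections::HashMap;

// Illustrative stand-ins, not the real cdk types.
type KeysetId = String;
type BlindedSecret = Vec<u8>;
type BlindedSignature = Vec<u8>;

struct Proof {
    keyset_id: KeysetId,
    amount: u64,
    // unblinded signature, secret, etc. elided
}

#[derive(Default)]
struct LocalWallet {
    keysets: HashMap<KeysetId, Vec<u8>>,       // keyset id -> serialized keys
    pending: HashMap<BlindedSecret, KeysetId>, // secrets awaiting signatures
    counter: u64,                              // running count of generated tokens
}

impl LocalWallet {
    fn add_keyset(&mut self, id: KeysetId, keys: Vec<u8>) {
        self.keysets.insert(id, keys);
    }

    fn get_keyset(&self, id: &KeysetId) -> Option<&Vec<u8>> {
        self.keysets.get(id)
    }

    /// Generate and store a blinded secret; no network calls.
    /// (In-memory only for now; db support is still TODO.)
    fn generate_blinded_secret(&mut self, keyset_id: KeysetId) -> BlindedSecret {
        self.counter += 1;
        let secret = self.counter.to_be_bytes().to_vec(); // placeholder for real blinding
        self.pending.insert(secret.clone(), keyset_id);
        secret
    }

    /// Combine a stored blinded secret with the mint's blinded signature
    /// into a proof (real unblinding math elided).
    fn generate_proof(
        &mut self,
        secret: &BlindedSecret,
        _sig: BlindedSignature,
        amount: u64,
    ) -> Option<Proof> {
        let keyset_id = self.pending.remove(secret)?;
        Some(Proof { keyset_id, amount })
    }
}
```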
I also deleted a bunch of code:
- logic I moved into the cashu wallet
- shared state between SRI components
- unnecessary fields, functions, Options, Mutexes, lifetimes
God I love deleting code!
There are a couple of things I need to flag now and reevaluate or fix later.
I can't assume that the active keyset used to generate a blinded secret will still be active when I go to combine the secret and signature to make a proof. I'm not sure yet whether I can still generate that ecash token or whether it will be lost, and whichever way that goes will determine how to handle this edge case.
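The edge case itself is easy to state in code, even if the handling isn't decided yet. A hypothetical sketch of just the check:

```rust
// Hypothetical sketch: record the keyset id at blinding time, then
// compare it against the mint's active keyset at proof-construction time.
enum ProofOutcome {
    /// Keyset unchanged: combine secret + signature into a proof as usual.
    Active,
    /// Keyset rotated underneath us: is the token still redeemable, or lost?
    Rotated,
}

fn classify(blinded_under: &str, active_now: &str) -> ProofOutcome {
    if blinded_under == active_now {
        ProofOutcome::Active
    } else {
        ProofOutcome::Rotated
    }
}
```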
I also reinterpreted the cashu protocol a bit. I am using the mining share header hash output as the quote identifier.
Usually a BOLT11 invoice would go here. I think the header hash is the most direct translation to a LN invoice for my use case. Instead of providing an LN invoice and validating it with a proof of payment, the pool validates the miner's proof of work by checking the block template they are mining on and the difficulty of the block header hash. I think this is a very straightforward translation of the concept, but I should spend some more time teasing out all of the implications of this architecture.
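Roughly, the header hash takes the structural slot a BOLT11 invoice normally fills, and "proof of payment" becomes a proof-of-work check. The types and names below are hypothetical, not the actual cdk/hashpool ones:

```rust
/// Hypothetical quote shape: the share's block header hash stands in
/// where a BOLT11 invoice would normally go.
struct MintQuote {
    quote_id: String, // hex-encoded block header hash of the mining share
    amount: u64,      // ehash to issue for this share
}

/// Instead of checking a LN payment preimage, the pool validates the
/// miner's proof of work: the share must build on the pool's current
/// block template, and the header hash must meet the share target.
fn validate_share(
    header_hash: &[u8; 32],
    share_target: &[u8; 32],
    on_current_template: bool,
) -> bool {
    // Interpreting both as big-endian 256-bit integers, the hash must
    // not exceed the target; a byte-wise lexicographic compare does this.
    on_current_template && header_hash.as_slice() <= share_target.as_slice()
}
```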
It took a bit of plumbing in SRI to get the block header hash from the share validation code to the success message. I'm not sure whether it's more appropriate to calculate this value once and pass it around within the codebase or to regenerate it at the point where it's needed, but I don't think I need to answer that any time soon; we can optimize later. For now, passing around the block header hash makes a ton of sense to human developers, so I'm happy with this construction.
A little voice in the back of my mind tells me I don't need to worry too much about implementation efficiency, because the end user will have the pool difficulty setting as a knob to turn that addresses performance problems at the cost of increased payout variability for miners. For pleb miners and for decentralization of block template production, I think this is a great tradeoff. In terms of attracting developer attention, I think making the design more understandable to meatbags with spongy human brains like myself is also a great tradeoff.
I am pretty close to capturing all of the wallet state in cdk but I am still generating one token per share with amount 1. I wrote a function to calculate the amount of ehash to issue for each share. I call this function and log the 'work' value along with the block header hash.
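I won't reproduce the real formula here, so treat the following as an assumed placeholder: one plausible shape is to count the leading zero bits of the header hash as a proxy for the share's work. The actual calculation in hashpool may differ.

```rust
/// Assumed placeholder for the ehash-amount calculation: count the
/// leading zero bits of the share's header hash as a proxy for work.
fn calculate_work(header_hash: &[u8; 32]) -> u32 {
    let mut work = 0;
    for byte in header_hash {
        if *byte == 0 {
            work += 8; // a whole zero byte contributes 8 zero bits
        } else {
            work += byte.leading_zeros(); // zero bits in the first nonzero byte
            break;
        }
    }
    work
}

fn main() {
    let mut hash = [0u8; 32];
    hash[2] = 0x1f; // two zero bytes, then 0b0001_1111
    // Log the 'work' value alongside the header hash, as described above.
    println!("work={} (expected 19)", calculate_work(&hash));
}
```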
I tried to pass this value to cdk during token minting, but despite initializing the mint with one key in the keyset, it still tries to mint a bunch of tokens. Before I can mint this amount of ehash, I need to revisit the single-keyset-key workaround. I ran into trouble with SRI encodings in the past, but I think the solution is to simply calculate the maximum number of keys needed to represent every possible difficulty and create a (largeish) fixed-length struct to store those keys. This is my next development goal for after the holidays.
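The arithmetic behind the fixed-length idea: cashu amounts are powers of two, so any work value that fits in a u64 decomposes into at most 64 amounts, and a keyset with one key per bit position covers every case. A sketch, with a stand-in key type:

```rust
/// Stand-in for the real mint public key type.
type PubKey = [u8; 33];

/// One key per power of two: keys[i] signs amount 2^i, so 64 keys
/// cover every work value representable in a u64.
struct FixedKeyset {
    keys: [PubKey; 64],
}

/// Decompose a share's work value into the power-of-two amounts to mint.
fn amounts_for(work: u64) -> Vec<u64> {
    (0..64)
        .filter(|i| work & (1u64 << i) != 0)
        .map(|i| 1u64 << i)
        .collect()
}
```

For example, amounts_for(19) yields [1, 2, 16], which sums back to 19.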
We're planning a trip after Christmas to a snowy place for the kids to enjoy. I want to minimize my workload during this time without missing a day, so I'll probably start standing up the hashpool.dev website repo, unless I can find something even easier. =)
I haven't started work on the mpsc design, but I have a great code tour from Gary that follows the existing design through the SRI codebase. The redesign should be cleaner; I need to grow my understanding of the mpsc architecture before I can prioritize this work.
Merry Christmas, y'all. We are winning. Can't stop now!