windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v193/Hydrus.Network.193.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v193/Hydrus.Network.193.-.Windows.-.Installer.exe
os x
app: https://github.com/hydrusnetwork/hydrus/releases/download/v193/Hydrus.Network.193.-.OS.X.-.App.dmg
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v193/Hydrus.Network.193.-.OS.X.-.Extract.only.tar.gz
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v193/Hydrus.Network.193.-.Linux.-.Executable.tar.gz
source
tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v193.tar.gz
I had a good week. I am feeling much better and am generally back to schedule. I fixed a lot of bugs and improved several different things. Unfortunately, my plans for IPFS integration encountered a small snag.
Incidentally, I played through Human Resource Machine this week and really enjoyed it. If you are a software engineer, or are looking to become one, check it out.
the client's servers are off by default now
You can now switch off the client's local server under file->options->local server. You may not have heard of this server before; it is a mostly experimental http server that was useful for exporting files in earlier versions. I will do more with it in future, but regular users don't need it now that we have drag-and-drop export.
The same is true for the local booru, under services->manage services->local booru.
For a new client, these servers now start off. I never liked that the 'welcome to hydrus' message was immediately followed by a firewall prompt, especially when most people didn't even need those servers, so that message is now gone as well.
ipfs
I wanted to add simple hydrus->ipfs file upload this week, but in working on that I discovered that I had misunderstood what an ipfs multihash (the Qm… address) represented. I thought it was the sha256 of the file contents, but it is actually the sha256 of an internal ipfs-specific data node that leads to and eventually becomes the whole file. I am not 100% familiar with IPFS's inner workings, but I think I appreciate why they do it this way.
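To illustrate the difference, here is a minimal python sketch, assuming a local IPFS daemon exposing the default HTTP API on 127.0.0.1:5001 and the 'requests' library (the file path is just an example):

import hashlib
import requests

path = 'my_file.jpg' # hypothetical example file

with open( path, 'rb' ) as f:
    
    file_bytes = f.read()

# hydrus's native identifier: the sha256 of the raw file bytes
sha256_hex = hashlib.sha256( file_bytes ).hexdigest()

# ipfs's identifier: the multihash of the root dag node, which wraps the
# file data in ipfs-specific chunk/link structure
response = requests.post( 'http://127.0.0.1:5001/api/v0/add', files = { 'file' : file_bytes } )

multihash = response.json()[ 'Hash' ] # e.g. 'QmSomething'

# these will not match; the multihash is not sha256( file_bytes )
print( sha256_hex )
print( multihash )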
Unfortunately, I had hoped and assumed I could nicely overlap hydrus's cache of sha256 hashes with their file list and hence easily sync and push and interpolate and query that metadata over the IPFS API. But it seems that figuring out the IPFS multihash for a particular file isn't trivial. This adds a layer of complexity to anything I want to do, as I'll have to cache sha256->ipfs_multihash pairs and won't be able to generally query the IPFS network for arbitrary files.
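The cache itself could be as simple as a single lookup table. A minimal sketch, with hypothetical table and column names (not hydrus's actual schema):

import sqlite3

db = sqlite3.connect( 'client.db' )

db.execute( 'CREATE TABLE IF NOT EXISTS ipfs_multihashes ( sha256 BLOB PRIMARY KEY, multihash TEXT );' )

def set_multihash( sha256_bytes, multihash ):
    
    db.execute( 'REPLACE INTO ipfs_multihashes ( sha256, multihash ) VALUES ( ?, ? );', ( sqlite3.Binary( sha256_bytes ), multihash ) )
    db.commit()

def get_multihash( sha256_bytes ):
    
    # a miss returns None; there is no way to derive the multihash from
    # the sha256 alone, so a miss means re-adding the file to ipfs
    result = db.execute( 'SELECT multihash FROM ipfs_multihashes WHERE sha256 = ?;', ( sqlite3.Binary( sha256_bytes ), ) ).fetchone()
    
    return None if result is None else result[0]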
So, I will keep working on IPFS, with the aim of getting file upload and virtual 'directory' upload working (so you can IM your friend the IPFS multihash, or post it on a board somewhere, and hence do ad-hoc p2p), and pause there. This was generally my plan anyway, but some of the fun speculative things we were previously discussing just might not be technically possible. Distributing repository update files and doing other client-to-client supplemented network comms is still doable, and something I will return to as it becomes necessary.
changes in how jobs are scheduled
I am generally dissatisfied with how slow some of the client's operations have become. Some big jobs trigger when I don't want them to, or don't when I do, and some simple tag autocomplete queries take just too long. A lot of this code is messy.
So, I have spent a bit of time improving how these things trigger in real world scenarios. Let me know if anything bad or good happens. Particularly, big jobs should cancel more quickly (especially on slower computers), and subscriptions should trigger much more snappily now, including just a couple of seconds after client boot. If that turns out to be annoying, let me know.
I want to do more here, in particular serialising big jobs to a single daemon thread and giving the user more power to say 'sync this repo now'-type stuff. I also have an idea to break the main client.db into smaller files and 'service' folders that will be quicker to maintain. The service dbs will handle their own sync, not locking up everything else while they do so, and you'll be able to delete them and export and import them to other clients without having to do a ton of reprocessing. (Hence I'll be able to put up a synced ptr .7z that anyone can import.)
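As a rough sketch of how those separate service files might hang together, using sqlite's ATTACH (filenames and schema here are hypothetical, not a final design):

import sqlite3

db = sqlite3.connect( 'client.master.db' )

# each service gets its own file, so it can sync on its own, be deleted,
# or be copied to another client without touching anything else
db.execute( "ATTACH DATABASE 'client.service.ptr.db' AS ptr;" )

# queries address the attached file through its schema name
db.execute( 'CREATE TABLE IF NOT EXISTS ptr.mappings ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) );' )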
thoughts on speeding up tag autocomplete results
The current tag autocomplete system worked great when there were only 500,000 tags, but it really groans now. I have optimised and optimised, but some queries still just take too long.
I know what the problem is, and I know the current system won't ever be able to produce accurate results quickly, so I want to take a little time in the next few weeks (probably as part of the client.db breakup above, and likely as soon as I am done with IPFS) to create a new mappings cache layer that'll allow me to store and maintain accurate tag counts at all times. I expect it to slow down tag processing a bit, increase the total db size a bit, and reduce autocomplete results generation in almost all cases to something like 250ms, possibly much quicker.
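A minimal sketch of the precomputed counts idea (names here are hypothetical, and the real cache would be more involved):

import sqlite3

db = sqlite3.connect( ':memory:' )

db.execute( 'CREATE TABLE mappings ( tag_id INTEGER, hash_id INTEGER );' )
db.execute( 'CREATE TABLE tag_counts ( tag_id INTEGER PRIMARY KEY, count INTEGER );' )

def add_mapping( tag_id, hash_id ):
    
    # pay a little extra at write time to keep the count accurate
    db.execute( 'INSERT INTO mappings VALUES ( ?, ? );', ( tag_id, hash_id ) )
    db.execute( 'INSERT OR IGNORE INTO tag_counts VALUES ( ?, 0 );', ( tag_id, ) )
    db.execute( 'UPDATE tag_counts SET count = count + 1 WHERE tag_id = ?;', ( tag_id, ) )

def get_count( tag_id ):
    
    # autocomplete then does a single indexed lookup instead of a
    # COUNT(*) scan over millions of mapping rows
    result = db.execute( 'SELECT count FROM tag_counts WHERE tag_id = ?;', ( tag_id, ) ).fetchone()
    
    return 0 if result is None else result[0]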
Of course, increasing the db size is something we absolutely don't want right now, so I'll be doing all this one step at a time. Expect the coming db updates to be pretty CPU and HDD heavy.
Once the db runs better, I will return to adding a suggested tag control and writing a faster dupe search algorithm.
full list
- the client's local server and local booru can be turned off from their respective management panels, and new clients will now initialise with both off
- if the local server or the local booru are not running, their copy/share commands won't appear in right-click menus
- the welcome dialog is now a simpler popup message
- incidence-sorted tag lists are now sub-sorted lexicographically a-z
- pasting many tags that have siblings into the manage tags dialog will ask if you want to always prefer the sibling, saving time
- added a 'clear deleted file records' button to the local file service on the review services window
- idle mode now cannot naturally engage within the first two minutes after client boot
- the autocomplete search logic will no longer count namespace characters towards the autocomplete character threshold, so typing 'character:a' will not typically trigger a (very laggy) full search
- putting a '*' anywhere in an autocomplete search_text will force a full search, ignoring the a/c character threshold (there is a small sketch of this logic after this list)
- moved some specific 'give gui time to catch up' pause code to the generalised pause/cancel code that a lot of stuff uses, so big jobs should generally be a bit more polite
- split the daemon class into two: one for big jobs that remains polite, and another for small jobs that triggers regardless of what else is going on. this should increase responsiveness in a number of scenarios
- fixed some bad wal failure detection, and hence spurious no-wal file creation, on some instances of db cursor reinit (usually after service modification). since many clients will now have superfluous no-wal files, existing no-wal files will be deleted on db update
- some external storage location errors are improved
- some internal and external storage location init is improved
- if an error is detected in the external storage location manager, it will not attempt to rebalance again until the client is rebooted
- improved some upnp error catching
- cleaned up some misc shutdown thread-gui interaction error spam
- did some prep work on a future rewrite of the daemon jobs pipeline
- split up some mixed file/data/404 'stuff was missing' exception code
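Here is that small autocomplete threshold sketch, with a hypothetical function name and a made-up default threshold:

def should_do_full_search( search_text, char_threshold = 2 ):
    
    # a '*' anywhere forces a full search, ignoring the threshold
    if '*' in search_text:
        
        return True
        
    # namespace characters no longer count, so 'character:a' is measured
    # as one character, not eleven
    if ':' in search_text:
        
        ( namespace, subtag ) = search_text.split( ':', 1 )
        
    else:
        
        subtag = search_text
        
    return len( subtag ) >= char_threshold

# e.g. should_do_full_search( 'character:a' ) -> False
#      should_do_full_search( 'sam*' ) -> True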
next week
I'll try to finish off single-file IPFS upload, and I'll prep for this big db rewrite.