windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v189/Hydrus.Network.189.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v189/Hydrus.Network.189.-.Windows.-.Installer.exe
os x
app: https://github.com/hydrusnetwork/hydrus/releases/download/v189/Hydrus.Network.189.-.OS.X.-.App.dmg
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v189/Hydrus.Network.189.-.OS.X.-.Extract.only.tar.gz
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v189/Hydrus.Network.189.-.Linux.-.Executable.tar.gz
source
tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v189.tar.gz
I had a pretty good week. I didn't get around to everything I wanted, but I made significant improvements to some formerly big and slow gui and db stuff.
some slow things are faster
I have split the analyze call that usually happens on db update into a lot of smaller jobs that'll be tucked in with the other idle maintenance routines. I don't expect you will ever notice it again. This also means that the v188->v189 update step should only take about one second!
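To give a rough idea of the shape of this change, here is a minimal sketch in Python, assuming a simple per-table analyze job; the table names and the loop are hypothetical illustrations, not the actual hydrus maintenance code:

    import sqlite3
    
    def analyze_table( db_path, name ):
        
        # analyze just one table, rather than the whole database in one go
        db = sqlite3.connect( db_path )
        db.execute( 'ANALYZE ' + name + ';' )
        db.close()
    
    # instead of one big 'ANALYZE;' on update, each table becomes its own
    # quick job that idle maintenance can run (and be interrupted between)
    for name in [ 'files_info', 'mappings', 'tags' ]: # hypothetical names
        
        analyze_table( 'client.db', name )

Since each job is small, the client can check for user activity between them and bail out quickly.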
I have also improved how the db prepares for a vacuum. It seems to go a little faster now, and if you run Windows, your next vacuum will bump the db's page size up to 4096, which should reduce first-load hdd latency. Let me know if you think your Windows system is running searches faster in the coming week!
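The page size change only takes effect when the database file is rebuilt, which is exactly what a vacuum does, so in SQLite terms it is roughly this (a minimal sketch, not the actual client code):

    import sqlite3
    
    db = sqlite3.connect( 'client.db' )
    db.isolation_level = None # VACUUM cannot run inside a transaction
    
    # a new page_size only applies when the database file is rebuilt,
    # which is what VACUUM does, so it is set just beforehand
    db.execute( 'PRAGMA page_size = 4096;' )
    db.execute( 'VACUUM;' )
    
    db.close()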
Also, I sped up the gui code that adds new thumbnails to an existing set of results, like for file import pages. It was running fast with a small number of thumbnails, but not when there were many thousands. I figured out what was taking so long and rewrote it, and for an example page of 5,000 richly tagged results, I managed to cut ~266ms of processing time down to 3ms! I hope this speeds up some people's large imports in future.
Server and client backup code will also work a hell of a lot faster if you back up onto an existing backup: files that have the same size and 'last modified' date (which includes all your regular files and thumbnails, which the client does not change) will be skipped rather than overwritten, saving a lot of time.
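The skip test is just a comparison of the two files' stat info. A minimal sketch of the idea, assuming a simple mirror function (the name and details are mine, not the client's):

    import os
    import shutil
    
    def mirror_file( source, dest ):
        
        if os.path.exists( dest ):
            
            s = os.stat( source )
            d = os.stat( dest )
            
            # same size and same modified time: assume the file is unchanged
            # (mtimes truncated to whole seconds to dodge fs precision quirks)
            if s.st_size == d.st_size and int( s.st_mtime ) == int( d.st_mtime ):
                
                return # nothing to do
            
        shutil.copy2( source, dest ) # copy2 preserves the modified time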
Also, I rejigged some maintenance code so the client can respond more quickly to a change from idle to not-idle when it is in the midst of something big.
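The new pauser object from the full list below is the main tool here. It is basically a 'have I been working too long without a break?' check that big jobs call in their inner loops. Something like this minimal sketch, with names and timings that are my own guesses:

    import time
    
    class Pauser( object ):
        
        def __init__( self, work_period = 0.1 ):
            
            self._work_period = work_period # seconds of work between breaks
            self._last_pause = time.time()
        
        def pause( self ):
            
            # called regularly inside a big job; sleeps briefly every so often
            # so the gui (and the idle/not-idle check) gets a chance to run
            if time.time() - self._last_pause > self._work_period:
                
                time.sleep( 0.001 )
                
                self._last_pause = time.time()
    
    # a big job then just sprinkles this through its loops:
    # pauser = Pauser()
    # for item in lots_of_work:
    #     process( item )
    #     pauser.pause()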
full list
- split the big analyze db calls into individual table/index calls and moved them from update code to the normal maintenance routines
- on vacuum, both the client and server dbs will now bump their page size up to 4096 if they are running on windows (server vacuum is triggered by running a backup)
- vacuum should be a slightly faster operation for both the client and server
- boosted the db cache significantly; we'll see if it makes much of a difference
- the way the selection tags control updates its taglist on increases to its media cache is massively sped up; an update on a 5,000-thumbnail-strong page now typically works in 3ms instead of ~250ms, so large import pages should stream new results much more quickly
- sped up some slow hash calculation code that was lagging a variety of large operations
- some hash caching responsibility has moved around to make it available for the add_media_results comparison, which now typically works in sub-millisecond time (it was about 16ms before)
- some sorted media list index recalculation now works faster
- some internal media object hashing is now cached, so sorted list index regeneration is a bit faster
- some medialist file counting is now superfast
- wrote a new pauser object to break big jobs up more conveniently and reduce gui choking
- the repo processing db call now uses this pauser
- some copy and mirror directory functions now use this pauser
- backup and restore code for the client now skips re-copying files if they share the same last modified date and file size
- backup code for the server now skips re-copying files if they share the same last modified date and file size
- http cannotsendrequest and badstatusline errors will now provoke two reattempts before being raised (a sketch of this retry idea follows the list)
- socket error 10013 (no access permission, usually due to a firewall) is caught and a nicer error produced
- socket error 10054 (remote host reset connection) is caught, and the connection is reattempted twice before being raised
- the old giphy API is gone, so I have removed giphy
- forced shutdown due to system exit/logoff is handled better
- pubsub-related shutdown exceptions are now caught and silenced
- an unusual shutdown exception is now caught
- fixed a copy subtag menu typo
- cleaned some misc hydrus path code
- tags that begin with a colon (like ':)' ) will now render correctly in the media canvas background
- some misc code cleanup
- dropped flvlib since ffmpeg parses flv metadata better anyway
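As mentioned in the http/socket items above, the new retry logic is a simple 'try again a couple of times' wrapper around the request. It has roughly this shape, using the Python 2 httplib names of the era; the wrapper itself is my own sketch, not hydrus's actual code:

    import httplib
    import socket
    import time
    
    MAX_ATTEMPTS = 3 # the original try plus two reattempts
    
    def request_with_retries( do_request ):
        
        for attempt in range( MAX_ATTEMPTS ):
            
            try:
                
                return do_request() # a callable that makes the actual request
                
            except ( httplib.CannotSendRequest, httplib.BadStatusLine ):
                
                if attempt == MAX_ATTEMPTS - 1:
                    
                    raise # out of reattempts, let the error propagate
                
                time.sleep( 1 )
                
            except socket.error as e:
                
                # 10054 is windows for 'remote host reset the connection'
                if e.errno != 10054 or attempt == MAX_ATTEMPTS - 1:
                    
                    raise
                
                time.sleep( 1 )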
next week
I still have some things in my github queue, and my private to-do list remains stuffed, but I feel I have improved a lot of broken and slow things over the past month or so and would like to push ahead with something big and new. Thank you all for voting in the poll I set up:
https://poal.me/4bhdd6
The top results right now are:
- IPFS plugin
- suggested tags control
- faster dupe searching
So I will have a bit of a think and get going on those. I don't know how long each will take, but I think I will put about a third of my time into this stuff and see how that works out.
I will start with IPFS, although I still do not know a huge amount about it. I am fairly certain I can upload files into the network, but I cannot yet say confidently how searching and browsing (and hence getting a hash to download) will work. Anyway, I will try to get a very simple bridge between hydrus and IPFS working, just to see if it is all feasible.
Otherwise, I will keep working on bugs and cleanup.