[ home / board list / faq / random / create / bans / search / manage / irc ] [ ]

/hydrus/ - Hydrus Network

Bug reports, feature requests, and other discussion for the hydrus network.

New user? Start here ---> http://hydrusnetwork.github.io/hydrus/

Currently prioritising: simple IPFS plugin


 No.2052

windows

zip: https://github.com/hydrusnetwork/hydrus/releases/download/v193/Hydrus.Network.193.-.Windows.-.Extract.only.zip

exe: https://github.com/hydrusnetwork/hydrus/releases/download/v193/Hydrus.Network.193.-.Windows.-.Installer.exe

os x

app: https://github.com/hydrusnetwork/hydrus/releases/download/v193/Hydrus.Network.193.-.OS.X.-.App.dmg

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v193/Hydrus.Network.193.-.OS.X.-.Extract.only.tar.gz

linux

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v193/Hydrus.Network.193.-.Linux.-.Executable.tar.gz

source

tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v193.tar.gz

I had a good week. I am feeling much better and am generally back to schedule. I fixed a lot of bugs and improved several different things. Unfortunately, my plans for IPFS integration encountered a small snag.

Incidentally, I played through Human Resource Machine this week and really enjoyed it. If you are a software engineer, or are looking to become one, check it out.

the client's servers are off by default now

You can now switch off the client's local server under file->options->local server. You may not have heard of this server before–it is a mostly experimental http server that was useful for exporting files in earlier versions. I will do more with it in future, but regular users don't need it now that we have drag-and-drop export.

The same is true for the local booru, under services->manage services->local booru.

For a new client, these servers now start switched off. I never liked the 'welcome to hydrus, now your firewall will ping' message I started with, especially when most people didn't even need those servers, so that's also now gone as a result.

ipfs

I wanted to add simple hydrus->ipfs file upload this week, but in working on that I discovered that I had misunderstood what an ipfs multihash (the Qm… address) represented. I thought it was the sha256 of the file contents, but it is actually the sha256 of an internal ipfs-specific data node that leads to and eventually becomes the whole file. I am not 100% familiar with IPFS's inner workings, but I think I appreciate why they do it this way.

Unfortunately, I had hoped and assumed I could nicely overlap hydrus's cache of sha256 hashes with their file list and hence easily sync and push and interpolate and query that metadata over the IPFS API. But it seems that figuring out the IPFS multihash for a particular file isn't trivial. This simply adds a layer of complexity to anything I want to do, as I'll have to cache sha256->ipfs_multihash pairs and won't be able to generally query the IPFS network for arbitrary files.
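Since the sha256 of a file's bytes and its IPFS multihash are different values, any bridge between the two has to be recorded explicitly. Here is a rough sketch of the sha256->ipfs_multihash pair cache described above (the table and class names are hypothetical, not hydrus's real schema):

```python
import hashlib
import sqlite3

def sha256_hex(data: bytes) -> str:
    """Hydrus identifies files by the sha256 of their raw bytes."""
    return hashlib.sha256(data).hexdigest()

class MultihashCache:
    """Sketch of a sha256 -> ipfs multihash cache.

    The IPFS multihash is NOT sha256(file bytes) -- it hashes an
    internal merkledag node -- so the pairing has to be recorded
    when the file is first added to the daemon.
    """
    def __init__(self, path=':memory:'):
        self._db = sqlite3.connect(path)
        self._db.execute(
            'CREATE TABLE IF NOT EXISTS ipfs_multihashes '
            '(sha256 TEXT PRIMARY KEY, multihash TEXT)')

    def store(self, sha256: str, multihash: str):
        # the multihash would come back from the daemon's add call
        self._db.execute(
            'INSERT OR REPLACE INTO ipfs_multihashes VALUES (?, ?)',
            (sha256, multihash))
        self._db.commit()

    def lookup(self, sha256: str):
        row = self._db.execute(
            'SELECT multihash FROM ipfs_multihashes WHERE sha256 = ?',
            (sha256,)).fetchone()
        return row[0] if row else None
```

The multihash itself would only be learned the first time the file is handed to the daemon; after that, lookups are purely local.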

So, I will keep working on IPFS, with the aim of getting file upload and virtual 'directory' upload working (so you can IM your friend the IPFS multihash, or post it on a board somewhere, and hence do ad-hoc p2p), and pause there. This was generally my plan anyway, but some of the fun speculative things we were previously discussing just might not be technically possible. Distributing repository update files and doing other client-to-client supplemented network comms is still doable, and something I will return to as it becomes necessary.

changes in how jobs are scheduled

I am generally dissatisfied with how slow some of the client's operations have become. Some big jobs trigger when I don't want them to, or don't when I do, and some simple tag autocomplete queries take just too long. A lot of this code is messy.

So, I have spent a bit of time improving how these things trigger in real world scenarios. Let me know if anything bad or good happens. Particularly, big jobs should cancel more quickly (especially on slower computers), and subscriptions should trigger much more snappily now, including just a couple of seconds after client boot. If that turns out to be annoying, let me know.

I want to do more here, in particular serialising big jobs to a single daemon thread and giving the user more power to say 'sync this repo now'-type stuff. I also have an idea to break the main client.db into smaller files and 'service' folders that will be quicker to maintain. The service dbs will handle their own sync, not locking up everything else while they do so, and you'll be able to delete them and export and import them to other clients without having to do a ton of reprocessing. (Hence I'll be able to put up a synced ptr .7z that anyone can import.)
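One way the 'service folder' idea could work at the db level is SQLite's ATTACH, which lets several database files behave as one connection, so a per-service file can be deleted or copied to another client without touching the rest. A minimal sketch with entirely hypothetical file and table names:

```python
import sqlite3

# Sketch: a main db plus a separately-attachable per-service db.
main = sqlite3.connect(':memory:')
main.execute('CREATE TABLE files (hash TEXT PRIMARY KEY)')

# A service db lives in its own file (':memory:' stands in here), so
# it can be exported or dropped independently of the main db.
main.execute("ATTACH ':memory:' AS ptr_service")
main.execute('CREATE TABLE ptr_service.mappings (hash TEXT, tag TEXT)')

main.execute("INSERT INTO files VALUES ('abcd')")
main.execute("INSERT INTO ptr_service.mappings VALUES ('abcd', 'series:example')")

# Cross-db joins still work while both are attached.
row = main.execute(
    'SELECT tag FROM ptr_service.mappings NATURAL JOIN files').fetchone()
```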

thoughts on speeding up tag autocomplete results

The current tag autocomplete system worked great when there were only 500,000 tags, but it really groans now. I have optimised and optimised, but some queries still just take too long.

I know what the problem is, and I know the current system won't ever be able to produce accurate results quickly, so I want to take a little time in the next few weeks–probably as part of the client.db breakup above, and likely as soon as I am done with IPFS–to create a new mappings cache layer that'll allow me to store and maintain accurate tag counts at all times. I expect it to slow down tag processing a bit, increase the total db size by a bit, and reduce autocomplete results generation in almost all cases to something like 250ms, possibly much quicker.
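The cache layer described here amounts to paying a little on every mapping write so that counts never have to be recomputed at query time. A toy in-memory sketch of that trade (class and method names are made up for illustration):

```python
from collections import Counter

class TagCountCache:
    """Sketch: maintain accurate tag counts incrementally as mappings
    are added/deleted, so autocomplete is a cheap lookup."""
    def __init__(self):
        self._counts = Counter()
        self._mappings = set()

    def add_mapping(self, file_hash, tag):
        if (file_hash, tag) not in self._mappings:
            self._mappings.add((file_hash, tag))
            self._counts[tag] += 1  # the extra work per write

    def delete_mapping(self, file_hash, tag):
        if (file_hash, tag) in self._mappings:
            self._mappings.discard((file_hash, tag))
            self._counts[tag] -= 1

    def autocomplete(self, prefix):
        # counts are already maintained, so no big table scan is needed
        return sorted(
            (tag, n) for tag, n in self._counts.items()
            if tag.startswith(prefix) and n > 0)
```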

Of course, increasing the db size is something we absolutely don't want right now, so I'll be doing all this one step at a time. Expect the coming db updates to be pretty CPU and HDD heavy.

Once the db runs better, I will return to adding a suggested tag control and writing a faster dupe search algorithm.

full list

- the client's local server and local booru can be turned off from their respective management panels, and from now on, new clients will initialise with them switched off

- if the local server or the local booru are not running, their copy/share commands won't appear in right-click menus

- the welcome dialog is now a simpler popup message

- incidence-sorted tag lists are now sub-sorted a-z lexicographically

- pasting many tags that have siblings to the manage tags dialog will ask you if you want to always prefer the sibling, saving time

- added a 'clear deleted file records' button to the local file service on the review services window

- idle mode now cannot naturally engage within the first two minutes since client boot

- the autocomplete search logic will not count namespace characters in the autocomplete character threshold, so typing 'character:a' will not typically trigger a (very laggy) full search

- putting a '*' anywhere in an autocomplete search_text will force a full search, ignoring the a/c character threshold

- moved some specific 'give gui time to catch up' pause code to the generalised pause/cancel code that a lot of stuff uses, so big jobs should generally be a bit more polite

- split the daemon class into two–one for big jobs that remains polite, and another for small jobs that triggers regardless of what else is going on. this should increase responsiveness in a number of scenarios

- fixed some bad wal failure detection and hence no-wal file creation on some instances of db cursor reinit (usually after service modification). because this created many superfluous no-wal files, existing no-wal files will be deleted on db update

- some external storage location errors are improved

- some internal and external storage location init is improved.

- if an error is detected in the external storage location manager, it will not attempt to rebalance again until the client is rebooted

- improved some upnp error catching

- cleaned up some misc shutdown thread-gui interaction error spam

- did some prep work on a future rewrite of daemon jobs pipeline

- split up some mixed file/data/404 'stuff was missing' exception code
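The two autocomplete changes in the list above (namespace characters not counting toward the threshold, and '*' forcing a full search) could be sketched as a single predicate; the function name and threshold default are hypothetical:

```python
def should_run_full_search(search_text: str, threshold: int = 2) -> bool:
    """Sketch of the changelog's autocomplete rules:
    - a '*' anywhere forces a full search
    - namespace characters ('character:' etc.) don't count toward
      the minimum-character threshold
    """
    if '*' in search_text:
        return True
    # strip the namespace, if any, before counting characters
    subtag = search_text.rsplit(':', 1)[-1]
    return len(subtag) >= threshold
```

Under these rules, typing 'character:a' stays below the threshold even though the full string is eleven characters long, so the laggy full search never fires.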

next week

I'll try to finish off single-file IPFS upload, and I'll prep for this big db rewrite.

 No.2053

Wow, looks like big changes are coming. Thanks a lot and good luck.


 No.2054

File: 1455789342503.jpg (256.59 KB, 1200x1723, 1200:1723, 17f22191c2ce86b7ce407b1648….jpg)

speaking of ipfs, don't forget if you want pretty filenames like /ipfs/####/myfile.png (webm, jpg, etc)

you need to run ipfs add -r /myfolder/

so that it makes the tree. you could make a folder for hydrus uploads and put the files in it, like /ipfs_uploads/myfolder/myfile.png, and then use ipfs add -r ./myfolder/ or w/e. otherwise it just has /ipfs/##### which is the file.

also with uploads you need to run some HTTP request to the file (with resume if you can) and query it from the gateway (ipfs.io or gateway.ipfs.io) until you get the complete file downloaded. this will push it to mirrors as it's needed, it assumes the file will be live so people don't have to start the pull from your machine when they click a link. i do it with wget but it's not hard since you do other http requests already.
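The gateway warm-up step described above is just a full HTTP read of the file through a public gateway. A hedged Python sketch (the gateway hosts are the ones named in the post; the function names are made up for illustration):

```python
import urllib.request

GATEWAYS = ('https://ipfs.io', 'https://gateway.ipfs.io')  # public gateways

def gateway_url(multihash: str, filename: str = None,
                gateway: str = GATEWAYS[0]) -> str:
    """Build a gateway URL for a multihash, optionally in the pretty
    /ipfs/<dir hash>/<filename> form the post describes."""
    url = '{}/ipfs/{}'.format(gateway, multihash)
    if filename is not None:
        url += '/' + filename
    return url

def warm_gateway(multihash: str, chunk_size: int = 64 * 1024) -> int:
    """Pull the whole file through a public gateway so it gets cached
    there (the 'push to mirrors' step). Returns bytes read.
    Network call -- a sketch only, not exercised here."""
    total = 0
    with urllib.request.urlopen(gateway_url(multihash)) as resp:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    return total
```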

you can check the list of machines seeding with ipfs bootstrap, etc. sorry your hashing didn't work out like you thought it would. if i was you i would try to get in touch with the dev or check his video lectures more. surely it won't be hard to compare hashing if you can pull a whole file from ipfs and then run ur own hash on it. but having two databases is annoying. eh well i wish you the best of luck implementing the api or w/e.

tbh i haven't touched my hydrus database for like 6 months since i had it running on windows, but now i have an ubuntu setup accessing those files, so i usually go looking for images to post from the db files. I guess i'll try to import those files and db to the new version in OP. thanks for all your hard work. I'm sure i'll find the 'app' more fun with a fast inet and better cpu/apu. not sure if the apu/gpu can hash the images and compare them but it's something to think about in future.


 No.2055

File: 1455789519036.png (548.02 KB, 1008x720, 7:5, 173e2a5f6438a26bd733963651….png)

>>2054

*it assures the file will be live

sorry sleepy…


 No.2058

File: 1455833779543.png (409.23 KB, 1920x1080, 16:9, non-local.png)

Is there any way to remove the images that have been deleted but still appear in a non-local tag search, like the black logo images in pic related? I just find them kind of intrusive, and right click → remove doesn't fully remove them from the client. (Just takes them out of view)


 No.2059

File: 1455925601286.jpg (498.84 KB, 1024x768, 4:3, a06dfd547f3613215d4c02b558….jpg)

>>2054

>>2055

OP of these posts here again. to the dev of thread OP, i'm starting to wonder if i should just use jewgle' botnet picasa for my maymay pictures.

as i said i haven't used hydrus in many months and i last used it on win7. so yesterday I copied the contents of my old db folder from the backup to the linux folder from the zip in the post. i made sure the app ran first ofc, i noticed i had to run it as sudo in the terminal otherwise it wouldn't launch, oh well, enjoy ur root botnet app. anyway i copied the db files over and then cleaned up the two exe files from windows. the image files were 6.4 gb, the client db was 1.8gb, the db-wal 1.4gb, and the thumbnail folder was only 316mb, thank goodness.

after this i restarted the app and noticed it did have a count in the sidebar with 13k+ files, though i think 12k are untagged, fml. i tried to do a search for something but it was unresponsive after two letters typed into search. i was able to run the vacuum tool, check integrity, and folder balance. at this point i told it to setup some repos from the baka menu, it did list them in services so i restarted the app again.

at this point it asked about maintenance and since i had seen this alert before when the db was empty (before copying from backup), i told it to run, since the alert said it might take only 30 minutes… oh boy was that wrong. the maintenance script did DL some tags from the repos which took relatively little time, maybe 15 minutes, i have fast inet. but then it went to write the content to db, i noticed this was slow but it was making progress after a while so i went to bed hoping it would be done l8r.

> 2016/02/19 07:37:56: content row 0/101,468: writing

^ starting db writing

> 2016/02/19 18:40:31: content row 67,729/101,468: writing at 2 rows/s

^ last update just now.

well it wasn't, hurr durr. i know you said this app was heavy but damn. luckily since i run this in terminal i can see the log of updates. the rows/second values mostly stay in the range of 0 to 7, but rarely peak at 8-20, there is one valued at 80 though. i wonder am i running this app wrong somehow? I couldn't find any obvious way to load the old database other than from 'import db backup' but i didn't ever run the backup export on windows, so i just copied over the data, which seems to have worked. the logs showed it updating the db to various versions and such.

tl;dr why is the rows/sec value so low? should i kill the app and restart?

also i thought of a feature to implement, some kind of tag cloud system for easy browsing? here's an example that pulls words used on the boards. ofc in the hydrus app the tags would be by image counts or something. sorry to complain about the db rows but i was very surprised to see it still running at ~50k/100k when i got up and checked.

> http://catalog.neet.tv/clouds.html#a


 No.2060

>>2059

the win7 db folder was on a different hdd than the new linux project folder ofc, but the ubuttjoo handled that copy operation fine, shouldn't affect this db row stuff


 No.2061

>>2059

>i made sure the app ran first ofc, i noticed i had to run it as sudo in the terminal otherwise it wouldn't launch, oh well, enjoy ur root botnet app.

Hydrus should never be run as root. Did you extract it with '$ tar xvf Hydrus.Network.193.-.Linux.-.Executable.tar.gz'? Some archive managers extract tars without file permissions, causing the issue you described.


 No.2064

File: 1455994890216.jpg (336.98 KB, 845x1024, 845:1024, 67d2e9baa3752779ca29b2ccde….jpg)

>>2058

Those files are actually 'remote' files that you neither have nor have thumbnails for. This includes files you have deleted, but also files your client has never seen but has otherwise been told about (usually by a tag service). If you want to remove them from searches, you'll want to run a regular search on the 'local files' search domain, which restricts results to only those files on your hard drive.

For most purposes, most users have no reason to run an 'all known files' search. You'll also find 'local files' searches much faster.

Let me know if I haven't explained that well.

>>2059

I am sorry you are experiencing such bad lag. This increasing slowdown for many people is why I would like to split up the db. Although for most machines, processing time will increase slightly, I think slower computers will get a big boost, as I expect large latency will have less an effect in the new system.

Processing speeds for a reasonably new computer with a well defragged drive are usually in the range of 100-800 rows/s, spiking to 2,000-15,000. I don't notice much difference between Windows/Linux/OS X. Your single digit speed is unusual. I recommend that you pause repository sync via services->pause->repo sync until I can speed up my code or you can figure out why your machine is processing so slowly.

Most of the time, lag is down to a fragged, very full, or otherwise busy hard drive. Do you run defragging software? Is hydrus running off an unusual partition? Or maybe have some other hdd- or cpu-heavy software that is competing with and hence choking hydrus's access time? Having two hydrus clients trying to process at the same time, for instance, can slow things to a crawl.
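If anyone wants to sanity-check whether raw disk commit latency is the culprit, a crude timing sketch like this can help: a db transaction has to wait on at least one fsync, so tens of milliseconds per fsync would make single-digit rows/s plausible. This is an illustrative guess at the mechanism, not hydrus's actual commit pattern:

```python
import os
import tempfile
import time

def commit_latency_ms(samples: int = 20) -> float:
    """Rough estimate of per-commit disk latency: time small writes
    that are each forced to the platter with fsync, like a db commit.
    A sketch, not a proper benchmark."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.time()
        for _ in range(samples):
            os.write(fd, b'x' * 4096)
            os.fsync(fd)  # wait for the hardware, as a commit must
        return (time.time() - start) * 1000.0 / samples
    finally:
        os.close(fd)
        os.unlink(path)
```

A healthy drive usually reports well under 10ms here; a fragged, full, or busy one can be far worse.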

I have written more on this subject here:

http://hydrusnetwork.github.io/hydrus/help/reducing_lag.html

Making a profile (as I describe on that page) for a whole repository content package would probably take more than you want to wait, so you could try it just while adding a handful of local tags to some files.

It is generally safe to force kill the application's process if you want to, by the way. The db and the code are atomic, so it will recover from where it was before it started the last big job.

As for the tag cloud, something like that is definitely planned for the future. It will be in the next-feature poll the next time I have time to work on something new.


 No.2066

Hydrus, you may want to hold off on IPFS for a little bit. Implementing what you're doing now seems fine, but it is still in alpha: there are plans to make dynamic content easier to handle and to have file passthrough, so that you do not need to retain 2 copies of data but instead just a reference to one (for example a filepath or blob).

When pub/sub is implemented it should be good for distributing PTR updates to clients through IPFS, if that's your desire. Passthrough would make it so that you just have the image data stored however the hydrus client wants, and then you just tell IPFS about it through a reference; IPFS will then hash the data itself into a multihash on daemon startup and can handle the distribution of it from then on. Kind of like how DC clients used to work: they'd store filepaths, hash the files on startup (if they'd been modified), and then the files could be distributed.
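The DC-client pattern described here, rehashing on startup only when a file looks changed, can be sketched with an (mtime, size) check; the function name and cache layout are hypothetical:

```python
import hashlib
import os

def refresh_hashes(paths, cache):
    """Sketch of the DC-client pattern: keep (mtime, size, hash) per
    path and only rehash files whose mtime or size changed since the
    last run.

    `cache` maps path -> (mtime, size, sha256 hexdigest)."""
    for path in paths:
        st = os.stat(path)
        key = (st.st_mtime, st.st_size)
        cached = cache.get(path)
        if cached is not None and cached[:2] == key:
            continue  # unchanged since last startup, skip the read
        with open(path, 'rb') as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        cache[path] = (st.st_mtime, st.st_size, digest)
    return cache
```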

Worth seeing:

https://github.com/ipfs/go-ipfs/issues/875

https://github.com/ipfs/notes/issues/64

Dag format stuff:

normal

https://github.com/ipfs/specs/tree/master/merkledag

trickle

https://github.com/ipfs/go-ipfs/pull/687

https://github.com/ipfs/go-ipfs/pull/713

future stuff IPLD

https://github.com/ipfs/ipfs/issues/36

There's probably more stuff to mention but I can't think of anything. Their documentation and information is pretty sparse and fragmented at the moment since the project is under pretty active and rapid development. Hopefully that changes as they stabilize, and hopefully that will result in better APIs for you to work with. In theory this project should eventually allow people to essentially drop in a P2P system and use it how they want but it doesn't seem complete yet. So it seems like it would be very useful for Hydrus but maybe not yet, I could be wrong though maybe it's worth experimenting with now anyway to get ahead of the curve. I just don't want you to get frustrated with it now when it may be easier to deal with later.

>>2054

You can use 'ipfs add -w file1 file2 ...' if you want to retain filenames for files that are not in a directory: ipfs will create a fake directory and wrap the files in it, so you get returned a directory hash that contains all the files you added without having to put them in a directory yourself.


 No.2068

>>2064

>For most purposes, most users have no reason to run an 'all known files' search. You'll also find 'local files' searches much faster.

>Let me know if I haven't explained that well.

That makes sense, I just didn't realise the local file search would remove them from view. Thank you!


 No.2071

Just chiming in to let you know that under linux, the contents of export folders frequently just get deleted (sent to the linux/system trash).

I use hydrus to export pictures marked 'wallpaper', used to set a wallpaper slideshow. Every time I use hydrus, that folder gets deleted and I have to poke hydrus' export dialog to get it to re-export everything. You may remember I reported this to you before. I'm starting to use linux more often so this is really becoming a nuisance.


 No.2073

File: 1456051465154.jpg (150.86 KB, 1280x960, 4:3, edb3ff8567a1d537e7eccf3c21….jpg)

>>2061

i did use the gui to 'extract here' with defaults on ubuntu mate. i thought it was odd that it wouldn't run without error, but you know how it is with loonix nerds, just sudo it and move on sometimes, 2lazy2debug. i guess i'll need to chmod the directory when i have the program closed.

>>2064

thanks for responding so promptly. i have been making more progress with hydrus in the last day. I accidentally killed the process on a reboot and then ran it again later as root again, it opened without issue and i was able to search for a tag and also show files with >1 tag, the fade in of thumbnails while scrolling is neat. mostly i'm disgusted by how many shitty /b/-tier pics i have and also much weeb wow.

just now after re-reading the thread for updates i thought i should close it again so as to fix permissions, but this offered the maintenance again so i ran it to see if the rows/sec is different. surprisingly it is fast today, running at many thousand as you describe. ofc it's very interesting to see it do many sets of rows, so far many sets of xxx/yyy, the y values change. so i'm just letting it run for the moment while i type this, i'm glad it's faster.

perhaps the data on the disk was in a mess since it had just been copied over from another hard drive. the one linux is running on is a WD green 1tb only six months from new, the other drives for backup are older. i know there is talk that one 'never has to defrag on linux' but that is bs, it can be done but the idea is the 'average user' wouldn't ever need to since the filesystem isn't as cancer as windblows. i will look into the gnu defrag tools built into the system, maybe setup a script or service if possible. i know my system runs fsck on boot each time, so maybe the reboot helped it organize

maybe i'll stop tonight's process soon, its row speed has dropped to hundreds or less than 2 thousand since it keeps pulling from the repos, likely making a messy hdd. smh. why didn't i fall for le SSD maymay. oh /g/ee… well it looks like it finished during proofread.

>2016/02/21 05:34:31: repository synchronisation - public tag repository - finished

0 updates downloaded, 11 updates processed, and 425,725 mappings added

does that mean 425k tags, shared over 13k images? wew.

btw i did read the local help page on reducing lag before posting, thanks for reminding me, sometimes i find the documentation somewhat cryptic since i'm not sure what goes on behind the scenes. is the process to hash images added to the database, then compare the hashes with a list on a server, and if the server has tags for that hash the tags are downloaded and put in a local database? or maybe these tags are filled in from a master list like a blockchain? ugh, do you have a flow chart of the process or some whiteboard-style scribbles? the source is in python, i have no excuse.

thanks as well for considering tag clouds as a poll topic, the boorus and such sites really spoil me.

>>2066

ah another ipfs fan, obviously you read the man pages more, wew lad. i'll be sure to try that file wrapper syntax next time i do an upload. i had a feeling there was an easy way to do it but after noticing a directory in the url once i thought the obvious method was to make a directory and upload that recursively. glad there's a quicker method.

>>2071

living on the edge m8, can't you just remember to clean ur trash?


 No.2074

File: 1456051643775.png (588.91 KB, 1008x720, 7:5, ed14081018ade4345bfdd8933c….png)

>>2073

>the process is to hash images added to the database then compare the hashs with a list on a server, if the server has tags about that hash the tags are downloaded and put in a local database?

in b4 photodna api feature*


 No.2075

>>2074

>>2073

when the script says 'up to 30 minutes of maintenance' perhaps that should be hard coded and not just some text. keep a hard counter, or put it in some settings gui. lol i mean what a silly message.


 No.2076

File: 1456054978042.png (24.18 KB, 2048x2048, 1:1, b3159b78b575a5d0b509863b59….png)

>>2074

>>2075

>>2073

well it's working without root now. simply moved the db directory to ~/, purged the contents of the hydrus folder, replaced it with files from the tar extraction, removed the new db folder and moved in the old one. works fine with just ./client, no need for root.

>2016

>using a GUI

i should have known better than to save time with the mouse.

ran a search for # of tags >0, it pulls 1,211 files. the everything count is 13,431 files. # of tags >1 is ~700 files. from what i can tell the content of the untagged is reaction images, 3dpd, and anime art. and the content of the tagged is pretty much the same with just more pepes, but i have a lot of pepe in untagged as well.

maybe 8chin could put in some kind of maymay captcha system like the 3x3 joogle jooscript applet on 4chin.

>click the pepe to prove you're a real human being.. a real human being..

pic related, copied file location from hydrus to le fugfox.


 No.2078

File: 1456085485231.jpg (1.1 MB, 1772x1261, 1772:1261, 5f32728717e1798ab0551004ee….jpg)

>>2073

I'm glad things are working a little better. Hydrus is a continual thrown-together experiment, and I am doing some odd things that desktops don't usually do, so feedback from the real world is useful. Let me know if you figure out why it was processing so slowly that first time. As a test, yesterday I specifically ran my defragger at the same time as my laptop client was processing, and it slowed to ~6 rows/s, so maybe something like that was happening for you.

In response to your general questions about what's happening with sync, I had some nice .svg diagrams once and a whole help page for it, but I think they got a bit out of date and I then never remade them. The next time I go over the help files, I'll make sure there is a decent technical explanation somewhere, rather than general descriptions.

Basically, my intention with hydrus is to offload most of the typical server-side knowledge and responsibility to the client. Every new hash-tag pair that anyone uploads to a repo is regularly and blindly distributed to every client that syncs with it, and then those clients only ever search over their local cache. This has a heap of privacy benefits, because the syncing client thus never has to reveal which files it has, and the server doesn't have to spend cpu time doing db requests to filter its mappings for the clients. Every client basically maintains an anonymised copy of the server's db.
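In code terms, the model above is just 'store everything, filter locally'. A toy sketch (names are illustrative, not hydrus's real structures):

```python
# The server blindly sends every new (file hash, tag) pair to every
# syncing client; the client filters locally, so it never has to
# reveal which files it actually holds.

def apply_update(local_mappings, update):
    """Blindly store every (file hash, tag) pair from a server update."""
    local_mappings.update(update)

def tags_for_local_file(local_mappings, file_hash):
    """Answered entirely from the local cache -- no server round trip,
    so the server never learns which hashes this client looked up."""
    return sorted(tag for h, tag in local_mappings if h == file_hash)
```

A linear scan like this is obviously where the real client needs indexes and the planned cache layer; the sketch only shows the privacy shape of the sync, not an efficient implementation.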

Your client will eventually (you can see its progress in services->review services->remote->my ptr) catch up to everything everyone has submitted to my ptr, which is now about 36 million mappings over 2.5 million files, and growing every day. If you have any of those files, or you get them in the future, your client will match up the tags.

Unfortunately, as I said in my release post, the system is really creaking now, so I want to write a new cache layer between the massive pool of mappings and your specific 13k files so things calculate and search a bit faster.

>>2075

If you haven't seen yet, you can customise a bunch of this stuff in file->options->maintenance and processing. You can completely turn off shutdown processing, if you like, and have sync only happen when the client is open but idle.


 No.2080

>>2073

>living on the edge m8, can't you just remember to clean ur trash?

…What? What does hydrus randomly deleting files have to do with clearing my trash?

It's spelled "your", by the way, not "ur".


 No.2094

File: 1456259228774.jpg (380.76 KB, 1659x2182, 1659:2182, a45781cd31531a80437dc9bf3d….jpg)

>>2071

I do remember you reporting this previously, and I am sorry this is still causing you a problem.

I just had another look at the code, but nothing pops out at me as to why this is happening for you. The client will only delete from an export folder if it is set to 'synchronise', which I think I remember you had not set. Even then, it is only supposed to delete files that do not match the expected search results. This might occur if you had a search set that might sometimes give 0 results, although I don't know what that would be–maybe something with multiple system:age parameters. I presume you have a simple 'wallpapers' search for your export folder or similar.

I think we should see if it really is the export folder code that is clearing these folders out. Please go services->pause->export folders (so they won't regularly run) and then close the client. Copy some random files into your export folder and check back after the period in which you would normally expect the files to disappear. If they do not disappear while the client is closed, then the client is probably responsible, so open the client (leaving export folders paused) and wait that period again. If they still do not disappear, then it is likely the export folder code specifically that is doing it.

If it is the export folder code doing it, and you don't have a 'synchronise' export folder, then I really don't understand what is going on. I seem to remember writing some debug code or something for your situation, but I can't remember the outcome. We could try that again, if you like.


 No.2105

>>2094

I went off script and found something interesting.

>Opened the export dialog, which triggered a full sync (I am using sync at the moment)

> pause export synchronization

>deleted most files from the export folder, but not all

>added a random file

Results:

the file I randomly added was left untouched, but the remaining pictures were all removed.

Stated in other words: it appears something is triggering the synchronization logic, inverted. It deletes files that should remain, and leaves files that should be deleted.

I have verified, twice, that these files are being deleted from the export folder while synchronization is paused.


 No.2134

File: 1456608803663.jpg (1.8 MB, 2937x2086, 2937:2086, 30dbf56da2a1b276cc8c59ca86….jpg)

>>2105

That's interesting. If the files are definitely disappearing when export is paused, then I am confident that the export folder code isn't doing it–it is only ever called in one place, and can only fire when it is unpaused. It even abandons its work if the pause is set while it is running.

So, unless something very odd is going on, it is either something else in hydrus or an external program doing it.

Was the file you randomly added–that wasn't deleted–a jpg or something else that hydrus can import, or was it a .txt or something? If you add random jpgs, do they disappear? Although it sounds silly, please double-check you don't have an import folder set to that same location, especially if you run more than one client–this repeated clearing out could certainly be caused by a 'delete' import folder. You can even try hitting 'pause import folder sync', just in case there is some hidden import folder or something.

And how about your external storage locations? Are they all as you expect in file->options->file storage locations?

Then, can the delete occur when the client is not running? If so, then it is almost certainly an external program doing this. It could be some other wallpaper harvesting program or something like that that is only taking files younger than x hours, or only the aforesaid jpgs.

Just as an aside, I'll add some log stuff to import and export folders for v195–maybe that'll tell us more.


 No.2151

>>2134

OK, I found the issue, it was my bad.

Import task for ~/pictures

Export task for ~/pictures/bg

the import task kept "importing" and deleting the contents of bg. I changed the import settings to ignore files already in the db, but I'd prefer an option to not recurse into subdirectories.

Thanks for helping me narrow this down!


 No.2166

File: 1456771887513.jpg (320.73 KB, 737x1118, 737:1118, 3ba07418f0c54e1a4b4f90279c….jpg)

>>2151

Thank you for the update. I think this was ultimately a user-interface problem. Both of those dialogs could do with some better info text, and I will add a test for and warning about overlapping import and export folders.
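The planned warning could be as simple as a common-path test between each import folder and each export folder. This sketch (function name hypothetical) flags the ~/pictures vs ~/pictures/bg nesting from the report above:

```python
import os

def folders_overlap(import_path: str, export_path: str) -> bool:
    """Sketch of the planned warning: an export folder nested inside
    an import folder (or vice versa) means the import side can
    'import' and then delete freshly exported files."""
    a = os.path.abspath(import_path)
    b = os.path.abspath(export_path)
    # if the common prefix is one of the two paths, one contains the other
    return os.path.commonpath([a, b]) in (a, b)
```

os.path.commonpath is component-aware, so /a/pictures and /a/pictures2 correctly do not count as overlapping.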



