No.499795
https://ipfs.io/
"IPFS is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository"
Share your cool ipfs hashes
No.499799
WARNING: THIS SHIT IS WRITTEN IN GO.
THAT'S RIGHT, WHOEVER IS MAKING THIS PROJECT IS CLINICALLY INSANE.
AVOID AT ALL COSTS.
No.499808
>>499799
What do you recommend? Fucking assembly?
No.499826
>>499799
>uses .io domain
of course it's going to be shit
No.499834
>>499826
Considering what issues the project addresses, it makes perfect sense for it to use .io domain.
You know, because I/O. And the project is about the structure of the Internet at the moment and how everyone has to connect to a server to get content instead of distributing the content, etc.
No.499853
>>499834
io is a fucking meme domain used by scriptkiddies at this point
No.499857
>>499853
>judging a piece of software by what domain it is using
No.499878
>>499799
IPFS is a filesystem you retard, you can write it in anything you want. There are other implementations planned like js-ipfs or py-ipfs.
No.499879
>>499834
>Considering what issues the project addresses, it makes perfect sense for it to use a British Indian Ocean Territory domain
No.499888
>>499878
>js-ipfs or py-ipfs
Here's a question. Why didn't it start there?
Answer: project maintainer(s) is/are insane.
No.499908
Last I heard it's been updated to version 0.4.0, what has been fixed/added since last version and what can we expect for the next version?
No.499955
>>499888
Are you saying that Python and Javascript would be a saner choice for this?
No.499958
>>499955
Are you saying Go is a saner choice than those?
plz, anon
No.499965
>>499958
I think Go is nice. What is it you don't like about it?
The thing is, scripting languages are not that good for anything intensive. They have their uses. Javascript runs in web browsers, and Python is easily portable. But for serious use, you want these things to run fast.
No.500002
waiting for tor and i2p support. dev says he'll focus on it in february. https://github.com/ipfs/notes/issues/37#issuecomment-171154379
IPFS NEVER
>>499799
it's just the reference implementation of the protocol. and there are some projects for other languages.
>>499955
>using python or javascript for a program that's supposed to become a core infrastructure piece
No.500083
>>499955
yes, you fucktard
No.500104
>still no i2p support
Lack of anonymity is the only thing keeping ipfs from becoming popular.
No.500292
OP you should really include more links.
This script changes all ipfs hashes into links to the ipfs gateway (on page loads)
https://greasyfork.org/en/scripts/14837-ipfs-hash-linker
These redirect all gateway links to your local daemon, it works well with the previous script.
https://addons.mozilla.org/en-US/firefox/addon/ipfs-gateway-redirect/
https://chrome.google.com/webstore/detail/ipfs-gateway-redirect/gifgeigleclkondjnmijdajabbhmoepo
Here's some general propaganda (I genuinely think this is a good article that conveys some good things about IPFS)
https://blog.neocities.org/its-time-for-the-permanent-web.html
Can Go complaints be directed to the Go thread >>493136
We've had some nice IPFS threads in the past months with lots of tech talk and file sharing going on.
>>499908
A lot of performance stuff, uses less memory, does a lot of operations faster, and apparently fixes some network issues, some people were saying they couldn't get some hashes in the previous thread but could with .4. Outside of that I think there are some functionality things like more arguments and some more config options.
>>500104
It's being worked on iirc, so it's at least planned. I believe they want to interoperate with a lot of different transports so people can use what they want for what they need, like i2p for privacy, and if someone doesn't trust i2p they could use whatever they want instead, like tor or something else.
No.500889
So, IPFS booru, how could that work? The simplest way I can think of would be to save hashes and tags to a central website, and just retrieve the images with ipfs, but could this be completely decentralized?
No.500909
>>500889
What you said seems to be the easiest right now; handling dynamic content seems to be something they're working on with ipns and some other thing (ipld?), but it's not all finalized yet.
I've seen people use typical http for all the dynamic stuff and ipfs for the more permanent things like the html, js, images, etc. There are also examples of people using Ethereum with IPFS to handle dynamic content, but I don't know much about that. Someone had a site hosted via ipns that displayed a number and a form entry box; you could submit a new number and the site would change (the ipns record would update), and the new number would be displayed for everyone who visited the ipns hash. It used Ethereum to handle the number somehow. If someone has the hash to that please post it.
This guy wants to make some kind of imageboard-like thing too so maybe it's worth looking into.
https://github.com/fazo96/ipfs-boards
No.500911
>>500889
>>500909
I forgot to mention >>>/hydrus/ is planning on integrating IPFS. The hydrus network is essentially a local booru that syncs up with repositories; the repos can contain tags, files, and more. This seems like the best option we'll get in the short term and maybe even the best long term. You'll run a client with your media, sync remote tags over ipfs, and distribute files via ipfs as well.
No.500936
>>500911
Nice. Has it always supported automatic tagging based on hash?
No.500941
>>500936
That's the basis of the project, it takes in media, hashes it, and you can assign tags to that hash and have relations with tags (parent and sibling).
The tagging is not automatic, it's just shared if you use the public tag repository: if you and I have the same file and one of us tags it publicly (local and remote tags are kept separate), then that tag will show up for both of us eventually (after you sync the repo).
You can automatically assign tags based on things like filenames and other factors, but it's not magic. There is, however, a planned feature that does actual automatic tagging:
>>>/hydrus/1553
No.502055
>>500104
You can already use it over Tor and CJDNS. The i2p thing is just creating a pure TCP mode.
No.502060
>>500889
There's a site called Hiddenbooru on i2p
No.502189
>>499799
>using the best language after maybe C
seems like an advantage to me
No.502201
still no Tor or I2P support and no one uses it. IPFS will forever be a meme.
No.502202
No.502373
I just got my personal file host working which adds all files to ipfs while also giving me a http link to share with normies.
No.503167
Has anyone done a proper comparison of 0.3 vs 0.4?
No.503231
>new piece of technology still in alpha and being actively developed as a proof of concept
>NOBODY USES IT DEAD ON ARRIVAL MEME LANGUAGE
If you hate Go so much, go organize an IPFS implementation in a real language. Until then stop bikeshedding and go start another browser thread or something.
No.503599
If I leave my computer off for, say, two weeks and I turn it back on after a node runs a cleanup, do I have to do anything to re-enable the content I've added? Or does the daemon auto-pin the content to the node again?
No.504559
>>503599
When you add content yourself you're also pinning it, so it is still pinned whenever you start your node again.
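You can see that for yourself; roughly like this (the paths here are just an example):
ipfs add -r ~/share/some_album     # adding pins the new root recursively by default
ipfs pin ls --type=recursive       # the root hash still shows up here after a daemon restart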
No.504588
>>499799
Go is one of the best languages. Ah those nice, statically compiled binaries. And it has a great toolset. Finally the compilation process is invoked with a simple command and not 30 automake scripts.
No.505540
>>503231
I think the more important aspect will be the JS impl.
Once that's out, people will start using it as a matter of course.
No.508083
>not written in ocaml
It's shit fam.
No.508084
ipfs.io/ipns/gindex.dynu.com
No.508198
I just ran into this, so theatrically a file can never be deleted in the file system ... theatrically.
No.508209
>>508198
>theatrically
I laughed.
No.508220
>>508209
fuck i should stop using my phone for this shit
No.508523
>>508198
not really; keep reading
No.509350
>>502055
wot? i2p doesn't mandate TCP, tor does.
No.510525
Hey guys, I am thinking of implementing this as a fuse mount and read it as a p2p trivial deb repo. Anyone tried this? Any problems you can foresee? Are there any more mature projects as an alternative to this?
No.510557
>>499799
>>499826
jej. thanks for the warning. and it seems to be written by a bunch of nobodies. I still haven't got an answer to why I should study this instead of Freenet.
No.510614
>>502060
what is the link?
No.510669
>>508198
Coming this summer
Anon Shares A File
An ancient evil is about to be uploaded so you better hang on to your keyboards if you want to keep your daemons running!
No.510706
>>510525
>Anyone tried this?
Tried once adding the gentoo portage tree, but it would have taken too long.
>Any problems you can foresee?
Adding lots of files will probably be slow, you should try with 0.4-dev which should improve performance a lot.
One of the nice things that can be done is calling apt-get clean regularly or putting /var/cache/apt/archives/ on a tmpfs, since files are already cached by IPFS
>Are there any more mature projects as an alternative to this?
No, unless you count apt-torrent, debtorrent, apt-p2p, which were even more unstable than IPFS and had plenty of drawbacks.
>>510557
>java enterprise abstractorfactoryfactories, dating back to 2000
>golang split into dozens of repos, most documentation incomplete
Pick your poison
No.510805
>>510706
Tight stuff. Thank you anon.
No.510845
>>510557
because freenet is for websites and this is something completely different
>tor
>i2p
the feds will soon control this
No.511361
>>510525
>>510706
Arch can be easily configured to pull packages from IPFS, I bet it wouldn't be hard to do the same for Debian repos.
https://github.com/ipfs/notes/issues/84#issuecomment-164048562
No.511684
Could a chan be made to work with IPFS?
My thought was to have a normal SQL database, etc., with post content (or alternatively, have each post stored as an IPFS address that is loaded via JS or some shit if that's expensive to store too), and then the users themselves could contribute to the hosting of image/video content.
As IPFS caches the files it loads, the users of any particular board would then be contributors to the hosting of that board's content.
Could something like this work?
InfinityIPFS. We could get Josh to build it.
No.511687
fucking retards dont know how to use tor :)
No.511691
>>511687
tor isn't made for sharing large files you fag
No.511692
No.511725
>>511684
See >>500909
There's a handful of other ideas floating around of how to do it, search for discussion around how to use IPFS with/as a database or other existing systems.
Hotwheels mentioned before that he was lurking one of the IPFS threads and may be interested in looking into it, granted this is still alpha so I highly doubt he would (and I don't recommend he does) use it now.
There are big things that need to be done before this rolls out for something as big and dynamic as a popular imageboard. It's mostly for long-term static content right now, but the more dynamic things are in progress; IPNS with pubsub as well as clientside resource limits would have to be finished before this should be deployed at that scale. Another thing would be figuring out how to accommodate non-ipfs users: 8ch could host its own gateway and have IPFS users just redirect to localhost like the addons do for the official gateway (or use a hostfile). A more practical solution would probably be to use the javascript implementation when that's done, but I know nothing about that; I'm making a baseless assumption that it would be resource heavy and slow to run some kind of instance for each tab. Maybe it doesn't have to, I have no idea how browsers/js work really.
No.512278
Bump,
here's all 3 episodes of Boku no Pico
Boku No Pico Episode 1 - ipfs/QmbTWVLtUhdLJws4reyJ7CnkVwwivR4FTM3Jnj9YebNhBu
Boku No Pico Episode 2 - ipfs/QmeTUFENeJJjcN617m3Twd5kCdcTnoyZKHANkZ7NnYe2de
Boku No Pico Episode 3 - ipfs/QmXy6yZAwwtmQc44t7sy7ivndqdHMCBUHu3vbrsih5WzjG
No.512282
>>512278
single directory link instead: ipfs/QmeCq2H2w2tJ9Yr8AmLi7bjkopdNpaB5LaG9fZRy52Q4Ts
No.512321
>>512278
>>512282
my shitty upload is triple of download from the very start. seeeeeeeeed plz!!!!11
also yes, please put them in folders and name them properly because mpv couldn't figure out what the fuck those are. that's
>avi
>in current year
after all.
No.512346
>>511691
>tor isn't made for sharing large files you fag
wrong. making shitloads of connections to different addresses is what causes problems on tor.
you can download files as big as you want using things like http or ftp without any extra load.
>>511687
you're still cancer tho and your priority will automatically lower if you put too much load.
>>510557
i tried it here it worked:
/ipfs/QmdG8zpEQErMiAzEqnvEU29yk9DzZ6PK13tvuDLWdj5unv
/ipns/Qmeg1Hqu2Dxf35TxDg18b7StQTMwjCqhWigm8ANgm8wA3p
>>511361
pulling packages is trivial: just add an ipfs address as a repo. the issue is how to best publish and update the repo. there are several ways.
No.512351
>>512346
s/>>510557/>>510525/
>>510525
>implementing this as a fuse mount
it already can do that.
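for reference, the usual setup is something like this (assumes fuse is installed and the daemon is running; the mkdir/chown step is the standard go-ipfs advice):
sudo mkdir -p /ipfs /ipns && sudo chown "$USER" /ipfs /ipns
ipfs mount    # after this /ipfs/<hash>/... is a plain path you can point apt or anything else at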
No.512383
>the issue is how to best publish and update the repo
IPNS seems like it would be the best way but they're still working on that. IPNS works now but there are limitations being worked out.
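A rough sketch of how the IPNS route could look, for the curious (the repo path is made up, and the consumer line is just one possible way to resolve it through a local gateway):
NEW_ROOT=$(ipfs add -r -q /srv/debrepo | tail -n1)   # -q prints hashes only, root last
ipfs name publish "/ipfs/$NEW_ROOT"                  # point your node's IPNS name at the new root
# consumer side, in sources.list, via the local daemon's gateway:
# deb http://127.0.0.1:8080/ipns/<publisher-peer-id>/ ./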
No.512459
>>512321
I just have really shitty internet
No.512506
>>500083
kill yourself. Literally wat. No typing in Javascript. Poor performance for infrastructure needing performance.
No.512560
>>512506
what is type inference you unadaptable cunt
No.513401
>>512346
>pulling packages is trivial: just add an ipfs address as a repo. the issue is how to best publish and update the repo. there are several ways
Deb guy here, this is why I mentioned the fuse mount. If the packages appear as a local directory which anyone can add to then a package db(I think it's a db) can be made locally to reflect available files.
So /etc/apt/sources.list would read
deb file:/usr/local/ipfsfuse/debs ./
Then the repo update script can be edited to include
dpkg-scanpackages /usr/local/ipfsfuse/debs /dev/null > /usr/local/ipfsfuse/debs/Packages
So the packages are made available to the package manager just by updating the repos.
In addition we can give it a very low pin priority to ensure it doesn't pull system updates from there.
No.513805
reposting my touhou link:
https://ipfs.io/ipfs/QmUSgfC3RsXKzKJuUNtpzDK2Wm4ehPxhQ1SxJMcgUqStxg
>>512278
>>512282
downloaded torrent & manually added, now should seed properly
No.513844
>>510845
freenet hosts arbitrary files. All I know about ipfs is that it hosts files
>>510706
>Pick your poison
Not that I care what PL it's written in, but why would you want Go *and* Java instead of just Java...
No.514087
>add a large file
>run out of space while hashing
Just kill me now.
No.514123
My files aren't showing up on the network. If my go-ipfs version is too old, does it just ignore my pins?
No.514128
>>504588
>Statically compiled
Please die.
No.514179
>>514087
I can't wait for pin in place, I get why you'd want to make a copy of it but there's no way I'm keeping 2 copies of everything I want to share. At least it's on the issue list
https://github.com/ipfs/go-ipfs/issues/875
As soon as this is done I'm sharing my entire media drive. It will be like DC all over again but so much better.
>>514123
What do you mean not showing up? The only issue I can think of is that 3.x peers can't communicate with 4.x peers, if both of your endpoints are on the same version they should work fine. The public gateway should currently handle either though.
Make sure your daemon is running I guess.
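Some quick things you can check (the hash is a placeholder):
ipfs version                        # are both ends on the same 0.3.x / 0.4.x line?
ipfs swarm peers | head             # is the daemon up and actually connected to anyone?
ipfs dht findprovs QmYourHashHere   # is anybody advertising the content on the DHT?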
No.514198
~ $ eix ipfs
No matches found
~ $ eix -R ipfs
No matches found
Sweet, I don't even have to throw it in the trash myself.
No.514208
>>514198
I really don't get why you'd install something just to uninstall it, if you think it's trash why even bother in the first place?
No.514754
Hello /tech/, /a/ here. How difficult would it be to build a tracker on IPFS, and would it be possible at all?
No.514766
>>514754
Doesn't seem that hard at the most basic level. You submit a hash to the tracker with comments about it, and the tracker displays whatever metadata it can find with it and indexes it for searches. Pretty much anyone can do that with a standard torrent tracker website template. As for actually tracking, I don't think you can keep track of how many people are seeding/leeching on IPFS. Could be wrong, though.
If it was really ballsy, it could have an option to download a local copy (below a certain size) and spit out some video screencaps if it detects a video. It would have to delete the file immediately afterwards but it's a way of verifying that the file is legitimate without having to trust the uploader. But that's just a pipedream for a widely-adopted IPFS future.
No.514767
>>514754
Check this
http://ipfs.io/ipfs/QmaxcHGXGFq1tXKDnQv9vuXMnF8vBKaoNjUs9GiXnABKcn
You don't need a peer tracker with IPFS, IPFS tracks all that itself. The only thing needed is an index of IPFS hashes; what most people call a torrent tracker is in fact an index that also runs a tracker on the same domain, and with IPFS you only need the index part.
tl;dr you just need some kind of text file that says hash X = series Y in format Z
Ideally you'd have rich searching too. You could more than likely modify whatever existing torrent tracker site frontend you want to just point to ipfs hashes instead of torrent ones after stripping out all the ratio stuff, etc.
Long term I really hope people use IPNS with mount points. Imagine you want to watch Series X: some release group can go "here's this ipns (not ipfs) hash", you mount that hash to your media drive like ~/Anime/Series X/, and then when the release group puts out an episode it would automatically be pushed to that directory as episodes are released. With pubsub this should be possible.
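A hedged sketch of how that could look with what's already there (the name hash and paths are made up, and the pubsub-driven updates aren't implemented yet):
ipfs daemon --mount &                                       # exposes /ipfs and /ipns as fuse mounts
ln -s /ipns/QmReleaseGroupNameHash "$HOME/Anime/Series X"   # when the group republishes the name, the directory contents change underneath you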
No.514774
>>514767
Is their DMCA infrastructure a problem for the network, or just a problem when fetching from the web?
If it is a problem, would it be possible to have a fork of the network? Like a private ipfs network bootstrap? Sort of like a private bittorrent tracker?
No.514777
>>514774
nobody can guess a hash of your file. so you would just share privately. as for security I would just have something similar to a seedbox to prevent people from getting your ip and contacting an ISP
No.514778
>>514774
But unlike a private tracker there's nothing stopping nodes from outside connecting to your swarm and then fucking it all up.
No.514781
>>514774
To be honest there's not much that can be done to take anything on IPFS down. The only current DMCA solution is to have an opt-in blacklist for gateways, and gateways are only useful for people not running ipfs themselves. If you're running ipfs yourself and you want a hash that is reachable then you're going to be able to get the content, it can't really be prevented. Kind of like torrents and their hashes, you'd have to take down all the peers hosting it.
>If it is a problem, would it be possible to have a fork of the network? Like a private ipfs network bootstrap? Sort of like a private bittorrent tracker?
Yes, you can do that now: you can choose your own bootstrap nodes and as such run private IPFS networks (rough sketch at the end of this post), but there's no real reason to do that I can think of outside of bandwidth considerations, and IPFS will support resource limits natively eventually so this should be a non-issue later.
As for private sharing like >>514777 said there's no harm in being exposed to the network with private content since nobody can retrieve it unless they know about it anyway.
>>514778
I forget but I think there's some way of preventing this. I remember PerfectDark having a similar "issue" but they treated it like a benefit. I agree with that too: everyone should be connected to everyone else, it keeps the network resilient AND fast when everyone shares with everyone as fast as possible, a race to idle kind of thing.
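For the private-swarm idea, a rough sketch with today's commands (the multiaddr and peer ID are placeholders; this only changes peer discovery, it isn't real access control):
ipfs bootstrap rm --all
ipfs bootstrap add /ip4/192.0.2.10/tcp/4001/ipfs/QmYourTrustedPeerID
ipfs daemon    # the node now only learns about peers through your own bootstrapper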
No.514789
File: 1454735044612.png (220.35 KB, 500x372, 125:93, gets tremendously upset by….png)

>add .webm video to IPFS
>load it through gateway to test its availability
>play it through Icecat and it doesn't work at all (file is corrupt)
>play it through Firefox and the subtitles are gone
>takes about five years to launch it in mpv because it needs to buffer the whole fucking video before playing for some reason, even though the video is minutes long
Is it my encoding? Is there any way to stream chinese cartoons to other people without hardcoding the subs?
No.514843
>>514789
Did you test the file in your browser locally? If the subtitles don't show there then you broke it yourself on encoding. Files work fine for me even through the gateway; with that said, the gateway is a backup solution, ipfs obviously works best client to client.
No.515087
>>514774
https://github.com/ipfs/go-ipfs/issues/1386
They plan on adding support for private blocks
>>513401
Why not just use the http gateway URL in sources.list?
Also running apt-get clean regularly would free some space, since packages are already cached by IPFS.
Creating a mirror from downloaded packages would be pretty cool.
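For the caching point, a sketch of the tmpfs idea mentioned earlier in the thread (size and schedule are arbitrary), so the only long-lived copy of each .deb is the one in the IPFS datastore:
# /etc/fstab: keep apt's own package cache in RAM
tmpfs /var/cache/apt/archives tmpfs defaults,size=2G 0 0
# or just clear it from cron now and then
apt-get clean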
No.515091
>>514843
I've since decided to just bake the subtitles in. My purpose is for normalfags to be able to watch it with their shitty browsers, so the compatibility's gotta be high.
Since that post I've been able to serve it up better by shrinking the file size. Still get the corrupt error in Icecat but I figure anyone using Icecat is also competent enough to launch it through some other means like mpv.
No.515352
>>499808
Of course, what else should we use to write a filesystem? :^)
No.515355
>>515091
Normalfags don't care about whether their animu is in high quality or not, they just care about It Just Works, Click Play and Enjoy and quality not being total and complete shit. They basically just want a clandestine Jewtube/Netflix.
No.515388
>>515091
One could try extracting the srt/vtt files and embedding them within <track> tags, with some javascript/redirect to switch from the default.
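If someone wants to try that, something along these lines should get the text track out for a <track> element (assumes ffmpeg and that the subs are the first subtitle stream):
ffmpeg -i episode.webm -map 0:s:0 subs.vtt   # extract the first subtitle stream as WebVTT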
No.515415
>>515091
On ipfs add, try using the -t flag, see if it makes it any better, it uses the trickle-dag format which should be better for media files.
See:
https://github.com/ipfs/go-ipfs/pull/713
and the older:
https://github.com/ipfs/go-ipfs/pull/687
For more info on the format.
ipfs add -t *files or directory*
I'm not sure but I think it's less efficient (uses more data) but more resilient. I could be wrong though.
As for the subtitle track, that shouldn't be a problem with IPFS, that sounds like a browser or encoding issue since an HTML5 compliant browser should support subtitle tracks in video files and video tags. I haven't experimented with those too much in browser myself yet.
No.515417
>>499795
>file system that seeks to connect all computing devices
>everyone is connected to everyone
The ultimate botnet?
No.515476
>>515417
i want to join it tbh.
>>515091
>I've since decided to just bake the subtitles in.
absolutely haram. why would you pander to normies on a animu on an experimental tech? wtf?
>>514789
>icecat
>firefox
you realize that normies use Chrome based browsers, right. They don't give a shit about Firecuck, which is good because FF sucks.
>takes about five years to launch it in mpv because it needs to buffer the whole fucking video before playing for some reason, even though the video is minutes long
did you put just 1 keyframe for muh filesize and expect it to seek? yeah sounds like shit encoding.
>>515087
>Also running apt-get clean regularly would free some space, since packages are already cached by IPFS.
No, you don't want apt to cache by default at all in that case.
No.515561
>>504588
>statically compiled binaries
t-this is a joke...r-right?
No.515564
>>515476
you realize no one cares about normies here, right?
No.515571
>>515417
Wasn't this one of the Plan9 goals? That's the world I want to live in: global distributed file systems for public files and private clusters for private files. You can't beat this scale of redundancy and ease of replication, over a network even. They should add some form of verification/integrity checking, that would be great. Given that you can dump a list of all the content you have, you could do it now in a crude way that would repair any corruption by just downloading everything you already have to /dev/null; it will fetch any part that's broken. There should still be some built-in official solution for this though if they want to be a filesystem imo, since they rely on the underlying fs to do it for them now. Maybe it's a long term goal, who knows.
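Something like this crude sweep would do it today, I think (untested sketch; it re-fetches every pinned root into a scratch dir instead of /dev/null because ipfs get wants a real output path):
mkdir -p /tmp/ipfs-verify
ipfs pin ls --type=recursive | cut -d' ' -f1 | while read -r h; do
    # re-reading pulls any missing/corrupt blocks back from the network
    ipfs get "$h" -o "/tmp/ipfs-verify/$h" && rm -rf "/tmp/ipfs-verify/$h"
done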
>>515561
http://www.chop.edu/health-resources/where-get-speech-therapy
No.515587
>>515571
I want an ipfs-to-9p bridge that I can just run as a daemon and `mount -t 9p` from
Wake me in 2030 when it's done
No.515628
>>515587
I only know a little about 9p, I meant to look into it more. Since IPFS can be mounted via fuse and apparently even via Dokan(y) on Windows, what would bridging to 9p allow you to do that you can't accomplish already?
I wonder if you could just rewrite the existing mounting portions and use some 9p Go lib for what you want.
No.515718
>>515628
>what would bridging to 9p allow you to do that you can't accomplish already?
Run the bridge on one computer, now your whole LAN can access IPFS without installing anything and you don't even have to touch NFS's bullshit with a 10 mile pole.
No.516151
>>515561
It isn't, and there is no way to do dynamic linking or create shared libraries out of Go libraries. Neither can Nim. Rust can, but you have to jump through multiple hoops due to cargo not supporting it correctly.
Why all these languages aren't just GCC/LLVM frontends is something I still fail to comprehend.
No.516279
>>516151
I'm not positive but I'm pretty sure you can do dynamic linking in Go via gccgo, and the standard Go compiler eventually added a way to do this in 1.5 as well.
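For reference, the flags in question, from memory, so treat this as a pointer rather than gospel (the package path is a placeholder; gccgo can also emit shared objects directly):
go install -buildmode=shared std                      # build the standard library as shared objects
go build -linkshared -o app ./cmd/app                 # link a program against them
go build -buildmode=c-shared -o libapp.so ./cmd/app   # or produce a C-callable .so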
No.516891
>>514767
>tfw I uploaded QmVBEScm197eQiqgUpstf9baFAaEnhQCgzHKiXnkCoED2c
Here, add this to your list too: /ipfs/QmVYLYeFLvxEEV6qFP8SHH4kP8VJb4Py4vpRkgqH8Hjyfx
It doesn't want to play in-browser for some reason. Probably because it's 10-bit :^)
>>514789
I've never got WebM subs to work in any browser, even WebVTT subs. I don't even think that Chromium's built-in player supports subs.
No.517095
>>516891
>I've never got WebM subs to work in any browser, even WebVTT subs
They work fine if you have separate files and an HTML <video>/<track> wrapper, dunno about embedding them though.
No.517099
>>517095
Ahhhhh okay, I've just been embedding them in the WebM.
It should still work with embedding though.
No.517211
>>516891
The solution I chose is re-encoding for browser streaming. You get huge filesize gains (especially with libvpx-vp9) and the subs show up just fine, but it comes at the price of (a little) quality loss and hardsubbing.
I'm imagining a niche for three-episode tests, where the convenience factor lets you try it out and you can download the top-tier (read: HorribleSubs 720p because nobody has good encodes these days) quality episodes from a torrent if you stick with it.
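For anyone curious, the re-encode described is roughly this (settings are illustrative, not tuned, and the subtitles filter assumes an ffmpeg built with libass):
ffmpeg -i episode.mkv -vf "subtitles=episode.mkv" -c:v libvpx-vp9 -crf 33 -b:v 0 -c:a libopus episode.webm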
No.517226
>>517211
>>516891
Did you try this? >>515415
I'm curious about it.
No.517238
>>517226
Oh no, I didn't. I'll have to re-add it and see if it makes a difference.
No.517314
>>508198
It's the same as a torrent. As long as someone keeps seeding it, it will exist forever
No.517633
No.517647
>>517226
>>514767
Okay, the new trickle dag'd hash is /ipfs/QmVb23Ad9Q3nyyLdhzRpqxVuqUJkwecPGwFViKTmSF6dEp
No.517997
>>516279
Really? That'd be glorious. I'll check it out, thanks anon.
No.519692
At this stage, how well does go-ipfs run at server scale? Would a site hosting all their content on IPFS, like Neocities for example, experience any significant delay/latency delivering pieces of a website? Do they just use it for archival (unless they specialize in IPFS storage, like Glop or ipfs.pics)?
No.519809
>>499795
>2016
>IPFS
>not ZeroNet
bad choice mate
This is the future >>519171
No.519828
muh update http://127.0.0.1:8080/ipfs/QmYt8G153xE6jaPxPKxa6mcWQBgVWGdz1BRVbxymdALCTF
>>519809
fug JIDF HQ says switch tactics. :--DDDDD
tho, it seems to require JS so IPFS will be simpler for pure file sharing (also mounting and all that).
also impressive for a meme even memer than ipfs (because "bitcoin crypto", JS), zeronet supports tor including tor-only.
No.520074
>>519809
Even people in that thread don't want to use it, bad choice I guess. That seems to have the same problem people have with freenet, sharing content you don't explicitly want (possibly illegal).
No.520101
>>519809
>TCP is obsolete let's use React.js!!!!!!!!!!!
No.521022
No.521376
>>521022
The best we can do now is to either convince them to host their files with IPFS, or just put up any book/paper we download from Libgen onto IPFS for at least partial redundancy if it ever somehow kicks the bucket. I don't think all of us can mirror such a large amount of content.
No.521755
Bumping to save from spam.
No.524702
Is it just me or does the ipfs daemon randomly stop working after many hours? I also put my computer to sleep every night, could it be a bug when it wakes back up?
No.524785
No.525759
>>524702
I've been running mine for days to weeks without issues. Does it give you some kind of error or can you just not connect to things after you wake up?
No.525797
>>525759
On my desktop it will randomly stop connecting to anything, even stuff on the local network. If you try 'ipfs stats bw' you'll normally see at least some traffic in the sub-kilobyte/s range; when it's "dead" I see 0 kbps up and down. Then I kill it and restart it and it works just fine.
Just werks on my SBC, and that's the one I do all my hosting on, so it's really not a huge issue.
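Until someone finds the actual bug, a dumb workaround along those lines (the interval and the "no peers means wedged" heuristic are arbitrary guesses):
while sleep 600; do
    if [ -z "$(timeout 60 ipfs swarm peers)" ]; then   # daemon hung or lost everyone
        pkill -x ipfs
        nohup ipfs daemon >>/tmp/ipfs-daemon.log 2>&1 &
    fi
done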
No.525806
No.527043
Where is the C++ implementation?
No.527458
No.527961
>>514754
pretty easy, but there is a built in faggotry filter that will likely delete your animu
No.532935
>>514767
>http://ipfs.io/ipfs/QmaxcHGXGFq1tXKDnQv9vuXMnF8vBKaoNjUs9GiXnABKcn
>If you have an anime you want added to the list, please send me an email with the link, title, and quality.
Yeah, this seems like exactly what IPNS was made for, considering that otherwise you won't be able to follow the page.
>>514774
>Is their DMCA infrastructure a problem for the network, or just a problem when fetching from the web?
Like >>514781 points out, it's only a problem when fetching from the web.
I ended up (stupidly) posting a link to some Light Novel PDFs on a public site, using ipfs.io's gateway, and when I checked back later, it was blocked. But you can still access all the files from any other gateway, including a local one.
Basically there doesn't seem to be any kind of risk of being shut down by DMCA, unless a copyright holder is really aggressive and plans to go after all the peers.
No.533197
No.535229
>>532935
>>533197
That page has always had an IPNS link at the bottom of it, you can see it on both of those and when clicked it returns the latest page if the host is online or someone else is keeping the IPNS alive. I forget if IPNS keep alive has been implemented yet but it's certainly planned if not.
It's the "load latest tracker" link which points to
/ipns/QmUqBf56JeGUvuf2SiJNJahAqaVhFSHS6r9gYk5FbS4TAn
This practice is common on Freenet too I believe, where people link to the version of the page they saw, since it will always contain whatever it is you wanted to link. There's no compromise though, since you can always reach the latest version by clicking some link. It's a pretty good system imo. An HTTP analogy would probably be a page that hard-linked to its domain name somewhere: if you save the html file locally and open it you may not have the domain name, so you won't have a way to reach the latest version of the page, but if you click the hardlink it will resolve if it can.
No.535463
>>519809
>ZeroNet
>not Ethereum
No.535464
>>535463
>Ethereum
>not MaidSafe
No.535473
>>535463
>>535464
> newfangled meme networks
> not freenet
No.535637
>>514777
>nobody can guess a hash of your file.
Until someone downloads it and sends the hash out to everyone looking for peers.
Just encrypt the file with a symmetric key first
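i.e. something like this before adding (cipher choice is just an example; share the hash and passphrase out of band):
gpg --symmetric --cipher-algo AES256 -o stuff.tar.gpg stuff.tar
ipfs add stuff.tar.gpg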
No.535638
>>535473
>freenet
>literally 90% of the nodes are FBI
:-DDDDD
No.535642
Would it be possible to implement versioning and ownership/master options like you see in syncthing in this?
the problem i see happening with this is that it's designed to be static, so how would you avoid bloat without something like that?
No.535740
>>535642
The only option at the moment is to use IPNS, but supposedly they're working on a system like you describe.
No.537250
found something neat about the http(s) gateways. they are all named after the planets of the solar system. also you can pull ur file from each one without letting wget gamble in the round robin. i used http only but you can do https with --no-check-certificate since wget doesn't like it otherwise.
https://ghostbin.com/paste/b7gkz
imo this software would be best used on a VPS or in a datacenter where upload isn't an issue, but cucks insist upon using it backwards, trying to use the swarm as a backend to the http gateways thinking it will be free hosting. smdh why bottleneck the data at 8 machines which are already overworked.
any news about browser add-ons that run the daemon? i heard there was a javascript version out somewhere too. the dev talks about browser integration but i haven't seen anything yet. if you could put the client node in the browser it would save work for the gateways.
yes i know there's a localhost redirect add-on and one for detecting hashes on webpages, but this requires you to run ur client/server on ur local machine by hand. I know it would be stubbing the program to have a client browser add-on but it would make it more like MEGA and such. surely with WebRTC and other cuckware it shouldn't be hard to setup a botnet as well.
>hey mane get this add-on and let me send you a file.
No.537307
No.541191
https://github.com/ipfs/go-ipfs/issues/875
>avoid duplicating files added to ipfs
>anarcat opened this Issue on Mar 6, 2015
>open
This is like one of two things keeping IPFS in meme software territory.
No.541198
>>541191
Not really, the common user doesn't want to share multi-GB files.
No.541212
>>541198
Would be a nice replacement for torrents, since it would be harder for a swarm to die. Also, season packs would be superfluous.
No.541224
No.541229
>>541191
>>541212
It'd be nice for large files, but not for files you are constantly editing. I believe a comment there mentioned it, but the reason it's like this is because people might end up moving or editing the original file, which would break the hash.
A solution might be to still move the file to the datastore and leave behind a link, but that only solves the "moving" issue, not the editing.
No.541282
>>535464
Maidsafe is SJW approved! Look at all the diversity on their website, isn't racemixing the most heartwarming thing ever to be forced down your throat? I hope it turns out to be a total scam, but as of now the coin is severely over-valued. As for ethereum, what can it do that counterparty can't? Check m8 altcoins.
No.541316
>>541224
This.
IPFS could have downloadable waifu-bots and I still wouldn't use it until there's i2p support.
No.541368
I think IPFS and ZeroNet threads should be merged into one versus thread. Does anybody agree?
No.541402
>>541212
It's already usable for large files as a torrent replacement, but normalfags won't just magically start sharing gb+ size torrents they made themselves, unlike power-users. Even then, if you're seeding that shit, you gotta have lots of space anyway. It's clearly a flaw, but as they noted, not that high a priority compared to ironing out issues with the protocol itself.
No.541692
How resource intensive is the current implementation of ipfs? My server is a simple arm board with less than 200mb of spare ram.
No.541828
>>541692
Not very CPU intensive, but I think 200mb could be cutting it short, although you should try and see either way.
No.542068
>>541191
It's on the list and IPFS is still in alpha, surely it will be added when possible since they seem to be open to the idea. Honestly though shouldn't you be using an underlying file system that handles this anyway like ZFS?
Either way I'm also looking forward to that issue being closed, that functionality should be a part of it.
>>541692
>>541828
I don't know if it would be for everyone but the daemon uses ~113MB of memory on my machine with the latest master and the latest version of Go. Prior to Go 1.6 it was using ~200; I don't know if it was the runtime improvements or just coincidental with changes made in IPFS.
No.542080
>>542068
IPFS 0.3.11, compiled with go 1.5.1 here.
My IPFS daemon starts at around 50MB memory, but ends up working up to around 200MB after some time has passed.
Actually, right now I've mirrored glop.me, so the difference likely lies there?
No.542086
>>542080
I'm not sure, I have a lot of content on my node and it used to go up to 200MB on 1.5 after a while but now it caps out in the 100's on 1.6. We're talking weeks of uptime too for both with some moderate downstream usage and high upstream usage.
They did improve the GC for Go 1.6 and said they're going to again for 1.7, maybe that's related.
Also I'm using the latest master which is version 0.4.0-dev so that could also be related as it's a big change in the IPFS codebase.
I do not recommend updating to 0.4 yet though, simply because they say not to on github. I guess because of the repo and network differences they're telling people to wait: right now .3 can talk to .3 and .4 can talk to .4, but there's no cross talk yet, so if you run .4 and try to grab content that's only on .3 then you won't be able to. In practice though I haven't had any issues myself; worst case scenario you tell the public gateway to grab the .3 content for you, then request again on .4 since the gateway will have it and be hosting for both networks, and then you're mirroring it for .4 after that.
Relevant link https://github.com/ipfs/go-ipfs/issues/2334#issuecomment-195046511
No.542130
>>541229
From what I understand even if you change small parts of the file and not the whole thing, those unedited parts live on. The file hash supposedly is a map to block hashes. Overlapping blocks could just be seeded from the original or other files. If I am wrong, please point it out!
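You can poke at that structure yourself; for a file added normally, the links under the root hash are the chunk hashes (the hash below is a placeholder):
ipfs object links QmSomeBigFileHash   # immediate child blocks of the root
ipfs refs -r QmSomeBigFileHash        # every block in the DAG, recursively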
No.542220
>>542068
I don't feel like having to back up my backup hard drive to change the file structure just to upload my animu to IPFS, or dropping it on my desktop and eating up my whole home partition.
No.542321
No.543138
>>499799
It gets worse
1) GITHUB IMPORTS. THEY REALLY USED THAT JOKE THING SERIOUSLY
2) MASSIVE OOP-INDUCED BRAIN DAMAGE IN DAG MANIPULATION CODE. REALLY, INSTEAD OF SIMPLE HASH->PAYLOAD MAP THEY USE SOME JAVA-STYLE MEMORY HOGGING PIECE OF SHIT
IPFS is a good thing, but its only working implementation is quite shitty.
No.543143
>>543138
3) (protocol fault) SEGMENTS ARE UNTYPED
4) BLOCKS DIRECTORY. IT'S LIKE THEY INTENTIONALLY DONE IT MOST WRONG WAY POSSIBLE (protip: replacing directory with 36218 files by directory with 36218 directories with 1 file is not beneficial under any file system)
5) NO FUCKING STATS
6) LEVELDB IS SHIT
No.543146
>>543143
7) "IPFS ADD" CLI WORKS WITH STDIN - IT'S GOOD. "IPFS ADD" RPC DOES NOT WORK WITH POST BODY - IT'S RETARDED
8) TIMEOUTS, DO THEY KNOW WHAT THEY ARE?
9) IPNS WORKS IN 10% CASES. OF WHICH, IN 99% IT REQUIRES MANUAL KILL/RESTARTS BECAUSE (8)
10) IT LAGS IN PLACES WHERE IT SHOULD NOT (LIKELY BECAUSE 2, 4 AND 6)
No.543151
>>543146
IPFS IMPLEMENTATION IS SO SHITTY, THAT I AM SURE I COULD DO IT BETTER. BUT I AM DEPRESSED AND BURNED OUT BY MY DAY JOB. SO I JUST WHINE ABOUT IT AT FORUM FOR 15 YEAR OLD KIDS
No.543158
So how's that i2p suppor-
Oh... nevermind...
No.543250
>>543146
>>543143
>>543138
So, should we give up on ipfs and use maidsafe instead?
No.543256
>>543250
Maidsafe has strong smell of overengineered vapourware.
IPFS is shit, but it's simple shit that works right now.
No.543283
>>543138
So good idea/blueprint, but shitty execution? They should hire competent programmers; it probably wouldn't take them this long to make the thing work as well.
No.543303
>>541368
I'd argue they're complementary. Zeronet is better for dynamic, mutable content while IPFS is better for archival and immutability. At least that's my naive understanding of both.
No.543309
>>543143
>5) NO FUCKING STATS
Outside of 'ipfs stats bw'?
>>543158
>Why doesn't your alpha software work with my alpha software?
>>543283
They should throw a Kikestarter together. They have the excitement going and they should ride the wave instead of letting it fizzle out by the time go-ipfs hits beta. Especially with the Winblows crowd. I heard it barely works on that.
No.543330
>>543309
Because it was promised and they still haven't implemented it.
No.543371
>>543143
Aren't they planning on replacing the block store and leveldb with something else?
>no stats
How do you mean? There are file statistics, datastore stats, and they plan to add limitations (bandwidth caps, disk limits, etc.), so eventually there should be traffic statistics as well. There may be more I don't know about too, but I don't look into that stuff, I just want an hourly bandwidth cap.
>>543146
Why timeout instead of trying forever? The whole idea is that things are supposed to be reachable always so why not keep trying until they are reached?
If I go to get a file or resolve an IPNS name I don't want to return after a timeout, I want the command to either succeed or block until it succeeds. Implementing your own timeout around this should be doable if you need one but outside of some kind of failure/abort state when would you even want to timeout? That makes sense for other protocols where high reliability isn't expected but that's not the intent for IPFS. Maybe it's silly of me to think that way though.
Also IPNS isn't even finished yet so I'd have to give it a pass if it's not working well right now. Names don't work 100% of the time now because only the owner can keep them alive; they're going to make it so other peers can keep your name alive without you being online, but that's not in yet. Once it is though it shouldn't ever fail, so there'd be no need to timeout on it unless you're not connected, which would probably return an error prior to making the call. I'm not sure though.
>>543151
image related
>>543158
Who's working on that anyway? Are i2p people working on the support or are IPFS people doing that? I don't know much about the work being done there.
>>543283
The good thing is they don't have to, anyone can make an implementation however they want as long as it conforms to the spec. People can hate on the official Go version all they want but they don't have to use it. If someone really thinks they can do better they totally could and people could use their version while interoperating with everyone else.
>>543330
To be fair a lot of stuff is promised and not implemented yet, that's very typical of pre-release software, time has to pass for people to actually write it.
>>543309
>Especially with the Winblows crowd. I heard it barely works on that.
Works fine on my machine. The only issue with Windows is that it doesn't have fuse so there's no way to mount IPFS as a drive. Everything else should work though. Someone wrote a separate program that mounts IPFS via Dokan but I have never once gotten Dokan to work with anything on any version of Windows. Maybe it works for some people but it doesn't work for me at all: it mounts, I can go into the directory, and then it crashes (the filesystem client, not IPFS). Probably has something to do with it being written in Delphi.
No.544275
>>543371
>stats
Well, I went full retard with that
>things are supposed to be reachable always
That is far from being the case for IPFS's design, which does not actively duplicate and spread data blocks - its design is closer to bittorrent (swarm downloading from seeds, data duplication is either explicit (ipfs pin) or opportunistic - seeding from cache) than to a distributed data store (where data blocks are treated the same way as DHT keys).
Even if we assume that the data itself is always available, it's still an absurdly strong assumption, as it would require that the client (from the IPFS application down to the physical network) never falls into an unexpected state which could hang forever.
>I want the command to either succeed or block until it succeeds
Are you sure? Even if it means blocking for 5 years? There is always some deadline, just sometimes it is implicit "until I get bored with it".
>but outside of some kind of failure/abort state when would you even want to timeout?
You wouldn't, but, on the other hand, you always want "some kind of failure/abort state" - a stochastic "it might return at any moment" state is a very shitty thing to work with. Especially if you want to make something high-availability.
>Implementing your own timeout around this should be doable
It is doable, but it's a feature you expect from anything that is not an ad-hoc bash script. Also "kill $(ps ax | grep 'ipfs resolve' | sed 's/ *\([0-9]*\).*$/\1/')" is not the most efficient implementation, but you cannot do better unless you do it inside the application.
On the other side, making "infinite blocking" is trivial - either by calling in a loop, or by setting timeout to 30 years.
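Fair points. For what it's worth, a userspace deadline doesn't need the ps/grep dance if coreutils timeout is available (the name below is a placeholder):
timeout 120 ipfs resolve /ipns/QmSomePeerID || echo "gave up after 120s"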
No.544535
Neat for thread archival, tried saving a dying thread with wget (with rewrite for ads -> localhost to remove annoying loading icon):
/ipfs/QmTAef5qs4EsALx59nEbXEd7nifoHzwsTWrpNtJapCW5Ue/tech/res/537009.html
No.545582
>>544275
>which does not actively duplicate and spread data blocks
That is a planned optional feature, and there is also a project by the same team, "filecoin", that will allow people to generate and spend a currency used for distributed storage. For example, you could pay me 1 to host your file for a day, then I could spend that to do the same with someone else, either directly or with a random set of peers. I'm interested to see what they do with it, but I'm planning on just turning on the distributed option myself; I have the space and network to work with and don't mind.
The optional "free" system is presumably going to work like freenet or perfectdark where it just distributes data to peers that allow holding random content. I think there are details on github but I don't remember.
I understand that forcing distribution is great for the network, but a lot of people dislike that stuff being on by default since it uses their disk and network on content they don't even know about; they could be unknowingly redistributing illegal content and they don't want that.
>Are you sure?
I mean that's the thing with this: even if it's not assured it will always be reachable, that's the intent, so if I'm designing something which utilizes IPFS I have that in mind. If I want timeouts and such I'd probably use another protocol; if I'm using IPFS I intend on maximizing the reliability of the content that's expected to be received. In some event where we rely on a critical object we may have no choice no matter what protocol we use: if your program needs a file to act on before it can continue, you'll either block or poll anyway.
Regardless I doubt it will remain that way, there must be plans to incorporate them later once everything is more finalized, get it working first then polish it up. I could be wrong though.
>You wouldn't, but, on the other hand, you always want "some kind of failure/abort state" - stochastic "it might return at any moment" state is very shitty thing to work with. Especially if you want to make something high-availability.
That's fair.
Maybe it's worth filing an issue about it to see what they think and if they'll fix it sooner rather than later.
I hope my English isn't too terrible this early in the morning.
>>544535
Nice. I wonder if you could make a distributed archive this way by doing hash only requests on 8ch a lot. So like you maintain the front page and maybe some thread index that points to a hash of its state before it 404s but you don't host the content yourself just the hashes to it. Then if anyone else archives a thread or file via ipfs it would be reachable.
Maybe not the best idea, but I kind of like the idea of an archive that only has threads that were manually chosen by other people to be saved and not just by the site owner.
At that point though I guess you'd just archive all the textual data and maybe the thumbnails while relying on other people in the network to host full images. Could be cool.
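The "hash only" part already exists as a flag, if I remember right (the filename is made up):
ipfs add -n -q some_full_image.png   # -n/--only-hash computes the hash without storing the data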
No.547377
>>544535
>/ipfs/QmTAef5qs4EsALx59nEbXEd7nifoHzwsTWrpNtJapCW5Ue/tech/res/537009.html
Why save homepage/frontpage and other bloat?
I've used this in the past to save without full-size images
wget -e robots=off -p -k http://8ch.net/tech/res/499795.html
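and then adding the grab afterwards is what turns it into an /ipfs/ link (the directory name is just whatever wget created):
ipfs add -r 8ch.net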
>>545582
>At that point though I guess you'd just archive all the textual data and maybethe thumbnails while relying on other people in the network to host full images. Could be cool.
You would still need to download full images to generate the "full" hash tree
No.547402
>>543371
ima i2p bro and i wanna do ipfs's stuff but am too busy obsessing about nntpchan and other autisms to be useful ;~;
No.547403
IPFS is and always will be a meme until it supports Tor and I2P.
No.547442
>>547377
>You would still need to download full images to generate the "full" hash tree
For sure but you wouldn't have to store it forever. A lot of archives do that today where they store images for some amount of time but they will 404 eventually. You could use the same system but not have to worry about storing it yourself permanently as long as someone else did, but you still get the benefit of it always being reachable even without storing it yourself.
>>547403
Can't you use it right now with Tor? I thought someone was working on i2p support, I didn't get a response on that earlier in the thread.
No.547454
>>547402
nntpchan with ipfs for the images
No.547756
>>547377
>Why save homepage/frontpage and other bloat?
Mainly laziness, saving original images is easiest with just plain domain restricted depth of 1
No.549065
>>544535
>/ipfs/QmTAef5qs4EsALx59nEbXEd7nifoHzwsTWrpNtJapCW5Ue/tech/res/537009.html
Neat.
No.549078
>>547454
sounds great, but how would I nuke CP?
No.549140
>>549078
IPFS does not spread content automatically like freenet, you only seed what you get yourself. So a blacklist of some sort probably
No.549153
>>549140
so to delete an attachment, just stop seeding it and remove the reference in the markup?
No.549154
No.549162
>>549153
yes, it's called "unpinning" in IPFS: if no node has the file pinned (or is temporarily hosting it as a result of only downloading it) the file will not be available
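In go-ipfs terms that's roughly (the hash is a placeholder):
ipfs pin rm QmTheAttachmentHash   # drop your pin
ipfs repo gc                      # let garbage collection reclaim the blocks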