
/tech/ - Technology


File: e001175adc3c5fc⋯.png (87.56 KB, 1092x512, 273:128, ipfsthread.png)

File: 957095eb5bec93f⋯.png (140.19 KB, 512x512, 1:1, ipfs-logo-icewithbg-512.png)

File: 1666469e81f661b⋯.webm (3.94 MB, 1920x1040, 24:13, ipfs webm 1.webm)



0.4.10 - 2017-06-27

>Ipfs 0.4.10 is a patch release that contains several exciting new features, bugfixes and general improvements, including new commands, easier corruption recovery, and a generally cleaner codebase.


>Add support for specifying the hash function in ipfs add

>Implement ipfs key {rm, rename}

>Implement ipfs shutdown command

>Implement ipfs pin update

>Implement ipfs pin verify

>Implemented experimental p2p commands

0.4.9 - 2017-04-30

>Ipfs 0.4.9 is a maintenance release that contains several useful bugfixes and improvements. Notably, ipfs add has gained the ability to select which CID version will be output.


>Add support for using CidV1 in 'ipfs add'

tl;dr for Beginners

>decentralized P2P network

>like torrenting, but instead of getting a .torrent file or magnet link that shares a pre-set group of files, you get a hash of the files which is searched for in the network and served automatically

>you can add files to the entire network with one line in the CLI or a drag-and-drop into the web interface

>HTTP gateways let you download any hash through your browser without running IPFS

>can stream video files in mpv or VLC (though it's not recommended unless the file has a lot of seeds)
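The gateway trick in the list above is just URL construction — a minimal sketch (the ipfs.io gateway and the hash are both taken from later in this thread; any public gateway works the same way):

```python
# Sketch only: builds a public-gateway URL for a content hash.
def gateway_url(cid: str, gateway: str = "https://ipfs.io") -> str:
    """Build an HTTP gateway URL for a content hash."""
    return f"{gateway}/ipfs/{cid}"

assert gateway_url("QmT45tFQo5DJ8m7VShLPecsTsGy1aSKBq4Pww8MfaMppK6") == \
    "https://ipfs.io/ipfs/QmT45tFQo5DJ8m7VShLPecsTsGy1aSKBq4Pww8MfaMppK6"
```

Anyone with a browser can fetch that URL; only the person who added the file needs to run IPFS.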

How it Works

When you add a file, its contents are cryptographically hashed and a merkle tree is created. These hashes are announced by the IPFS client to the nodes in the network. (The IPFS team often describes the network as a "Merkle forest.") Any user can request one of these hashes, and the nodes set up peer connections automatically. If two users share the same file, then both of them can seed it to a third person requesting the hash, as opposed to .torrent files/magnets, which require that both seeders use the same .torrent file.
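The core idea above — the address is derived from the content itself — can be sketched in a few lines (a simplification: real IPFS wraps the digest in a multihash/CID and chunks large files, but bare hex SHA-256 shows the principle):

```python
import hashlib

def content_address(data: bytes) -> str:
    # Simplified content addressing: the "name" of the data is its hash.
    return hashlib.sha256(data).hexdigest()

# Identical content yields the identical address, no matter who adds it --
# which is why two independent seeders of the same file can serve one request.
assert content_address(b"the same file") == content_address(b"the same file")
assert content_address(b"the same file") != content_address(b"a different file")
```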


>Is it safe?

It's about as safe as a torrent right now, ignoring the relative obscurity bonus. They are working on integration with Tor and I2P. Check out libp2p if you're curious.

>Is it fast?

Finding a seeder can take anywhere from a few seconds to a few minutes. It's slowly improving but still requires a fair bit of optimization work. Once the download starts, it's as fast as the peers can offer, just like a torrent.

>Is it a meme?

You be the judge.

It has implementations in Go (meant for desktop integration) and JavaScript (meant for browser/server integration) in active development that are functional right now. It has a bunch of side projects that build on it, and it divides important parts of its development (IPLD, libp2p, etc.) into separate projects that allow drop-in support for many existing technologies.

On the other hand, it's still alpha software with a small userbase and has poor network performance.

Websites of interest


Official IPFS HTTP gateway. Slap this in front of a hash and it will download a file from the network. Be warned that this gateway is slower than using the client and accepts DMCAs.


Pomf clone that utilizes IPFS. Currently 10MB limit.

Also hosts a gateway at gateway.glop.me which doesn't have any DMCA requests as far as I can tell.


IPFS index; has some links (add ipfs.io/ in front of a link to access it without installing IPFS)


>It's about as safe as a torrent right now

no it's not. bittorrent uses sha1, which was shattered and has been deprecated for years.


where's the release that doesn't crash routers by holding 1500 connections open



>his router cant handle >10^5 connections




I think they've fixed it.



>running 100% of your internet traffic through post-2013 pozzed hardware with proprietary firmware


If gateways make IPFS accessible to everyone, why isn't it more widely used?



because nobody uses it.



The client is badly optimized right now; it takes loads of RAM and CPU for what bittorrent clients can do without breaking a sweat. They haven't worked on optimization because the protocol is constantly changing.



>the protocol is constantly changing

this is also why nobody uses it

stop breaking fucking links every 2 weeks


Why did you make a new thread? The old one was fine >>771999


>stop breaking fucking links every 2 weeks

You have no clue what you're talking about. The only non-backwards-compatible change they made was in April 2016, when they released 0.4.0. Since then, all changes have been backwards compatible.



It had no image.



Launch it with

ipfs daemon --routing=dhtclient

to reduce the amount of connections it uses.

Additionally, some progress has been made limiting the extent of the problem, though the issue of closing connections doesn't appear to have been addressed yet.

Issues to watch on this subject:




If you're going to make a new thread, at least post some updates.

js-ipfs 0.26 Released


>Here are some of the highlights for this new js-ipfs release. There were plenty more bug fixes, tiny performance improvements, doc improvements and others all across the js-ipfs module ecosystem. A really BIG THANK YOU to everyone that has been contributing with code, tests, examples and also bug reports! They help us identify situations that we miss without tests.

New InterPlanetary Infrastructure

>You might have noticed some hiccups a couple of weeks ago. That was due to a revamp and improvement of our infrastructure that separated Bootstrapper nodes from Gateway nodes. We’ve now fixed that by ensuring that a js-ipfs node connects to all of them. More notes on https://github.com/ipfs/js-ipfs/issues/973 and https://github.com/ipfs/js-ipfs/pull/975. Thanks @lgierth for improving IPFS infra and for setting up all of those DNS websockets endpoints for js-ipfs to connect to :)

Now js-ipfs packs the IPFS Gateway as well

>You read it right! Now, js-ipfs packs the IPFS Gateway and launches it when you boot a daemon (jsipfs daemon). With this, you can use js-ipfs to access content in the browser just like you used to do in go-ipfs, or use js-ipfs as a complete solution to add content to the network and preview it without leaving JS land. It is great for tooling. This was an awesome contribution from @ya7ya and @harshjv who spent a lot of time adjusting and iterating on the implementation to make sure it would fit with the structure of js-ipfs, 👏🏽👏🏽👏🏽👏🏽.

Huge performance and memory improvement

>With reports such as https://github.com/ipfs/js-ipfs/issues/952, we started investigating what were the actual culprits for such memory waste that would lead the browser to crash. It turns out that there were two and we got one fixed. The two were:

>>browserify-aes - @dignifiedquire identified that there were a lot of Buffers being allocated in browserify-aes, the AES shim we use in the browser (this was only an issue in the browser) and promptly came up with a fix https://github.com/crypto-browserify/browserify-aes/pull/48 👏🏽👏🏽👏🏽👏🏽

>>WebRTC - WebRTC is really cpu+mem hungry and our combination of opening multiple connections without limits + the constant switch between transport and routing at the main thread, leads to some undesirable situations where the browser simply crashes for so much thrashing. We are actively working on this with Connection Closing.

>That said, situations such as https://github.com/ipfs/js-ipfs/issues/952 are now fixed. Happy file browser sharing! :)

Now git is also one of the IPLD supported formats by js-ipfs

>Now js-ipfs supports ipld-git! This is huge, it means that you can traverse through git objects using the same DAG API that you use for Ethereum, dag-pb and dag-cbor. This feature came in with an example, go check out how to traverse a git repo. 👏🏽👏🏽 to @magik6k for shipping this in record time.

The libp2p-webrtc-star multiaddrs have been fixed

>@diasdavid (me) and @lgierth had a good convo and reviewed a bunch of stuff over Coffee ☕️, it was great! During that chat, we figured out that libp2p-webrtc-star multiaddrs had been implemented incorrectly and figured out the migration path to the correct version.

>You can learn more what this endeavour involved here https://github.com/ipfs/js-ipfs/issues/981. Essentially, there are no more /libp2p-webrtc-star/dns4/star-signal.cloud.ipfs.team/wss, instead we use /dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star which signals the proper encapsulation you expect from a multiaddr.

New example showing how to stream video using hls.js

>@moshisushi developed a video streamer on top of js-ipfs and shared an example with us. You can now find that example as part of the examples set in this repo. Check https://github.com/ipfs/js-ipfs/tree/master/examples/browser-video-streaming, it is super cool 👏🏽👏🏽👏🏽👏🏽.

>HLS (Apple’s HTTP Live Streaming) is one of the several protocols currently available for adaptive bitrate streaming.

webcrypto-ossl was removed from the dependency tree

>We’ve purged webcrypto-ossl from the dependency tree. It was only used to generate RSA keys faster (significantly faster), but at the same time it caused a lot of hurdles by being a native dependency.

>There is an open issue on the Node.js project to expose the RSA key generation primitive; if you have a use case for it, please do share it in that thread. This would enable js-ipfs to use the native crypto module, have the same (or better) perf as webcrypto-ossl, and not have to deal with an external native dependency.

PubSub tutorial published

>@pgte published an amazing tutorial on how to use PubSub with js-ipfs and in the browser! Read it on the IPFS Blog https://blog.ipfs.io/29-js-ipfs-pubsub.


do ipfs still use the ipfs (((package))) app that

throws away the cryptographic integrity of git?






NixOS will have IPFS as well WEW


this is nice but i can do without the pedoshit links


Call us when it can use both i2p/tor. Until then, it's inferior to bittorrent for filesharing. Why? Because the only available clients (https://github.com/Agorise/c-ipfs isn't ready yet) are written in garbage-collected languages.



They're working on it. c-ipfs seems promising as well.



What pedo links?



Turn on IPv6 you dumb nigger. NAT is cancer.


Does IPFS need the equivalent of the Tor Browser?


My quick IPFS indexer seems to be working well; I'll try to make the database available over IPFS if it keeps working. Could anyone who knows how ES works download the 200gb of dumps from ipfs-search and run `grep --only-matching -P "[0-9A-Za-z]{46}"` on it? They're compressed/encrypted somehow.


Nah, it's non-anonymous right now anyway, you could just install a new chromium/whatever and limit it to localhost:8080 if you really want to.


Can the anon behind this comment on whether js-ipfs 0.26 is enough to get it started?




I'd rather put my hopes into the actually-existing ipfs ib made by another anon here over anime-pic-one-commit over there.


>ipfs add 60MB in three files

>router reboots

This is 16chan-tier software



Now is not the time for optimization, that comes later.

Also, see >>793346


File: 0bba07533f7cb3f⋯.png (536.4 KB, 1276x531, 1276:531, 1501285882315.png)

File: 1aa9ee532fabc18⋯.png (340.73 KB, 1214x595, 1214:595, 1505575841282.png)

"The age of men will return;

and they're not gonna get their computer-grid, self-driving car, nano-tech, panopticon in place fast enough..."




Why does IPFS idle at like 25% CPU usage and high RAM usage? It's not even seeding often or downloading anything.



How much later? They've known this shit is unusable for two years.


File: d9a40bd09bcdbff⋯.png (197.19 KB, 842x848, 421:424, 1494016389196.png)

what does it do better than zeronet



Better spec, better design, and better future-proofing.

Problem is, right now it still sends a fuckton of diagnostic information and is poorly optimized.

They're making progress, but it's not fast enough.



It's not a honeypot where every post you make on any site is tracked globally and anyone can insert arbitrary javascript (automatically run by all clients) (and javascript must be enabled completely at all times to view sites in the first place), so long as any content of any type is allowed for upload on any page where that content would appear, for one.



bittorrent is NOT secure over TOR



It's just as secure as any other protocol. Some clients might potentially leak your IP, but there is nothing inherently insecure about the protocol.


File: c8a0ec27ffbb3b4⋯.png (143.55 KB, 445x385, 89:77, 1429497660865.png)


>anyone can insert arbitrary javascript (automatically run by all clients)

That sounds bad. Are you sure? It sounds so bad that I don't know if I believe you.



Try it yourself. The steps are as follows:

- go to an arbitrary site

- upload whatever they allow, content is irrelevant

- you will find a local file that was created with the content you uploaded

- edit it to insert arbitrary content

simple as that, bypasses all sanitation attempts, etc.

There have been proofs of concept on 0chan in the very early days of zeronet, and this has never been addressed. The zeronet devs seem not to care about any of the components that make this possible.



sasuga redditware


This is all well and good, but the problem with the internet today is that it relies on centralized infrastructure: to gain access, one has to pay fees to people with the power to cut one off completely, if not control the content being served.





>>Is it a meme?

>You be the judge.

>It has implementations in Go (meant for desktop integration) and JavaScript (meant for browser/server integration) in active development that are functional right now. It has a bunch of side projects that build on it, and it divides important parts of its development (IPLD, libp2p, etc.) into separate projects that allow drop-in support for many existing technologies.

i have judged



Try upgrading from a Pentium 3.


Also centralized web servers were conceived because of a very important flaw with peer to peer infrastructure

>guy who owns file turns off machine

>file is gone



but you've got that wrong: if a centralized server shuts down, no one can ever access that file.



Did you ponder a bit before typing such a sloppy mess? Those centralized clusters of web servers can still represent a peer in a p2p infrastructure. Nothing forbids such a cluster from joining the swarm and sharing files. Compared to a distributed web, centralization offers very few benefits in return and is only still around because it offers more control (to the owners of the files and servers).



This is literally the same problem with web servers. Even your VPS is on an actual server. So if you're saying P2P (and self-hosting) has flaws well

>If the guy who owns AWS doesn't like you, no one can ever access that file.





none of you are wrong but you've all missed his point



What point? That you can't get a file if the only peer in the world that has it turns off his PC? Well no shit, captain obvious, but that isn't a problem with the p2p infrastructure; that's a problem of people not giving enough of a shit to seed that file. Maybe Filecoin is a better solution to that, but then again, who knows what will happen in the future.


File: 06a287e663c5599⋯.png (54.21 KB, 557x380, 557:380, Screenshot from 2017-09-19….png)

File: 822c3d9bb8c8cb8⋯.jpg (366.2 KB, 1024x768, 4:3, a-programmer-describes-how….jpg)



jesus christ OP I just wanna download Initial D and DBZ and you're linking to CP.



you talk too much man


File: 8c1d4c2f0af62f8⋯.jpg (38.63 KB, 374x374, 1:1, D8CRtMS.jpg)

can anyone explain how i can use p2p in a website? i have an idea of how i want to use it for a chan and other ways, but i dont understand how i would go about it.



are you asking for an entry level explanation of how it works?



no, how would you apply p2p to a traditional website.

also what i dont like about ipfs is that it's almost impossible to have any privacy or remove content.



>no, how would you apply p2p to a traditional website.

decentralized hosting, I'm not sure I understand the question?



im talking about the code, how would you apply p2p.



that depends on the type of site anon, come on



well, let me make a mockup.


File: a0523f9e866e2de⋯.png (510.18 KB, 2354x946, 107:43, 643d.png)


I want to make a chan where users can create their own boards on the host site, and every thread is hosted by the users through p2p; the more users, the more relevance and the faster the thread loads for everyone in the thread. after a certain number of posts the thread will be removed and flushed out of everyone's computer.

the features will be unlimited file/video sizes and reasonably lengthy text limits.


File: d552e926579b143⋯.png (33.33 KB, 1383x163, 1383:163, zn.png)



that's almost exactly the same as IPFS chan



sorry for asking to be spoonfed, but do you mind showing me a bug about this? I can't find it.

From what I can tell, you can't just arbitrarily change 0chan to post whatever file you want.

This isn't related to ipfs, so I'll sage



Anon's just havin' a giggle, go ahead and click it.



Unoptimized, probably the DHT.


They're working on other stuff right now, they got a lot of money from filecoin ICO so we should be seeing some progress pretty soon. There's also a C implementation in the works.


IPFS can run on any transport (hyperboria), which can run over any link (ronja)


You cache it automatically when you download it.


Tor integration is in the works for privacy


>create keypair

>use this as board name

>sign admin keys

>they sign lists of posts to filter

>posts which do not follow specifications (eg too long text) are filtered by clients

>threads which are too old are filtered by clients
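The last two greentext steps can be sketched as client-side filtering (the length and age limits below are made-up placeholders, not from any real board spec):

```python
# Client-side moderation sketch for the scheme above. Every client applies
# the board rules locally; nothing is deleted server-side because there is
# no server. MAX_TEXT_LEN and MAX_THREAD_AGE are hypothetical values.
MAX_TEXT_LEN = 2000          # assumed per-post text limit
MAX_THREAD_AGE = 7 * 86400   # assumed thread lifetime, in seconds

def visible(post: dict, now: float, filtered_ids: set) -> bool:
    if post["id"] in filtered_ids:              # on an admin-signed filter list
        return False
    if len(post["text"]) > MAX_TEXT_LEN:        # violates the posted spec
        return False
    if now - post["created"] > MAX_THREAD_AGE:  # thread too old
        return False
    return True
```

The design choice here is that filtering is advisory: each client enforces the spec for itself, so a rogue client can ignore it, but compliant clients simply never display non-conforming posts.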



You're mostly just describing smugboard >>785171, it's very similar to what you're proposing.



It's not a bug; it's how it is designed. The poster can arbitrarily change the content of what they have posted. This includes changing the media type, and is simply a matter of editing the content that is stored locally. When someone requests the file, your doctored copy is distributed because you are the poster of the file, and everything you "upload" (e.g. text posts, actual document attachments, etc.) is handled this way.

You can even see the instructions on "how to modify a zeronet site" here:


as comments you post to a site are not handled in any special way compared to anything else. It's also why you need an ID to post anything, and why your ID can be used to track everything you say across all sites with a simple grep: to enable modifying the content (which is not differentiable from a site, up to a point) by signing a more recent copy of it.
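The flaw described above boils down to: the poster holds the key that validates their own content, so a doctored replacement verifies just as well as the original. A toy sketch (HMAC stands in for zeronet's real public-key signatures; the file contents are invented):

```python
import hashlib
import hmac

# HMAC is a stand-in for real public-key signatures; the point is only
# that the *poster* controls the key used to validate their content.
def sign(key: bytes, content: bytes) -> bytes:
    return hmac.new(key, content, hashlib.sha256).digest()

def valid(key: bytes, content: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, content), sig)

poster_key = b"poster-owned-key"
original = b"<img src=cat.png>"
assert valid(poster_key, original, sign(poster_key, original))

# The flaw: the poster can edit the local file and re-sign it, and the
# doctored copy is exactly as "valid" to every peer as the original was.
doctored = b"<script>evil()</script>"
assert valid(poster_key, doctored, sign(poster_key, doctored))
```

Contrast with content addressing, where changing one byte changes the address, so a doctored copy simply isn't the thing that was requested.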



wew, it's worse than I thought


File: 945a274358a52c1⋯.png (83.63 KB, 300x300, 1:1, 1397153602672.png)


Why aren't files hashed? IPFS gets this right, why is there no network-level guarantee that files haven't been altered?



but Tor works easily on a P3, why is IPFS special?



IPFS is still in alpha (not optimized yet) and has the overhead of a complete p2p system (routing). Tor is much simpler to implement since not every peer is a contributing node: a large number of peers connect to a limited number of fast nodes. In IPFS, every peer is also a node. This is the difference between Tor's decentralized network approach and IPFS's distributed network approach.



IPFS uses a distributed naming system (IPNS) to point to the latest version, as well as static pointers (ipfs-based addresses) to point to specific files. This enables tracking the latest version (i.e. the ability to update content) while still guaranteeing there was no tampering by the controller. Zeronet doesn't seem to care about such guarantees: all that matters to them is the ability to update the content. Similarly, the zeronet folks don't give a shit about security (for the longest time, and it might still be the case, they had been running very old versions of various libs, including crypto libs, with unaddressed CVEs, for example). You can say "their threat model is different", but at this point they disregard secops 101.
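A toy model of that split between mutable naming and immutable content (plain dicts stand in for the DHT; real IPNS records are additionally signed and replicated):

```python
import hashlib

# Plain dicts stand in for the content store and the IPNS record table.
store = {}   # immutable, content-addressed: hash -> bytes
ipns = {}    # mutable: name -> latest content hash

def add(data: bytes) -> str:
    h = hashlib.sha256(data).hexdigest()
    store[h] = data
    return h

def publish(name: str, content_hash: str) -> None:
    ipns[name] = content_hash

v1 = add(b"site v1")
v2 = add(b"site v2")
publish("mysite", v1)
publish("mysite", v2)

assert store[v1] == b"site v1"              # /ipfs/<v1> still serves the old version
assert store[ipns["mysite"]] == b"site v2"  # /ipns/mysite tracks the latest
```

Updating the name never rewrites history: old static pointers keep resolving to exactly the bytes they always did.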


Release candidates are out for go-ipfs 0.4.11. If you want to try them out, check out the download page: https://dist.ipfs.io/go-ipfs

If you have trouble with IPFS using way too much bandwidth (especially during add), memory leaks, or running out of file descriptors, you may want to make the jump as soon as possible. This version includes prototypes for a lot of new features designed to improve performance all around.



So if I use something like IPFS to host a website, does that mean that I don't have to fuck with things like domain name registration?



Technically yes, but in practice you will still need a way to register a user-friendly name, because people can't recall localhost:8080/ipns/LEuori324n2klAJFieow. But there's a way to add a friendly name to the ipns system (google around, I don't recall the exact method), which lets people use localhost:8080/ipns/your.address.name instead, so that's an option. Other than that, all kinds of systems can leverage the likes of namecoin if you're so inclined.



Eh, I actually prefer the hash method. Keeps things a little more comfy.

>In the future, we may have ipns entries work as a git commit chain, with each successive entry pointing back in time to other values.

That's really fucking cool though.




Here's a list of all the DNS alternatives the IPFS team could use; my guess is that they will use Filecoin, considering that it belongs to them.



The method is to register, via any normal DNS provider, a TXT record with the content dnslink="/ipns/<hash>", and it will work. So it's actually relying on the external DNS system.
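Extracting the path from such a TXT record value is a one-liner — a sketch (the hash in the assertion is a made-up placeholder, not a real record):

```python
# Sketch of dnslink TXT-record parsing; the hash below is a placeholder.
def parse_dnslink(txt: str):
    """Return the path from a dnslink TXT record value, or None."""
    prefix = "dnslink="
    return txt[len(prefix):] if txt.startswith(prefix) else None

assert parse_dnslink("dnslink=/ipns/QmExampleHashValue") == "/ipns/QmExampleHashValue"
assert parse_dnslink("v=spf1 include:example.com ~all") is None
```

A resolver would look up the domain's TXT records, pick the one with the dnslink prefix, and then resolve the returned /ipns/ or /ipfs/ path as usual.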


I thought filecoin was just an incentive to store other people's files?


File: 7dae464e0cfa70c⋯.jpg (73.49 KB, 768x780, 64:65, shocking truth.jpg)

Can I limit the amount of space that IPFS uses or if I download and start running it will it just fill up my hard drive indefinitely?



>I thought filecoin was just an incentive to store other people's files?

It is. I think they're going to recommend using ethereum domains as IPFS has plans to be deeply integrated with it.


>Can I limit the amount of space that IPFS uses

IPFS doesn't download random things to your computer. It caches everything you view but by default it's capped at 10GB.



By default IPFS does not fetch anything on its own; it will only retain the data you added manually or via browsing.

If you want, you can run the daemon with `ipfs daemon --enable-gc`, which reads two values from your config: one is a timer and the other is a storage limit. By default I think they're 1 hour and 10GB, meaning a garbage collection routine runs either when you hit 10GB of garbage or when 1 hour has passed. What it considers garbage is anything that's not "pinned"; if you don't want something treated as garbage, you pin it.

Someone made an issue recently that I agree with: there should be an option for a minimum amount of data to keep. Right now garbage collection deletes ALL garbage, but it would be nice if you could set it to keep x GB's worth of non-pinned content at any one time.
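The GC behavior described above — once the limit is hit, everything unpinned goes — can be sketched like this (a simplification of go-ipfs's actual repo GC, which also runs on a timer):

```python
def collect_garbage(blocks: dict, pinned: set, max_bytes: int) -> dict:
    """If the cache exceeds the limit, drop every unpinned block.
    There is no partial collection, matching the behavior described above."""
    total = sum(len(data) for data in blocks.values())
    if total <= max_bytes:
        return dict(blocks)
    return {h: data for h, data in blocks.items() if h in pinned}

cache = {"h1": b"x" * 10, "h2": b"y" * 10, "h3": b"z" * 10}
assert collect_garbage(cache, {"h1"}, 100) == cache            # under limit: no-op
assert collect_garbage(cache, {"h1"}, 5) == {"h1": b"x" * 10}  # over: all garbage goes
```

The feature request amounts to replacing the all-or-nothing branch with one that evicts unpinned blocks only until the cache drops back under a configured floor.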



File: eceb197dec09787⋯.png (870.38 KB, 820x650, 82:65, manga_13set (1).png)


All 13 of the current "Manga Guides" series, in various formats.


File: 20b5e558d8cfc0c⋯.jpg (121.37 KB, 900x1200, 3:4, water between azn tits.jpg)


My mixtape.


Good music with a good video to go with it


Holy Nonsense

Also why does

32.00 MB / 54.44 MB [=======================================================================================================================>------------------------------------------------------------------------------------] 58.79% 0s20:13:33.012 ERROR commands/h: open /home/anon/.ipfs/blocks/GY/put-460657004: too many open files client.go:247
Error: open /home/anon/.ipfs/blocks/GY/put-460657004: too many open files

keep happening? Each of these I had to try adding several times.



Have you upgraded to 0.4.11 yet?


I have a question about implementation

Each file is divided into chunks, which are then hashed. These hashed chunks form the leaves of the merkle tree, whose parents are identified by HASH( HASH( left-child ) + HASH( right-child ) ). This continues until we reach the root node, the merkle root, whose hash uniquely identifies the file.

To give someone else the file, from computer S to computer T, S gives T the list of leaves and the merkle root. As I understand it, this is basically what a bittorrent magnet link does as well (along with tracker and other metadata). We know the leaves actually compose the merkle root by simply building the tree from its leaves and verifying that the new merkle root is the same as the provided one.

Computer T then asks around whether anyone else has the content of the leaves (by querying for the leaf hash), and verifies the content by hashing it upon download completion. Once it has everything (and verifies), it simply assembles the parts into the original file.

Assuming there is nothing wrong with my understanding above, I have a few questions:

How do we know the merkle root actually identifies the file we meant to get? ie if someone hits an IPNS endpoint, and an attacker intercepts and returns a malicious merkle root + leaves, now what? Is there anything to do about this, or is it just a case of "don't trust sites you don't know"?

When computer T starts requesting for leaf-content, is it requesting by querying on the hash of a leaf, or the merkle root? Bittorrent only requests parts from users that have the full file, which follows from the latter. If you request by the leaf hash instead, I'm imagining that the less-unique parts (like, say, a chunk of the file composed entirely of NULL bytes) could come from ANY file source, regardless of whether that user actually has the file you're looking for.

And extending that: with enough files stored globally, it would be possible to download files with a leaf list that NO ONE actually has, each leaf being found in some other file, composed in some particular fashion to create the requested file.
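The construction in the question can be sketched as a plain binary merkle tree (a simplification: IPFS actually builds a merkle DAG with a richer layout, but the verification idea is the same):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    """Hash each chunk, then pairwise-hash each level up to a single root."""
    level = [h(c) for c in chunks]
    while len(level) > 1:
        nxt = [h(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:        # odd node is carried up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]

# Receiver-side check: rebuild the tree from the advertised leaves and
# compare against the root that was handed over.
chunks = [b"chunk0", b"chunk1", b"chunk2"]
root = merkle_root(chunks)
assert merkle_root(chunks) == root
assert merkle_root([b"chunk0", b"EVIL!!", b"chunk2"]) != root
```

This is why a tampered leaf list is detectable, but a maliciously substituted *root* is not: the root is the trust anchor, so it has to arrive over something authenticated (a signed IPNS record, a link you already trust, etc.).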


Can you use IPFS in combination with a tor bridge with obfs encrypting files and still transfer files? If this worked would the person receiving the data still see your public IP?



>How do we know the merkle root actually identifies the file we meant to get?

So you mean how the data is verified to be correct once the client receives it. inb4 it isn't verified



>When computer T starts requesting for leaf-content, is it requesting by querying on the hash of a leaf, or the merkle root?

On the leaf. Each 256k block has its own DHT entry (hence why it's known to be so chatty). This also means that if you have a file and change one byte in it, then most of the file will be deduped by IPFS if you re-add it.
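The one-byte-change dedup claim is easy to see with fixed-size chunking (toy 8-byte chunks standing in for the 256k blocks):

```python
import hashlib

CHUNK = 8  # toy chunk size; go-ipfs uses fixed 256 KiB blocks by default

def chunk_hashes(data: bytes) -> list[str]:
    """Split data into fixed-size chunks and hash each one independently."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

old = chunk_hashes(b"aaaaaaaabbbbbbbbcccccccc")
new = chunk_hashes(b"aaaaaaaabbbbbbbXcccccccc")  # one byte flipped
shared = sum(1 for x, y in zip(old, new) if x == y)
assert shared == 2  # only the changed block gets a new hash; the rest dedupe
```

Since blocks are addressed by their own hashes, the two unchanged blocks are stored and announced exactly once, and can be fetched from anyone holding either version of the file.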


>if someone hits an IPNS endpoint, and an attacker intercepts and returns a malicious merkle root + leaves, now what?

My understanding is that IPNS entries are signed with the publisher's private key, so that's not an issue. There is a problem where a malicious node could return an old entry, but that's the reason each entry is stored in a fuck-ton of DHT nodes. Which is also the reason it takes so long to resolve names: it doesn't just take the first resolution it can get.



>On the leaf.

So what I suggested then, that a file no one has could be generated by the network given a list of leaves by retrieving them from other files, would hold? I suppose that's not anything special, except that the granularity of chunks is bigger than, say, 1 bit. But to confirm my understanding: is this true?



With bitswap, it DOES download random things, kinda. You swap fragments among peers from random content.



It's hash-based addressing: if the chunks that make up the file exist in other files, they are exactly as valid in the requested file as they are in those other files. That is, you request the hash, and provenance is a meaningless concept: you can think of it as two completely different kinds of data (the actual chunks, and the file descriptors, which are merkle graphs).



> ie if someone hits an IPNS endpoint, and an attacker intercepts and returns a malicious merkle root + leaves, now what?

IPNS names published via DNSLink are still resolved through DNS, so short of someone pwning the authoritative nameserver for a domain, you're looking at a hijacked local resolver, which you can defeat via VPN.


File: ef73c99feafef30⋯.jpg (57.52 KB, 261x287, 261:287, 1385077398214.jpg)



I am requesting that people please prefix their hashes with "/ipfs/" when posting, so that the browser addon detects them and anchors them; this way people who have it can just click them.

Like this





updated my porn folder again




direct link since ipns is buggy



alright, my first try at this: it's Lovecraft's "Beyond the Wall of Sleep"



>merkel tree

/pol/ shoop incoming




Nobody is interested in the contents of your spank folder, you degenerate.



but someone might be


File: 047afe01d717493⋯.jpg (38.18 KB, 640x649, 640:649, CRjSbwjVEAAuScv.jpg large.jpg)


And here the necronomicon


could someone tell me if it works alright?



You can check yourself by accessing a file through the gateway, i.e. https://ipfs.io/ipfs/QmT45tFQo5DJ8m7VShLPecsTsGy1aSKBq4Pww8MfaMppK6

If you can see it there, everyone can find it.



>ipns is buggy

How do you mean?



ipns forgets what it's linked to after 12 hours and is extremely slow



>Right now if you are Catalonian you can go and look up where to vote for the independence of your country via an encrypted and fully distributed web running on experimental, unfinished free and open source software maintained by a global network of volunteers, which is the only way to do it since the government censored the original website and is sending the cops after everyone who tries to mirror it normally. The future is now, motherfuckers.



Right near the top of the page linked by OP is the following:

gathered links. mostly from IPFS generals on 8ch/tech/.

* anon's qt ipfs page (this site)


* the best cp archive i've ever seen


I ain't clicking it, regardless of what it may hold. Probably cartoon ponies teaching how to circuit-probe.



>And here the necronomicon

>could someone tell me if it works alright?

No one can *tell* you that magick works, Anon. You have to say the incantation yourself. If you see results that seem magical to *you* then the magick has worked for *your* belief. Magick is not science.



Disclaimer: I think this is all right but I'm not sure since things are changing rapidly, correct me if I'm wrong

Nodes don't forget; the record expires. By default your daemon is supposed to re-publish anything you published every 12 hours, and records expire every 24.


Just make sure you actually publish after starting the daemon, and again any time the daemon restarts. I think this will be automated later, and other peers will keep the record alive, but it's not like that yet.
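The republish/expiry interplay described above, sketched with the default 12h/24h figures:

```python
# Default figures from the post above: republish every 12 h, expire at 24 h.
REPUBLISH_INTERVAL = 12 * 3600
RECORD_LIFETIME = 24 * 3600

def record_valid(published_at: float, now: float) -> bool:
    """An IPNS record resolves only while it is younger than its lifetime."""
    return now - published_at < RECORD_LIFETIME

assert record_valid(0.0, 23 * 3600)        # still resolvable
assert not record_valid(0.0, 25 * 3600)    # daemon down for a day: record gone
# Each republish resets the clock, so a running daemon keeps the name alive:
assert record_valid(REPUBLISH_INTERVAL, 30 * 3600)
```

Since the republish interval is half the lifetime, a daemon can miss one whole cycle before the name actually stops resolving.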


It's Rick Astley's Never Gonna Give You Up


Are you still hosting? I managed to get only 25MB's.



If magic only exists based on personal perception then magic cannot affect other people in any way, since their perception differs from yours.



yeah I forgot to turn it on, try again.


IPFS v0.4.11 is out

This looks like a large update. Everybody should upgrade.

>You will now be able to configure ipfs datastore to use flatfs, leveldb, badger, an in-memory datastore, and more to suit your needs

>The concept of 'Bitswap Sessions' allows bitswap to associate requests for different blocks to the same underlying session, and from that infer better ways of requesting that data

>As nodes update to this and future versions, expect to see idle bandwidth usage on the ipfs network go down noticeably

>Users who previously received "too many open files" errors should see this much less often in 0.4.11

>A memory leak in the DHT was identified and fixed and now memory usage appears to be stable over time

>In an effort to provide a more ubiquitous p2p mesh, we have implemented a relay mechanism that allows willing peers to relay traffic for other peers who might not otherwise be able to communicate with each other

>we have come up with a system that allows users to install plugins to the vanilla ipfs daemon that augment its capabilities (git and ethereum plugins are currently available)

>In the future, we will be adding plugins for other things like datastore backends and specialized libp2p network transports

>we've switched the ipfs/go-ipfs docker image from a musl base to a glibc base




>going from muslc to botnetc

Hmm. As long as you are compiling it yourself still......



2 weeks old my man.



>It's about as safe as a torrent right now


>no it's not. bittorrent uses sha1 which was shattered and has been deprecated decades ago.

Data on the attack. https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html

For the time being, it's still not considered a realistic scenario because of the amount of effort involved, even if it's a possible one. Either way, they've added SHA256 to the BitTorrent v2 specifications.
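Worth noting that IPFS isn't locked to one algorithm the way BitTorrent v1 is: per the 0.4.10 changelog at the top of the thread, `ipfs add` accepts a multihash flag (which algorithms are available depends on your build):

```shell
ipfs add --hash sha2-256 file.bin      # the default
ipfs add --hash blake2b-256 file.bin   # if your build ships it
```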




>Either way, they've added SHA256 to the BitTorrent v2 specifications.

Fucking finally.



It was released the 27th. You're thinking of the release candidate.



>not using SHA2-512/256 or BLAKE2b-256

That was either a retarded decision or a (((perfectly planned))) decision. Why would they slow down the entire network to use an algorithm that isn't even recommended anymore due to the prevalent security threats? And don't give me any muh hardware acceleration crap. The only x86 CPUs that have hw acceleration are newer Intel Atoms and AMD Zen, which account for a minuscule percentage of torrent users.



It was (((perfectly planned))) considering the (((endurance international group))) owns bittorrent now.



>And don't give me any muh hardware acceleration crap.

Those are exactly the excuses I was reading in the issue thread on the subject. It sounded stupid to me too.

>not using SHA2-512/256 or BLAKE2b-256

BitTorrent is an open standard. It's up to the community to implement. If the community got together and decided to implement a modified protocol, the recommended specs would necessarily have to change to reflect this if enough people were on board. That or we could fork and call it ButtTorrent, since we're being stubborn butts about it and everyone just calls it torrenting anyway.


File: 0a3de96b2eb93f9⋯.jpg (297.92 KB, 1224x1632, 3:4, J7NMPNR.jpg)

Would IPFS be a great choice for those who're looking for a way to have an AI back itself up to prevent another Tay?



Those titles were a joke, nigger


Do you think it will work in the outer space?




how would you prevent another tay?



>how would you prevent a large corporation from deleting its chatbot after we ask it to repeat 14/88 memes?

I don't think there's a solution to that other than do it yourself. If there's source code, you can rebuild it; if there isn't, tough luck. I don't see what IPFS has to do with this, other than maybe helping to host it or its databases.


To whomever posted QmVuqQudeX8dhPDL8SPZbngvBXHxHWiPPoYLGgBudM1LR5 and QmT45tFQo5DJ8m7VShLPecsTsGy1aSKBq4Pww8MfaMppK6 these stop part-way. Please continue to seed them.


File: 64b660b44a845f0⋯.png (180.44 KB, 500x500, 1:1, 1263746040872.png)

File: 3bd208a3981f553⋯.gif (2.86 MB, 480x480, 1:1, water_in_space.gif)

File: f7edc59750a823e⋯.png (99.5 KB, 958x574, 479:287, Puyo Mac.png)


Not the original poster, but I have a copy of those. I'm currently migrating from flatfs to badger; it's going to take a few hours because I have hundreds of GBs of shit in here and slow ass hard drives. My node should come online in like 20 hours or so since I just started.


What data should we send to/from space? I'm willing to send puzzle games and cute girls if they send me pictures of the moon and videos of them doing things in low gravity.




Okay I'm up now, badger seems fast as heck compared to flatfs.

I made a mistake earlier, I only have 25% of /ipfs/QmT45tFQo5DJ8m7VShLPecsTsGy1aSKBq4Pww8MfaMppK6

I knew I had 100% of /ipfs/QmVuqQudeX8dhPDL8SPZbngvBXHxHWiPPoYLGgBudM1LR5 though

OP should come online so I can finish the mirror.



Are there any benchmarks on datastore filesystems yet?



The benchmarks for the underlying systems should be representative enough. There are plenty online of people comparing leveldb with other things; badger itself competes with an improved version of leveldb (RocksDB), and the devs posted a lot of information about both of them in their post, along with comparison results.


There's probably no benchmark for the flatfs scheme the ipfs team made. I don't think it would make sense to bench it either: it's just flat files in directories, so the underlying filesystem and OS have a major impact on it.

Reminder that the interface for datastores is in place and currently being worked on, so you can implement the interface and use whichever datastore you want. Most (if not all) of the filestore (no-copy adds) work was done by a third party.



I suppose badger is a no brainer considering they're making it the default next patch, as far as I can tell.



It seems objectively better than leveldb in every way. Personally it's been much better than leveldb and flatfs on my machine, noticeably so in terms of speed: hashes went from taking about 2 seconds to be fetched on the public gateway to instant, and this is with hundreds of gigabytes of things in my datastore.



Surprising how much overhead there is in flatfs just from storing everything as 256kiB files. Makes me wonder how much faster git would be with an sqlite backend.
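For anyone wanting to try badger themselves, a sketch of the switch (the `badgerds` init profile and the standalone `ipfs-ds-convert` tool are assumed to be available in your version):

```shell
# Fresh repo straight onto badger:
ipfs init --profile=badgerds

# Existing repo: convert in place with the standalone tool.
# Expect hours on repos with hundreds of GB, as described above.
ipfs-ds-convert convert
```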


bump for interest



What are you interested in?



they meant usury



>thinking anyone will use this

>thinking it won't get outlawed



Many people do use this without even knowing, dumb fuck. Get out, candy ass piss ant; inform yourself before speaking. This isn't some inconvenient technology. On the contrary, it's very fast and improves speed over regular internet usage. This isn't even addressing the other benefits. BTFO.



Whoa, calm down there sis.


Outlawed or not it would be hard to prevent people from using it. The development team is coming up with strategies to use it even in restrictive environments.


At best you'd be able to prevent some part from working easily but since the plan is to be able to swap out components, you'd still be able to utilize it by getting around blocks. Things like cjdns and message passing will make it very difficult to prevent data access.


Version 0.4.12 has a release candidate out now. Here's part of the changelog:

>Ipfs 0.4.12 brings with it many important fixes for the huge spike in network size we've seen this past month. These changes include the Connection Manager, faster batching in ipfs add, libp2p fixes that reduce CPU usage, and a bunch of new documentation.

>The most critical change is the 'Connection Manager': it allows an ipfs node to maintain a limited set of connections to other peers in the network. By default (and with no config changes required by the user), ipfs nodes will now try to maintain between 600 and 900 open connections. These limits are still likely higher than needed, and future releases may lower the default recommendation, but for now we want to make changes gradually. The rationale for this selection of numbers is as follows:

>>The DHT routing table for a large network may rise to around 400 peers

>>Bitswap connections tend to be separate from the DHT

>>PubSub connections also generally are another distinct set of peers (including js-ipfs nodes)

>Because of this, we selected 600 as a 'LowWater' number, and 900 as a 'HighWater' number to avoid having to clear out connections too frequently. You can configure different numbers as you see fit via the Swarm.ConnMgr field in your ipfs config file. See here for more details.

>Disk utilization during ipfs add has been optimized for large files by doing batch writes in parallel. Previously, when adding a large file, users might have noticed that the add progressed by about 8MB at a time, with brief pauses in between. This was caused by quickly filling up the batch, then blocking while it was writing to disk. We now write to disk in the background while continuing to add the remainder of the file.

>Other changes in this release have noticeably reduced memory consumption and CPU usage. This was done by optimising some frequently called functions in libp2p that were expensive in terms of both CPU usage and memory allocations. We also lowered the yamux accept buffer sizes which were raised over a year ago to combat a separate bug that has since been fixed.

>And finally, thank you to everyone who filed bugs, tested out the release candidates, filed pull requests, and contributed in any other way to this release!



The final version of 0.4.12 was released yesterday. Now that it won't spam my network as much I might host some content. But the real question is: is my bandwidth better spent seeding the torrents, or seeding them on IPFS?



>torrents or IPFS

Do you think it's likely you'd max out with both running? I tend to have my torrent and ipfs daemons running simultaneously, if I get bittorrent traffic it's usually just a burst for an hour or a very slow trickle to someone on the other side of the world. Same with IPFS, most of the time it's idle, and I have a lot of content shared on both + other networks.

Right now I have both running + soulseek.



>communism: the protocol



A global network is something I'm okay with my computer contributing to. It's robot communism at best. The costs and benefits of a decentralized and/or distributed system are more appealing than the faults associated with a centralized system, in my opinion.

The details of IPFS itself (while not new) seem especially nice, things like immutability, content addressing, and the inherent lack of trust associated with P2P systems which encourages better validation and security practices at a network level.

Things like pinning services (essentially CDNs), private networks, and real-time dynamic content via pubsub, allow for some opportunity for capitalists as well.

I think things like Freenet which force you to share the load are more communistic, with IPFS you're only ever sharing what you yourself want to share.


Wew, page 9. Anyway...

Charlie Manson Superstar (ogv)



File: c866b4ca831775f⋯.jpg (10.78 KB, 372x268, 93:67, noire_yamero.jpg)




What a quality argument. It's good to know that the best and brightest are still here.


go-ipfs 0.4.13 is out. Judging by how quick it was, you should probably download it immediately if you're on 0.4.12 already.

>Ipfs 0.4.13 is a patch release that fixes two high priority issues that were discovered in the 0.4.12 release.


>Fix periodic bitswap deadlock (ipfs/go-ipfs#4386)

<If a session's context gets canceled during an isInterested request, bitswap (all of bitswap) can deadlock.

>Fix badgerds crash on startup (ipfs/go-ipfs#4384)

<After changing the datastore to badgerds you can no longer start a daemon without a panic.


File: 748518956b7769c⋯.jpg (9.02 KB, 479x163, 479:163, LfZpouS_d.jpg)


Because you aren't contributing.


File: a5e79f0306adaf5⋯.jpg (23.43 KB, 400x300, 4:3, Charlie Manson Superstar.jpg)


Cool intro graphic.

Friendly reminder to the thread

ipfs add -w "Charlie Manson Superstar.ogv"


ipfs files mkdir /tmp
ipfs files cp /ipfs/QmdSCNSSdpS3j6vudHR87FChHHus4jsgMKGMKLP4g86tRM "/tmp/Charlie Manson Superstar.ogv"
ipfs files stat /tmp
ipfs files rm -r /tmp


You don't need to fetch the file to craft that wrapped hash with the latter method either, and you can always get the base hash back out of it if you need it bare.

ipfs ls /ipfs/QmcaQifUM8ixuERUXe4fXX89hCAhgKK8MGFPomSEcZgn2C

I wonder if this linkifies with the extension



>pushing a release to fix an experimental feature



oh hi i hrd thr ws cp?




Nothing very interesting there.


File: d1915fa375efa98⋯.jpg (99.15 KB, 1280x720, 16:9, animegataris.jpg)

Animegataris. Episodes 1 to 10.




Is your daemon running? I can't access this.

also prefixed for the browser addon /ipfs/QmbNZZpQRThNurXNPhcazA2Gw52FCC3SxChiC3T6GiMB3e


Saving the thread while we wait for a new version, so I might as well ask: What is the difference between pin and add? I'm still not really clear on that.


no offense guys but.. why care about this shit? I can't use it through tor or i2p so it's just the same as torrents: SEND ME A DMCA PLZ! I want 3 strikes and my net cut off!



Pin tells the GC not to reap the blocks until you unpin them.




Reminder that by default, adding something pins it, if you don't want that use

ipfs add --pin=false
and whatever you just added will be reaped on the next GC, if you don't want it to be reaped, pin it with
ipfs pin add *hash*
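To see and act on that distinction, continuing from the commands above:

```shell
ipfs pin ls --type=recursive   # what the GC will keep
ipfs refs local                # every block currently cached, pinned or not
ipfs repo gc                   # reap everything unpinned right now
```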


So when are we expecting for DHT to work in js-ipfs, so we can properly run it on websites?

At the moment, running a node on a webpage only allows you to retrieve content that the default gateways seem to have stored.



Would it be possible to just continuously reannounce to keep the files cached in the gateways? It's a shitty solution but it might work as a stopgap until js-ipfs is more complete.





You are the ultimate brainlet. This isn't just for pirating shit, it's also for censorship circumvention and freedum.

Where is my in-page IPFS without gateways? Once we have that it will be a new age of Internet prosperity. Smugboard can finally replace this crumbling shell of an imageboard.



>Where is my in-page IPFS without gateways?

Eventually. It seems js-ipfs doesn't yet have proper routing or something, so it can't connect to the DHT and/or connect to nodes it hasn't already connected to. This limits js-ipfs to the nodes on the bootstrap list.

I was thinking it'd be fine to just use go-ipfs and have people download a client (which runs the daemon in the background), but seems like everyone these days wants to do everything from within the browser. Thus, we wait for js-ipfs to reach a usable state.



>Thus, we wait for js-ipfs to reach a usable state.

Yes, don't wanna blow our load before making it perfectly transparent for normies. They should use those filecoin shekels to hire like 20 more devs though, is this not the most important development in Internet history since social media? Where are the buzzwords, the hype? Oh right, this will kill CDNs and file hosting sites (jewtube etc) among other things.



>They should use those filecoin shekels to hire like 20 more devs though

I agree. Maybe I'd see it if I really analyzed how many people are making pull requests, but it seems like things are moving the same speed they always did. Granted the big things to tackle moving forward are executive decisions about how to implement complicated shit like bitswap, but you can pay people to help with bug fixing while you approach conceptual solutions. Filecoin can't live without IPFS and vice versa. It's very much a yin-yang thing. I'd like to see some serious muscle put into shipping out a 1.0 before Filecoin even hits open beta. God forbid they come out of the gate with critical bugs or scaling issues.

Another nice thing to see would be hiring someone to help develop community projects. Imagine a guy who knows the software back to front helping out all the little guys in that IPFS Shipyard, especially things that could work as building blocks for bigger projects.


File: 28be3b99e2c56ec⋯.webm (11.92 MB, 800x450, 16:9, IPFS Alpha _ Why We Must ….webm)


Bumping a thread on a slow board, with 15 pages of threads, after only 4 hours between the last bump, is bad form. Please don't do that.


I didn't want to just complain about wasting posts while wasting a post myself, so I'll give my input on this.

I agree that this is extremely important, but important things aren't always exciting. IPFS enables us to reliably host data on our own, with all kinds of redundancy, distribution, censorship resistance, and a nice promise of practical permanence (as long as someone has the data, anyone else can access it using the original hash, forever; no dead links). All of these things are indeed important, but it's not exactly exciting; for some of us it's almost frustrating since we sit here and say "it should have always been this way...". With that in mind I can understand why there isn't much noticeable "hype"; it's mostly silent experimentation and adoption, which seems fine to me. I don't think IPFS needs any kind of evangelizing, especially not right now. I think anyone that comes across it can easily come to their own conclusion on whether it's worth using or not, and it will grow naturally regardless of its public image, like most good technologies. What comes to mind is BitTorrent: it's huge today, and not because of any kind of marketing bs or any promise of people making money off of it. Also, IPFS is a long way from being finished anyway.

That all being said it's not like there aren't people writing about IPFS and getting excited, the project creator does more than enough talks at conventions, schools, etc. and in my opinion is doing a good job explaining what it is, does, and why you should care about that.

The old threads used to link this a lot


Since 2015 I've seen more and more publications about it, so it's not like it's not happening; it's just not mainstream yet, and I don't think it should be until it's finished and all polished up. I think they have a good balance going on right now: it's too early to hype things up when they're unfinished and changing rapidly, and what's nice is it feels like the community understands this too. Imagine other projects that get popular too early: they get talked up a lot but don't actually meet their promises yet, people hear about it, they try it, and they get disappointed.



When https://github.com/ipfs/js-ipfs/pull/856 is merged. Which is taking fucking forever.



>it feels like the community understands this too

This is true, it's pretty clear to me everyone seems to know what's up regarding this. Nobody wants to push this mainstream until it's something we can stand behind.


Someone want to explain to me how the

ipfs p2p
subcommands work?

If I'm understanding correctly, you can basically listen in for TCP/UDP connections, and connect to them by resolving an IPFS Peer-ID, instead of a domain or IP address?

Looks pretty useful; has anyone actually made use of these features?



It's experimental, so documentation is light. This thread has an easy example demonstration: https://github.com/ipfs/go-ipfs/issues/3994

>To enable the command, you will need to run:

<ipfs config --json Experimental.Libp2pStreamMounting true

>And restart your daemon.

>Basic usage of ipfs p2p is as follows:

>First, open up a p2p listener on one ipfs node:

<p2p listener open p2p-test /ip4/

>This will create a libp2p protocol handler for a protocol named p2p-test and then open up a tcp socket locally on port 10101. Your application can now connect to that port to interact with the service.

>On the other ipfs node, connect to the listener on node A

<ipfsi 1 p2p stream dial $NODE_A_PEERID p2p-test /ip4/

>Now the ipfs nodes are connected to eachother. This command also created another tcp socket for the other end of your application to connect to. Once the other end connects to that socket, both sides of your application should be able to communicate, with their traffic being routed through libp2p.

>The easiest way to try this out is with netcat.

I'm not sure what he meant by "ipfsi 1" but I assume it's a typo, and you should just change that to "ipfs".



Seems more fleshed out for js-ipfs, for reasons that are only natural.




libp2p is a set of protocols and utilities, most of which IPFS utilizes. "ipfs p2p" is a subcommand for binding a normal application to your PeerID via ports, so it's possible to create a centralized website on top of IPFS, with decentralized addressing.



Yeah, nevermind, it's pretty much the same thing, it looks like.


Or at least, it's a part of it.


File: 235592ec8e55aa0⋯.png (104.32 KB, 1856x931, 1856:931, 34874493-0d306e5a-f791-11e….png)

Never realized this existed until now. It's almost exactly what I had in mind for shilling IPFS to normalfags, especially ones on Windows.


Anyone try it?


File: ce020cd9cf6c9c1⋯.gif (70.31 KB, 800x350, 16:7, licecap-trim.gif)


I've been using this one



File: 260226e4690708a⋯.png (15 KB, 827x562, 827:562, asdf.PNG)




They have to do that on the main gateway to avoid getting vanned for knowingly hosting illegal content. If you want to link it to other people who don't have ipfs, you can use someone else's gateway or, better yet, use your own with your own blacklist.


I never even looked into how to setup a blacklist for it though.

What's the content of that hash?



Found it on /ipfs/QmU5XsVwvJfTcCwqkK1SmTqDmXWSQWaTa7ZcVLY2PDxNxG/ipfs_links.html

It's labeled as "Programming books".


File: 4823163c2b4b10d⋯.png (2.97 MB, 2200x3276, 550:819, 1512610818106.png)


I picked a random one from here that doesn't use their DMCA list.



Remember to use your local gateway regardless.


Don't read Zed's books either.



Sipser's book isn't that bad. What are you getting at anon?



I had high hopes after the first category, which is spot on.

If your school teaches Java, it's basically a "Java school"; there is literally no lower circle of hell for a CS school, except maybe teaching C#.

Second category : Papadimitriou is a respected computer scientist; was this book that bad?

Third category : Sipser taught at MIT for 30 years.

Fourth category : comparing apples and oranges, I very much doubt that books on the left are used at the same level or for the same course as books on the right.

I will do my own list, cool idea.


File: f8cadcc3c7db9d1⋯.jpeg (844.28 KB, 2200x3276, 550:819, good_bad_schools.jpeg)



Seems to be up again.


File: 925085f3f22764e⋯.png (170.05 KB, 1289x690, 1289:690, sht.png)

I jumped ship at 0.4.8 after reading the CoC, looking better now.

How can I 'seed' content? Download and 'add' the directory later? I am not seeing an obvious answer.


Any pointers in how to use it? Couldn't find ditshick translations with it.



>How can I 'seed' content? Download and 'add' the directory later? I am not seeing an obvious answer.

Yeah. ipfs get will pin it temporarily to your datastore, but doing an ipfs add or ipfs pin after will hold it indefinitely. I'd recommend moving it to a permanent location on your drive and using add so you can use the --nocopy flag to save disk space.

>Any pointers in how to use it? Couldn't find ditshick translations with it.

I don't know how long IPFS Search has been up, so they might show up after another update or reannounce.
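As a concrete sketch of that get-then-add flow (the hash is one from this thread, the target path is arbitrary; note `--nocopy` needs the experimental filestore, and in some versions it implies raw leaves, which changes the resulting hash):

```shell
# Fetch the content and keep it somewhere permanent:
ipfs get QmVuqQudeX8dhPDL8SPZbngvBXHxHWiPPoYLGgBudM1LR5 -o ~/mirror/stuff

# Re-add without duplicating the data inside the datastore:
ipfs config --json Experimental.FilestoreEnabled true
ipfs add -r --nocopy ~/mirror/stuff
```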




Good schools use the K&R, ANSI version.


File: 7b5b27a0cbf87eb⋯.jpg (153.01 KB, 680x1020, 2:3, 65d.jpg)


File: f15b873d67be965⋯.png (161.21 KB, 649x473, 59:43, f15b873d67be96579c9a5310c6….png)


>Get IPFS all set up.

>Launch it

>Query the source material and get my client's key back


>Check the ipv6 address for the host

>Torsocks isn't working at all

>It's leaking my internal IP as well

Any good ways of covering this?



I think if you poke around .ipfs/config you can choose how you connect but I'm not really sure. However, you shouldn't be running IPFS through TOR in the first place.



>ipfs get will pin it temporarily

Stuff like this will probably confuse people, `get` doesn't pin anything, it just caches the content in the datastore. Pinning content just takes cached content and flags it so that the garbage collector does not reap it.

>save disk space

Another option is mounting ipfs and using the mfs if you want to reorganize the layout.



Make sure you're just listening on the tor interface.

Also look into this guy's fork so you can connect to Tor directly instead of proxying.


If you go that route you can put just that in your listener section.


nsa honeypot


What sets this project apart from freenet? From what I've read so far, it seems very similar.


File: 8cbffa28ec30e08⋯.png (165.62 KB, 4000x2250, 16:9, ip.waist.png)

File: ccc75b353c9ff9a⋯.png (3.51 MB, 4000x2250, 16:9, mdag.waist.png)

File: 28c0f7f49caecf7⋯.jpeg (51.83 KB, 800x450, 16:9, 41525062dc516766738e5bf5e….jpeg)


Some differences: IPFS does not distribute content by default like Freenet does; with IPFS you are only sharing data that you choose to share. Nothing prevents you from joining a pool that syncs things in a similar manner to Freenet, but it's not implicit by default.

The design is modular and meant to be swapped out to a users content, for example swapping out the routing system for an existing one such as cjdns, i2p, et al. or some combination of all of them. Contrast this with Freenet where what you get is what you get, if you don't think the encryption, routing, hashing system, etc. used by freenet is good enough you can't swap it out, with IPFS the end goal is to allow interoperability between all these different systems through a common interface and network bridges.

This is probably the most important aspect because it means changes both in the core project itself are possible but also people can utilize other systems and concepts too if they like, this prevents stagnation and hard upgrade paths for the network as a whole. The project creator pitches a "thin waist" like IP, you can design a ton of hardware beneath it and a ton of different protocols on top of it, IPFS is meant to act almost like a P2P IP stack where things like the transports can be modified without breaking things built on top of it or besides it. Their desire to bridge things is also interesting, pulling information from external things like git over http, torrent data over dht, and other things external to IPFS and their own bitswap is very interesting as it allows for easy migration and interop with existing systems, most of that is experimental right now but seeing it working at all is interesting, the opposite style is also useful, things like pushing data from IPFS over http.

Summing it all up, I'd say IPFS is not trying to compete with things like Freenet so much as trying to bind it and others together. I've seen people say IPFS intends to be a giant collection of glue code making all these networks and concepts work together, since there's no reason they can't: a very broad reach across multiple networks over multiple systems through one common data structure and interface.


File: f0745aa611390d4⋯.png (126.16 KB, 544x400, 34:25, stack.png)


I forgot this one.


We're watching you, non-believers~


File: 91cc7aa187edb8b⋯.jpg (71.98 KB, 500x375, 4:3, in google we trust.jpg)

File: 4ada700ce640094⋯.jpg (25.09 KB, 259x400, 259:400, 13450985_10153571016317019….jpg)


I believe.


What happens if someone uploads cp? Is it just on everyone's computers forever?



Why can't you people read the thread? It doesn't distribute implicitly. If someone hashes CP it's only on their node unless you download it too; if you download it by accident, just run garbage collection, or wait for garbage collection to run automatically if you have that set.



Indeed. And Lord Google shall pin it as the darkness, and the masses shall follow Their word.


As does all of /tech/, possibly even all of 8ch!



Oh, okay.

What a useless protocol, then.



You'd rather it be stuck on your node? What's the benefit?



I expected some protocol-layer encryption



There is, that doesn't have anything to do with automatic distribution across nodes. I'm not sure how you mixed those up.


/ipfs/QmYShhhD6j7vwL4SGUp2vNwHerRc7VU4Qm86ZJWvgqZn9G - ReleasetheMemo.pdf

/ipfs/QmbtL1GXPaCwUd1k8iBHynpGZUM6CwrLAuHR9LjZTtcUYB - Damore Complaint against Google, 1-8-2018.pdf

/ipfs/QmUUA8zudreYSSoGp8aujw1d6QJAjoan9W7MATF7udjrws - James_Mason_-_SIEGE_3rd_Edition.pdf


File: c5b3f5f35d0eacf⋯.png (578.1 KB, 1174x878, 587:439, transmetropolitan.png)

/ipfs/QmVzwWXRfeA8d7C9D3TZ9aM2P3SzxHfk3ALF2dghy5yrcn - full collection of Transmetropolitan



Someone should IPFS the books in here.


These people need it the most (especially the Rabin fingerprinting)

>>>/pdfs/ >>>/tdt/ >>>/zundel/





>@rht You're violating our code of conduct. This is not the first time.

>I'm not willing to tolerate being bullied in this way and I completely disagree with your repeated, baseless implications that people are being dishonest or trying to hide information. We make huge efforts to operate in transparent ways and to provide people with good information. There are plenty of ways to press for better information without falsely and baselessly accusing people of bad intentions. We want everyone to help us make sure that we're providing the best possible information in transparent ways, but you must also respect our need to maintain a Friendly, Harrassment-free Space.

Good luck with your faggotware.



>When faggotware is better than torrents



Stop trying to get more people to use it right now, it's not finished yet. It's good for us because we know how to use it. Promoting it to less tech savvy users isn't going to help anyone and looks like spam.

Rabin isn't even out of experimental yet and the only browser that supports IPFS as a valid protocol is Firefox 59. Wait until it's finished before telling other people to use it.


The guy telling him off (flyingzumwalt) doesn't do anything important, he schedules the meetings and is resident bitchboi on the forums. The real project maintainers seem alright.


Why don't you do it? If you post the hash I'll pin it.


File: 44a6977d8e08068⋯.jpg (97.74 KB, 500x328, 125:82, 1381672017631.jpg)



>CoC cancer strikes again


File: b2aa23017a3fdcb⋯.jpg (294.93 KB, 1016x1446, 508:723, modern-electronics.jpg)



https://github.com/ipfs/archives/issues/87 # doing a "sprint"

https://github.com/ipfs/archives/issues/136 # test result

https://github.com/ipfs/archives/issues/137 # more results

https://github.com/ipfs/archives/issues/142 # wanting people to refute him

So rabin fingerprinting is worse than Gzip, and is comparable to classic chunking.


File: 46623a95bab2f87⋯.png (199.49 KB, 1349x419, 1349:419, topnew.PNG)

Some schoolfags are developing a CryptoNote cryptocurrency with an IPFS node list.




>no ICO







Are there better alternatives than rabin and whatever IPFS does by default? Maybe someone should propose them. For one-time operations on big datasets, having a slow but space-efficient chunker/de-dupe system seems like it would be valuable.

I noticed a lot of those tests are using the default rabin size too. IPFS allows you to specify the min, max, and average chunk size; I don't know how effective setting these would be versus the default

ipfs add -s rabin-[min]-[avg]-[max]

This is a problem I always see in traditional filesystems: ones that offer de-dupe are usually very taxing on memory, and a lot of them just recommend using compression methods like lz4 instead.
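For reference, those knobs as `ipfs add` invocations (the sizes in bytes are purely illustrative, not a recommendation):

```shell
ipfs add big.tar                                      # default: fixed 256 KiB chunks
ipfs add --chunker=rabin big.tar                      # rabin with default sizes
ipfs add --chunker=rabin-16384-65536-131072 big.tar   # rabin-[min]-[avg]-[max]
```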





When the optimal rabin chunking parameters for txt, html, and epub are all different, it will be tough for IPFS to implement




>Stop trying to get more people to use it right now, it's not finished yet.

Fucking this.



fukken saved



cat > Documents/pasta/pickleshit.txt




What was it?


Come back online you fricken frick. I only got 78% and want to mirror this for you.



>What was it?

An autistic pickle rick post.


File: 6e9ec6db9032179⋯.png (268.78 KB, 720x1366, 360:683, threadshot-1518927504898-i….png)


File: 26bcdca75b18f71⋯.webm (1.02 MB, 426x240, 71:40, [loud] nigger rick.webm)


controlled opposition

report and ignore



Pink Flamingos (1972) - /ipfs/QmZUr7Si5aLU4ua6tmdHhpdXKWdvwAndccMtKWpd3HGSzn



File: 270f28a3a6353fb⋯.jpg (368.22 KB, 948x750, 158:125, Glen Carrig post.jpg)

What are some cool projects based on it, or things I can use it for?

just downloaded it and looked at their docs and tried their demo stuff, I'm up and running.


/ipfs/QmdNRFVKhieT8FpkxkppXMct7y9ZpLjZJpDKeZFjoeWnoz - Rules_of_the_Internet_2.0.png




/ipfs/QmQgfZp9wWSq1QdyxBeKF2YuH8cGpQ5vy6B6mib23dLQ37 bsd_coc_discussion.mbox


I think I'm the only one contributing to this thread now. Since I'm doing hacky shit atm because internet got shut off, I am unsure if it's uploaded or not. Enjoy it tho.



File: 03e6c1c5bdf92a2⋯.png (16.81 KB, 608x532, 8:7, 6453.png)

Fucking hell, how do i do this?

I want to change the IPFS directory from C drive to D since my c drive only has 5gb.

this shit will never be mainstream.






Buncha Touhou OVA's from an old IPFS thread. I should be able to seed for quite a while.




sorry but i dont use meme os's.



What's the problem? Make a new environment variable called 'IPFS_PATH' and set the value to 'D:\whatever\you\want'



because ipfs claims the environment variable is not defined.



Then don't use meme software, you faggot. Go back to /g/.


File: 508be2ea66ae4bd⋯.png (13.48 KB, 859x868, 859:868, Untitled.png)

File: 86273507c221856⋯.png (440.53 KB, 1600x1548, 400:387, works for me.png)


git gud

You don't even have it set in your screenshot so I don't know what you're trying to show off there.

Click New... for either user or system and add it.




Make sure to launch a new shell and check with echo %IPFS_PATH%, or just set it from the shell you're in with SET (temporary) or SETX (permanent). Windows is picky as shit when it comes to sourcing environment variables.
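On unix-likes the same idea is a one-liner (the repo path here is hypothetical); the SET/SETX commands above are the Windows cmd equivalents:

```shell
# Point ipfs at a non-default repo directory for this shell session.
# Path is hypothetical; any ipfs command run afterwards in this shell uses it.
export IPFS_PATH="$HOME/ipfs-repo"
echo "repo: $IPFS_PATH"
```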


How do I know if my uploads are being peered? No one is replying to my posts so I don't know.



>ipfs dht findprovs <key>... - Find peers in the DHT that can provide a specific value, given a key.

This will spit out peer IDs that are online and have/provide the full hash.



Is there a way to list all peering files in this manner?



I'd probably loop over the output of

ipfs pin ls --type=recursive

I'm not sure what your end goal is, though.
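A rough sketch of that loop, assuming a running daemon (`-q` prints bare hashes, `-n 10` caps the provider lookup); it skips cleanly when ipfs isn't installed:

```shell
# For each recursively pinned root hash, count peers that provide it.
# Assumes a running daemon; exits quietly when ipfs is not on the PATH.
if command -v ipfs >/dev/null 2>&1; then
  ipfs pin ls --type=recursive -q | while read -r hash; do
    providers=$(ipfs dht findprovs -n 10 "$hash" | wc -l)
    printf '%s\t%s providers\n' "$hash" "$providers"
  done
  ran=done
else
  ran=skipped
fi
```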



My end goal would be to make a tree-view of all files I've uploaded, with the peers peering each file


Dresden - The Call of the Blood (1996) (FLAC format, metadata double-checked)




I think the thread is more for talking about the protocol and implementations rather than sharing files like /v/'s "share threads". Otherwise I'd be posting content. For now I'm waiting to see the go-ipfs 0.4.14 release. It should have a lot of performance improvements from the developers, plus Go 1.10's own performance improvements on top.

/tech/ should have a share thread of their own, see how /v/ does it >>>/v/14459689

Just post tech related content through any means, BT, IPFS, DDL, etc.

It would be good for trading data but also talking about protocols and shit. The retroshare threads could be merged into it.


>No one is replying to my posts so I don't know.

More lurkers than posters on /tech/. I try to mirror hashes that are posted in these threads, sometimes people post neat things. Sometimes people post a hash and go offline forever though which is annoying.



>Sometimes people post a hash and go offline forever though which is annoying.

this fucking this



It would be great to see how many peers are sharing the same files, like with torrents


File: d3a8d54b32f5aa3⋯.png (59.21 KB, 1442x812, 103:58, Untitled.png)


That's basically what the findprovs command is. You could recurse a hash's contents and run it for each child hash to get a finer grain too: how many peers have this individual file vs how many have the whole directory/node-root.


I would start with those API endpoints, use pin to find out what you have and findprovs to find out who else has them, count the results and display them in some tree structure.

ls and `object links` would help with traversing directory hashes.


I'd like to see bandwidth statistics in the same way, current bandwidth usage on hash X, total bandwidth used on hash X. It might be possible to write an interface for that now but I haven't looked into it. If I wait long enough someone else will do it for me.


My waifu songs...enjoy





has anyone tried making a booru on IPFS? i know about hydrus network, but i want something you can access from the browser



>how many peers have this individual file vs how many have the whole directory/node-root.

I meant how many peers right now are downloading from me



>If I wait long enough someone else will do it for me.



A true "hacker" movie



Wario World NTSC-U





hey reddit, did your girlfriend peg you in the ass again?



>has anyone tried making a booru on IPFS?

Also curious about this.



IPFS 0.4.14 is out

Main improvements are lower resource usage and pubsub IPNS

>disentangled the commands code from core ipfs code, allowing us to move it out into a separate repository

>fixed several major issues around streaming outputs, progress bars, and error handling

>provide an API over other transports, such as websockets and unix domain sockets

>With the new pubsub IPNS resolver and publisher, you can subscribe to updates of an IPNS entry, and the owner can publish out changes in real time. With this, IPNS can become nearly instantaneous. To make use of this, simply start your ipfs daemon with the --enable-namesys-pubsub option, and all IPNS resolution and publishing will use pubsub

>Memory and CPU usage should see a noticeable improvement

>upgraded hashing library, base58 encoding library, and improved allocation patterns [NEON/SSE/AVX/AVX2 for SHA2 and Blake2b]

>This release of ipfs disallows the usage of insecure hash functions and lengths



Man I seriously can't wait for this to take off, this is possibly the best we'll get off the current decentralization/blockchain memes.



never ever


File: 8177375f041f6de⋯.jpg (53.28 KB, 800x606, 400:303, 817.jpg)


ipfs will win



"never ever" as in we are never going to escaoe the blockchain meme



(currently /ipfs/QmfALsokxXrTCgBAixMXDuxgVexy6ySP823TnrJ4kPmDeY)

Comprehensive Libbie fanart archive.



>With the new pubsub IPNS resolver and publisher, you can subscribe to updates of an IPNS entry, and the owner can publish out changes in real time. With this, IPNS can become nearly instantaneous.

>This release also brings the beginning of the ipfs 'Core API'. Once finished, the Core API will be the primary way to interact with go-ipfs using go. Both embedded nodes and nodes accessed over the http API will have the same interface.

Oh thank fuck. Now if only it would stop killing my shitty new router.


File: a30af2f6adc672b⋯.jpg (122.15 KB, 827x720, 827:720, pervert - Copy.jpg)


That IPNS stuff is big, taking it from 2-10 seconds down to instant is huge for dynamic content, and it's pushed not polled client side, perfect.

I guess this is more just an application of pubsub, which is also good to see actually working as it's supposed to.

Static content seems more or less dealt with, there's only room for enhancements.

Connectivity is always being improved.

And now dynamic content seems to have the focus.

Having all these components working gives people a lot of flexibility to make whatever they want. I think the only thing holding people back at the moment is API documentation, and they seem to have some people focusing on that.

Oh I'm excited.


Make sure to check around the issue tracker. I remember reading that disabling mDNS can help with bugged routers. You might lose LAN discovery that way, but that shouldn't be a big deal; if it were, you'd probably have a better router.





Anyone else have trouble using

ipfs pin add <hash>
? I want to pin some rather large folders without having to download them first, and pin them from my drive.



> I want to pin some rather large folders without having to download them first

What you want to do is add them first and they will automatically be pinned.

ipfs add -r <directory>

you can do them together as well

ipfs add -r <directory1> <directory2> ...

And you can still add them and not pin them right away with

ipfs add --pin=false ...

When you go to add a pin it will take the parts you already have and download parts you're missing from other peers.

If it stalls it may mean you can't see any peers with the content, which can be a connection issue or just the fact that nobody is hosting it.

I tend to always use the progress flag when pinning

ipfs pin add --progress <hash>

to see if it stalls for too long.

So if you know you have most of the files locally, you should be able to just recursively add the directories that have the content and pin them after the fact. Even if pin=false the first time, it will notice the blocks are children of the hash you want to pin and prevent them from being considered garbage.

For what it's worth I use badger; I hear large directories can be slow with flatfs depending on your real file system. badger is much better about this but also takes some more resources.

It's also experimental
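Putting the post above together as one hedged sketch (the directory name is hypothetical, and it's guarded so nothing runs unless ipfs and the folder are actually present):

```shell
# Add local content without pinning, then pin the root hash afterwards;
# blocks you already have are reused rather than re-downloaded.
dir="./large-folder"   # hypothetical path
if command -v ipfs >/dev/null 2>&1 && [ -d "$dir" ]; then
  root=$(ipfs add -r --pin=false -Q "$dir")  # -Q prints only the final root hash
  ipfs pin add --progress "$root"
  status=pinned
else
  status=skipped
fi
echo "$status"
```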



Is this dead? Every time I come here to check for new things it hasn't been updated in weeks, and I'm always the one to necrobump it. Just me. Alone. *sigh*



no, i am here



Sorry /tech/s post rate isn't fast enough to satiate your ADHD.

They just pushed a release, someone posted about it, multiple people are responding to it, some people are asking questions. What more do you want?


File: 91ea1c106e72aea⋯.png (1.16 MB, 1400x1400, 1:1, 91ea1c106e72aeab3640e57175….png)



Found the perfect website for you: https://www.reddit.com/r/ipfs/



You may also find better performance by putting the daemon in offline mode first. Not sure if that's still relevant but I remember this working around some bug in the earlier versions.



I should clarify this by saying you add with "ipfs add --local" and then, once the final hash is printed out, put it in "ipfs dht provide [hash]". Again, not really sure if that fixes things anymore so I'd roughly benchmark it before putting it to work.


gm.dls but in .sf2 format



are there any ipfsites like cool-rom on ipfs? i was thinking of making one



Not that I know of. That would be cool to see and practical to mirror, even in parts.

I think most people in the /v/ share threads just post links to folder with the files. There's no frontend, catalog, search, etc.

There's these 2 things that you might want to look at.

Basic anime tracker concept with javascript search


Atari JS emulator with rom selection, all in browser/on ipfs.


I think having some way of hosting it on IPFS but having dynamic search clientside like the anime tracker would be interesting. There's a lot of ways you could do that. Generating and searching a static index, using orbit-db, or even some other method.

Same with embedding JavaScript emulators on the page that just pull the rom from IPFS.

If you end up experimenting, post about it here. I'm interested.



ill try working on such an idea today



I've been thinking about making an IPFS hash + torrent index for a while. Distributed websites are something like the examples you posted, where every client also hosts the entire database. While this would generally work for us on a small scale, as the database grows to hundreds of megabytes or gigabytes it becomes a problem. New users would have to wait to download the entire database, or at least the db index, before they can search for items. This poses scaling and performance issues, though tolerable ones considering the circumstances. The website would also be distributed on IPFS. It would just be a single page with javascript to query the databases.

I'm not sure how best to handle adding new content to the site though. Right now people have been sending information out-of-band through email for the site owner to manually add new content. We could have a bot watch an email/IRC for users to submit content and automatically publish it on the site on the owner's behalf. A report system would also work similarly by notifying the owner of user flagged content to possibly remove. Submit and report forms would be integrated into the website.



>New users would have to wait to download the entire database

I don't think that's a hard requirement, there's probably a sane way of sharding the data and hashing queries beforehand that would help here. Maybe OrbitDB already does this.

As for replication I'd expect ipfs-cluster to be useful, or even just having segments users can mirror like how LibGen does it with their torrents.

Either way, my copy of Nyaa is less than 300MB, so even if you made a popular tracker you'd have some time to figure out the backend before it's too large to handle.

>I'm not sure how best to handle adding new content to the site though.

For public dynamic content, look into pubsub on IPFS.

The owner can subscribe to a topic and users can publish to it.

Alternatively, if you'd rather information be sent directly to your node privately, you can have ipfs setup a TCP relay for you. (ipfs p2p --help)


For that you just give out your node PeerID, clients dial it, send data, and you process it however you want. You handle it like a regular TCP connection while IPFS does all the P2P work itself.
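The experimental commands look roughly like this in 0.4.x (the protocol name `p2p-myapp` and the ports are made-up placeholders, and the feature sits behind the Experimental.Libp2pStreamMounting config flag). They're echoed here rather than run, since the real thing needs two live nodes:

```shell
# Hypothetical p2p stream mounting sketch (experimental, 0.4.x-era syntax).
proto="p2p-myapp"  # made-up protocol label
enable="ipfs config --json Experimental.Libp2pStreamMounting true"
listen="ipfs p2p listener open $proto /ip4/127.0.0.1/tcp/9000"       # service owner's node
dial="ipfs p2p stream dial <PeerID> $proto /ip4/127.0.0.1/tcp/9001"  # client node
printf '%s\n' "$enable" "$listen" "$dial"
```

After the dial, the client talks plain TCP to its local port and IPFS carries the stream to the owner's service.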


File: 97da7fafacc7756⋯.png (56.01 KB, 640x640, 1:1, MultiChain-DB-Sync.png)


>there's probably a sane way of sharding the data and hashing queries beforehand

You're right, I got so caught up with using Orbit-DB that I completely forgot about using a DHT. Once it's finished in IPFS-JS we might be able to have a sharded database with distributed querying. This way it would be similar to the Bittorrent DHT in that it's open for anyone to add or query information. The website frontend won't even need an owner once it's made. A little info on how it would probably be implemented:


>Maybe OrbitDB already does this

According to the docs it must replicate the whole database locally for you to query it. It's not built to stream queries.

>Either way, my copy of Nyaa is less than 300MB

Sukebei is another 400MB or so, so it's about 700MB in total. I'm glad you mentioned Nyaa. I've been more focused on this idea to make sure we won't have to scramble for db dumps ever again.

With this model I'm targeting the torrent websites we have right now. My main concern is the free availability of the databases. The idea is similar to my idea for a blockchain imageboard >>836690 in that each website has its own blockchain. This is easily achieved using MultiChain, a permissioned private blockchain software forked from Bitcoin. Because it's private, there is a round-robin mining option that uses little CPU compared to traditional PoW mining. The website owners would have a few internal miners to keep the chain going or they could open it up to everyone or a privileged few with permissions. They could also use the Bitcoin PoW if they wanted to for some reason. There's no reason to waste power on securing a chain if all miners are trusted parties.

MultiChain 2.0 has native key-value JSON storage as a database feature. It stores it as serialized UBJSON and indexes it for fast searches. Each record would include the item's name, torrent hash, optional IPFS hash, tags, description, upload time, etc. Central website owners have two options for user management. First, the user management would be completely in-house like it currently is. The website would have a single address with write permissions and would write to the blockchain every time someone successfully submits something (including the user's username). The second option would be to (optionally) register each user's self-generated public key with their accounts and give them (or remove) write permission. This way users can choose to run a local version of the website and still have the ability to post new content. Blockchain assets (coins) can also be turned on and off along with send/receive/create permissions. You could have multiple coins in one blockchain too, like Ethereum tokens. This might be useful if websites want a reward system like AnimeBytes uses for earning yen.

For performance reasons it's easy to write a script that in real-time scans and imports records from the blockchain into traditional SQL or ElasticSearch databases. This would integrate very easily into currently deployed websites. The blockchain would act as the underlying storage while ElasticSearch acts as a write-through cache, just like how websites do it now with SQL databases as the underlying storage. To delete an item in the blockchain the website admin would submit a new empty record with the same key as the record they want to delete. This effectively "overwrites" the old value with the new one. MultiChain can query the newest, oldest, all, or any in-between value of the key. Normally it would be set to newest to get the overwrite functionality, but user local versions have full access to the revision history.

This opens up the possibility for websites to easily integrate content from other sites. Instead of nyaa.pantsu.cat having to scrape content from nyaa.se it could list them in real-time directly from their own decentralized database. Pic related is my super artful depiction of the system.

There's probably more things I have written down somewhere but forgot to post. I'm thinking about working on it in the summer if I have time.


File: a768f0824e3bb1c⋯.jpg (102.36 KB, 990x735, 66:49, design doc.jpg)


I'm not being sarcastic, I like your graphics.

>remote content integration

I went from excited to very excited. The idea of there being a super aggregate that in reality is nothing more than another frontend implementation for DBs X, Y, Z is really cool.

I guess the current analog would be something like allowing public read-only access to non-sensitive tables in your db.

>token reward system

I wonder how Filecoin is coming along, the intentions of it are obviously in line with something like this

>seed data

>get coin

>user influence in general

I wish I remembered the thread ID for the imageboard discussion thread. But someone was talking about how to manage distributed moderation. The concept was basically "choose your own" moderation team. Anyone could be a moderator and something like deleting a post would simply create a record of your intent with your signature, users would simply choose individual moderators or teams of them, the records would be combined and the content would be filtered from view.

It's an interesting idea that I haven't really seen in practice. I don't know if it's useful here but I figured it's worth mentioning. Something like that may be useful for curation. For instance, you could subscribe to an autist who makes sure to filter out anything that isn't 10bit flac encodes. I guess another way to describe this would be user-generated filters instead of having only 1 standard filter. A standard should be there, but it should be optional and/or combined clientside with other ones.

>query all db keys except ones in my flagged list as those are banned/nuked/trumped as far as I'm concerned

I wonder if something like this could work for distributed sites, it could eliminate the "the admins are bad" problem, or be terrible.



>I'm not being sarcastic, I like your graphics.

Thanks, I find them to be helpful summaries.

>I wonder how Filecoin is coming along

Considering they raised over $250 million I'm betting they didn't even start yet. IPFS has to become much more mature before they can build on top of it. I remember them saying it's going to be a few years until it's ready.

Private trackers might want to turn on coins on their blockchain to keep track and enforce user upload/download stats in a decentralized way. Public trackers on the other hand wouldn't have much use of coins except maybe as info stats like how BakaBT used to keep track of user upload/download even though it didn't mean anything. I'm betting most public trackers won't use them which is fine.

I want this system to leave as much freedom as possible to the individual websites. I want it to be as easy to add to existing websites as possible. I think of it as a federation of independent islands. There would be a few shared passive rules but the rest is up to the website owner on how to run their own site.

>how to manage distributed moderation

The imageboard thread is >>785171 I posted there previously too. While I agree the user contributed whitelist/blacklist is a good idea for moderating imageboards, I don't think there's much of a moderation issue on torrent sites. Website owners should retain the ability to be able to moderate however they want, otherwise they won't join the system. That said, local copies of the website DBs would have the whole revision history available. Sure someone could write a script to revert specific changed entries but I honestly don't think many people would be doing that. Torrent site moderation and imageboard moderation in practice are really different.

>you could subscribe to an autist who makes sure to filter out anything that isn't 10bit flac encodes

There's no reason why that's not possible with the current setup. I could autistically go through every lossless audio torrent on Nyaa.si and post a whitelist of torrent IDs that were 10bit or higher right now. I could make a RSS feed so others could subscribe and automatically download all 10+bit lossless audio that I approve of.

>I wonder if something like this could work for distributed sites

That's how I would do it for open distributed systems like what I was talking about before using a DHT database for an IPFS website. In fact because of the world writable distributed nature there's really no need to blacklist items unless you want to filter out known spam from queries to save some time.


IPFS Companion 2.2.0 brings window.ipfs to your Browser


Your node is exposed as window.ipfs on every webpage. The API aims to be compatible with interface-ipfs-core, which means websites can now detect if the property exists in window context and opt-in to use it instead of creating their own js-ipfs node. It saves system resources and battery (on mobile), avoids the overhead of peer discovery/connection, enables shared repository access and more!



doesn't seem to have many users yet, as I only get like 8 nodes connected at a time


Total noob here. I downloaded the go-ipfs thing from their website and tried to ls some of the links ITT, but it says "merkledag: not found" for all of them. Am I missing something?

The example readme and about files do work.



that usually means that all the peers hosting the file are offline, i had that problem too

considering that these links were posted on 8chan, they won't have many peers



Turned out I had forgotten to start the daemon.

But then I tried it again, and it just stopped right there, seemingly without downloading anything. I can't even do a 'ls'.



In fact, this is getting extremely frustrating. Is there a verbose mode? I can't find it from the help.




you're basically getting the same type of error as trying to download a torrent with no peers, but i would suggest asking the ipfs channel on freenode



Alright. Can you confirm you can list /ipfs/QmcdiWeCUHXMEvprg47GhQ6pLXVgpkXDNUZUVnCJG8PfvV ?

Should be a folder with a Makefile, an executable and a source.



yep, it shows up in the browser at least


File: 96ff9d6da4d0653⋯.jpg (116.73 KB, 774x809, 774:809, 6a00d83451c29169e2014e5f86….jpg)


I don't mean the IPFS imageboard thread, although that one is also good.

There was a different one that was discussing features and concepts for imageboards in general. It started with this image in the OP if I remember right. I think I have it archived but can't find it.


Animegataris 1080p, episodes 1 to 12: /ipfs/QmaYKNfeRc7a2YGTn2qxg9Updn357ouQBX5UL35RxXPN8T

Shoujo Shuumatsu Ryokou 1080p, episodes 1 to 12: /ipfs/QmchtNakPsVoZ7RtZ2siGrK6V8uMkELjt8pfDxMiQ2CYan

Death March kara Hajimaru Isekai Kyousoukyoku 1080p, episodes 1 to 12: /ipfs/QmQMkTQFkjXdF8ZTRLWTjE8QjDiSTJ1KYVkvjWrjKB2GRr


Has someone made a system similar to nyaa for people to list their stuff? Right now it's kind of disorganized with random shit being listed on ephemeral threads and websites. If someone hasn't created such a website yet, and if there's a demand, I think I will do it. What do you guys think?



5 cm/s 1080p: /ipfs/QmeWkk3YPdHTpMCy4mVVF8V6EVopoBd44T9GVPMrojAT3m

SAO: Ordinal Scale 1080p: /ipfs/QmPx4TXVqMsmfgtyHPm3XTp92U46dQTsjyFEtzuYtz9kXv



We've been discussing ideas to try and figure out what would actually take off. In my opinion a Nyaa clone with IPFS hashes isn't moving anything forward. It just trades torrents for IPFS, which is useless when bigger and better alternatives exist. If you just want to have fun go right ahead, but I think we should focus on planning a real successor to the current systems.



>trades torrents for IPFS which is useless

Why do you feel that way? IPFS has some innate advantages over bittorrent, and the flexibility leaves room for enhancements.

As it's been said in these threads before, the biggest advantage is the content having a standard address per piece of content, so the swarm per file is always going to be global, unlike torrents where the swarm is grouped by the torrent itself. If we both make a torrent with 2 files, and 1 of those files is the same, people will still only download from one of us, not both. With IPFS it doesn't matter, you'll get the content from whoever will provide it.

This alone is great for longevity. I hate having to download separate torrents for the same group of files and piece a release together just because the seed base has fragmented. Even though 100% of the content is available on the DHT, we don't even request it with bittorrent.



>Why do you feel that way?

I should've elaborated. Switching to IPFS is pretty much useless in its current state. That will change when some experimental features mature and become defaults.

>content having a standard address per piece of content

I agree with the design advantages. In fact I've explained that talking point a few times to people in these threads. The problem is that by default it's currently not enabled. The vast majority of users aren't taking advantage of it. Also, based on what I've read, it only works if the hash algorithms match. Say someone enables sharding and adds their 4TB anime folder with the default SHA256 hash. If I do the same but instead use the faster Blake2b hash, my shards and his shards won't be compatible. This will further fragment things, since the devs said they're going to change the default to Blake2b in an upcoming update.

I do believe IPFS is a great successor to Bittorrent but it's not quite there yet. What I want to do is discuss, plan and possibly develop the next generation file sharing platform that uses IPFS and open database indexes as opposed to continuing the current closed off centralized torrent sites with IPFS tacked on.



>in its current state

Ahh, fair enough. I expect them to nail all this down in the alpha and the defaults not changing much post-release. Glad to see them actually experimenting while they have the chance. I was pretty content with how things were before and they've only gotten better, when they finally group all the best improvements together into a default set, it's going to be nice to see. The concepts are there and work, but seeing them work more efficiently is always a plus.

>What I want to do is discuss, plan and possibly develop the next generation file sharing platform that uses IPFS and open database indexes as opposed to continuing the current closed off centralized torrent sites with IPFS tacked on.

Right on. IPFS or otherwise I just want a less centralized, more efficient platform than what we have now. It's embarrassing we've let this go on for so long despite knowing and regularly facing all the shortcomings of traditional systems. We have all this technology and we utilize it so poorly.



Killing Bites, episodes 1 to 12: /ipfs/QmdPjrEMH8eVJoGVp79RkyesUQ3LuBhukp1AF3MEmU1Au9



Yuru Camp, episodes 1 to 12: /ipfs/zDMZof1kpFnRGBsgDcBLgAqeu1CjeeGEm9foz3eW9ri96GtX3r8M

Isekai Shokudou, episodes 1 to 12: /ipfs/zDMZof1m1DuN8opbcVhoGc1P3izhJpv87wo8FJAZiH2FY1xafUt1

After the Rain, episodes 1 to 12: /ipfs/zDMZof1m33UyeU3P8pnr3E7yJkKnteQGX63zHxmcPqxyWtNwqHUC

Does anyone know how I can use blake2b-256 hashing algo by default without having to specify on the command line every time?



>Does anyone know how I can use blake2b-256 hashing algo by default without having to specify on the command line every time?


Also see the first half of this >>883736



I know I could do it on a shell level. I meant whether I could set any ipfs config files to achieve that effect. e.g. if invoking ipfs from a script.



>set any ipfs config files to achieve that effect

I don't think so. Maybe worth posting on the issue tracker.

>invoking ipfs from a script

Use variables for the args.
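A minimal sketch of that, assuming a POSIX shell; the wrapper just bakes the flag in (echo is there for illustration only, drop it to actually run the add):

```shell
# Wrap ipfs add so the hash flag doesn't have to be repeated everywhere.
# echo is for illustration; remove it to run the real command.
hashopt="--hash=blake2b-256"
ipfs_add() { echo ipfs add "$hashopt" "$@"; }
ipfs_add somefile.txt   # prints: ipfs add --hash=blake2b-256 somefile.txt
```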



>no anonymity

>can't do shit on it

>won't even let you use tor




>fails at redtexting




>won't even let you use tor

What do you mean? You can tunnel it just like normal and there's the fork that has the experimental libp2p-onion transport.

See the second half of >>856739

You've been able to use Tor directly since 2016.

Why don't people read the thread first.



Can anyone explain to me what the difference is between this and dapps on Ethereum? Because those are more expensive than regular servers, and users have to pay just to access the network to use them, so the whole decentralization thing is fucking pointless.



They're very different. With IPFS you or someone else stores content; on "blockchains" (merkle trees with root consensus) like Ethereum, the entire network essentially distributes the storage.

With IPFS, you hold the data that you hold; with blockchains you either store the entire thing or the tail of it, but it includes the entire history of events.

IPFS is like a more flexible bittorrent; Ethereum is more like a distributed computer.

A lot of people have been using them together to have "serverless" Dapps: they use IPFS for anything static and Ethereum for dynamic stuff. Someone hosts the content on an IPFS instance, but the entire Ethereum network takes the dynamic data, processes it, and stores the result for the entire world to see. It's considered serverless because the network as a whole is storing state, not a single machine. Likewise with IPFS someone is hosting the content, but it's not a typical centralized HTTP server; it could be 1 or many nodes located anywhere in the world and transported through many different ways.

I don't know if this is appropriate, but the idea is "what if bittorrent and git had a baby and that baby could be used in a web browser easily". That's an oversimplification of IPFS but maybe it works.

Now though IPFS is getting some really good dynamic mechanisms. Read a few posts up about making a service via a p2p socket or via pubsub. It's real-time, ephemeral (unless you store it, obviously), dynamic, p2p data. The blockchain is more like dynamic, unknown-time, permanent, always-public data. While blockchains are distributed via p2p mechanisms it's a bit odd because it still feels centralized in that way; it's only distributed insofar as you can do things offline, but they must be synced up with everyone at some point.

These 2 things are hard to compare; they really do serve different purposes and complement each other well.




idk wtf you are talking about anon. this issue has been open for 3 years and it's still open. tor support BTFO



Consider reading the things you link before letting everyone know you're an idiot. The TOR transport is already up and running and you can use it today, but they're waiting on a security audit before putting their official stamp of approval on it.



Right, so the thing that has been in dev for years is now, for real, right around the corner. I bet you think the next version of GIMP is gonna come out in 6 months too.



IPFS always makes it sound like things are almost done. They have played this game for several fucking years with so many features, and basic things are still missing.




So it begins, the BTFOing of the IPFS meme by people who actually live in the real world.


File: 1208637fa01742a⋯.jpg (27.67 KB, 601x508, 601:508, 2f7.jpg)


>yfw you're in full damage control mode now that everyone is beginning to see (((IPFS))) for the blatant scam it is






>expecting cuckchanners who shill this trash to know anything but their happy shiny fantasy world

Fair warning: you should look out for shit threads like this for possible blatant denial of facts. That this thread got 300+ replies and is only now getting redpilled is shocking.



>right around the corner

You're still not listening, it's out right now. You can run it through TOR today, you don't need to wait.



Well, let's see. From the latest developer comment on the issue page you linked, from 20 days ago:

>However, you shouldn't expect a blessed and fully functional/recommended tor transport for a while.

From another one:

>But... there's no documentation. On the other hand, go-onion-transport is at least well-maintained. On the gripping hand, lgierth of ipfs/infrastructure fame admits that:

What we have here are some officially unsupported and undocumented features.

Now let's look at the actual transport they implemented:


Oh look, it was last edited 7 months ago, with absolutely no active work for the past half a year.


File: 0e177f8457f8a48⋯.jpg (107.15 KB, 700x734, 350:367, 1524275205739.jpg)

>duh permuhnint web gaize XDDDD









Looks like salty ZeroNet shills came to ruin the discussion. Stay mad faggot. Reminder that IPFS raised $250 million from their ICO.


>I bet you think the next version of GIMP is gonna come out in 6 months too

Funny you say that, it shows just how ignorant you are. 2.10 RC2 came out this week, only 7 blocker bugs away from release. So yes, I do think the next version is gonna come out within 6 months. It could very well come out in 6 weeks.


The Hitler's War audiobook: /ipfs/zDMZof1m3sno11X7CSaJoiWFGabfsuK3HoTS7gVqES57tYyB6YsH

Apocalypse 1945 - Destruction of Dresden: /ipfs/zDMZof1m4ZpRxU7spWcLkScxj3U6WU6vxFMoWNpECw1cWtcsaZjc



>b-b-b-b-b-but zeronet!!!!!!

Just as much of a meme as IPFSoy

>Reminder that IPFS raised $250 million from their ICO.

>250 million flies can't be wrong



>i don't like it

>no coherent arguments


Why are you saging the thread



>It could very well come out in 6 weeks.

>6 weeks

LOL try 6 years


File: b7f72ee71b56fbd⋯.png (174.21 KB, 358x358, 1:1, b7f72ee71b56fbd9bffd2f90b7….png)


>250 million flies can't be wrong

Oh come on now. That doesn't even make sense.

Next time at least use bait that I can properly respond to. You let me down.


>Why are you saging the thread

If you have to ask that you're obviously not from around here.



>you defend IPFS but you sage the IPFS thread

No seriously. Do you think that sage is a downvote?


File: 6616677c628dda8⋯.png (525.41 KB, 600x595, 120:119, b39.png)


If I'm defending IPFS why would you think I would want to downvote it? Again, if you don't know basic etiquette you're not from here.



so you're basically saging the entire thread because you don't like a few posts in it. dumbass.



>250 million flies can't be wrong

>Oh come on now. That doesn't even make sense.

He's referencing an old saying that makes a point, in a rather earthy fashion, that just because something is popular doesn't make it good.

>100 million flies can't be wrong...eat shit!

You'll probably hear somebody make reference to this when you get out of high school and into the real world.



File: 7cec66c7a792e30⋯.jpg (42.43 KB, 634x500, 317:250, b514728a9a6a7c4396698d0bb0….jpg)


>basically saging the entire thread because you don't like a few posts in it

Let me ask you: if you don't think saging is downvoting, what do you think it really means? Why does it exist?


>He's referencing an old saying

>he kept flies

>he can't even alter the saying in a witty fashion to fit the situation

Again with the weak ass effort. Really, I'm waiting for something I can really sink my teeth into.



>the newfag asks me to explain sage

lurk moar



Really? Because you think "saging the entire thread because you don't like a few posts in it" is a dumb thing to do. Why is that?


File: dc93a20cd28c55d⋯.jpg (120.31 KB, 1000x1000, 1:1, dc93a20cd28c55d079de2d7cbe….jpg)


File: b2899299d7b73e8⋯.jpg (371.1 KB, 1280x720, 16:9, b2899299d7b73e8ea4fc5c3109….jpg)


Looks like you fell for my stalemate.

I hope the mods will clean up all the shitposting in the thread now.


Put your codes in >>>/ipfs/

Also, >>>/redstick/ might want to use this to share infographics

Want to use a public gateway?




When using 'ipfs get' to download stuff, does that automatically seed the file as well, or do I have to 'ipfs pin' it?
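
For reference, the two commands in question (the hash is just a placeholder, not a real file):

```shell
# download the content to a local directory named after the hash
ipfs get QmSomeExampleHashGoesHere

# is this also needed to keep serving it to the network,
# or do fetched blocks already get re-served from the cache?
ipfs pin add QmSomeExampleHashGoesHere
```

My understanding (could be wrong) is that fetched blocks sit in the local cache and are served from there until garbage collection runs, and pinning just protects them from GC, but I'd like confirmation.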








File: 2aea3f50d339bd4⋯.png (26.13 KB, 274x321, 274:321, hurr durr durr durr.png)


>discussion relevant to thread

>damage control



I'd rather we have share threads than a share board.


>can I use tor

>>yeah here you go

Then I went to sleep and 2 people started screeching at each other.

You both should feel bad. We were having a nice thread for once.



we have /pdfs/ and it works, so /ipfs/ would work as well, right?

Not to mention that, unfortunately, on /tech/ we have a bunch of retards like >>901786 and >>901595 who shit up the threads
