
/tech/ - Technology


March 2019 - 8chan Transparency Report

File: 957095eb5bec93f⋯.png (140.19 KB, 512x512, 1:1, 957095eb5bec93f6b1d72f6393….png)


Last thread:

>>915966 (https://archive.is/rQakE)



>ipfs 0.4.17 is a quick release to fix a major performance regression in bitswap (mostly affecting go-ipfs → js-ipfs transfers). However, while motivated by this fix, this release contains a few other goodies that will excite some users.

>The headline feature in this release is urlstore support. Urlstore is a generalization of the filestore backend that can fetch file blocks from remote URLs on-demand instead of storing them in the local datastore.

>Additionally, we've added support for extracting inline blocks from CIDs (blocks inlined into CIDs using the identity hash function). However, go-ipfs won't yet create such CIDs so you're unlikely to see any in the wild.


>URLStore (ipfs/go-ipfs#4896)

>Add trickle-dag support to the urlstore (ipfs/go-ipfs#5245).

>Allow specifying how the data field in `ipfs object get` is encoded (ipfs/go-ipfs#5139)

>Add a -U flag to files ls to disable sorting (ipfs/go-ipfs#5219)

>Add an efficient --size-only flag to the repo stat (ipfs/go-ipfs#5010)

>Inline blocks in CIDs (ipfs/go-ipfs#5117)


>Make ipfs files ls -l correctly report the hash and size of files (ipfs/go-ipfs#5045)

>Fix sorting of files ls (ipfs/go-ipfs#5219)

>Improve prefetching in ipfs cat and related commands (ipfs/go-ipfs#5162)

>Better error message when ipfs cp fails (ipfs/go-ipfs#5218)

>Don't wait for the peer to close its end of a bitswap stream before considering the block "sent" (ipfs/go-ipfs#5258)

>Fix resolving links in sharded directories via the gateway (ipfs/go-ipfs#5271)

>Fix building when there's a space in the current directory (ipfs/go-ipfs#5261)

tl;dr for Beginners

>decentralized P2P network

>like torrenting, but instead of getting a .torrent file or magnet link that shares a pre-set group of files, you get a hash of the files which is searched for in the network and served automatically

>you can add files to the entire network with one line in the CLI or a drag-and-drop into the web interface

>HTTP gateways let you download any hash through your browser without running IPFS

>can stream video files in mpv or VLC (though it's not recommended unless the file has a lot of seeds)

How it Works

When you add a file, it is split into chunks, each chunk is cryptographically hashed, and a Merkle tree is built from those hashes. The IPFS client announces these hashes to nodes in the network. (The IPFS team often describes the network as a "Merkle forest.") Any user can request one of these hashes and the nodes set up peer connections automatically. If two users share the same file, both of them can seed it to a third person requesting the hash, unlike .torrent files/magnets, which require that all seeders use the same torrent.
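The content-addressing step above can be sketched in a few lines. This is only an illustration of the idea, not IPFS's real format: actual IPFS objects are protobuf-encoded and identified by multihashes, and the chunk size below just borrows the client's default 256 KiB figure.

```python
import hashlib

CHUNK_SIZE = 262144  # 256 KiB, IPFS's default chunk size

def chunk_hashes(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[str]:
    """Split the content into fixed-size chunks and hash each chunk."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def root_hash(data: bytes, chunk_size: int = CHUNK_SIZE) -> str:
    """Hash the concatenated chunk hashes into a single root identifier,
    like the root of a (very shallow) Merkle tree."""
    leaves = "".join(chunk_hashes(data, chunk_size))
    return hashlib.sha256(leaves.encode()).hexdigest()

# The identifier depends only on the bytes, never on a filename:
assert root_hash(b"hello world") == root_hash(b"hello world")
```

Requesting "the file" then just means requesting the root hash and letting peers prove each chunk matches on arrival.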


>Is it safe?

It's about as safe as a torrent right now, ignoring the relative obscurity bonus. They are working on integration with Tor and I2P. Check out libp2p if you're curious.

>Is it fast?

Finding a seeder can take anywhere from a few seconds to a few minutes. It's slowly improving but still requires a fair bit of optimization work. Once the download starts, it's as fast as the peers can offer, just like a torrent.

>Is it a meme?

You be the judge.

It has implementations in Go (meant for desktop integration) and JavaScript (meant for browser/server integration), both functional right now and in active development. It has a bunch of side projects that build on it, and it splits important parts of its development (IPLD, libp2p, etc.) into separate projects that allow drop-in support for many existing technologies.

On the other hand, it's still alpha software with a small userbase and has poor network performance.

Websites of interest


Official IPFS HTTP gateway: https://ipfs.io/ipfs/ — slap this in front of a hash and it will download the file from the network. Be warned that this gateway is slower than using the client and honors DMCA takedown requests.

glob.me — dead


>Thread #3

Fuck you faggot OP

Newfags shouldn't create threads.


File: 3787e1b2de2e012⋯.pdf (6.76 KB, code-of-conduct.md.pdf)



t. actual retard until your third line


Fork when?

I'll delete this thread if negative pressure ramps up in this thread. >>>/ipfs/



>I'll delete this thread

Oops, I'm unable to delete the whole thread





Fuck off. Either you stick with IPFS, or we will turn this car around for Beaker/Dat.

https://beakerbrowser.com/ https://datproject.org


File: 5f1f5cbbb05d9cc⋯.jpg (88.81 KB, 535x650, 107:130, 1261249421772.jpg)

Is there a GUI for Windows x86? I still wanna run my old legacy programs on my IPFS box.



>implying both of them do not have CoCks

What about making your own alternative?




IPFS threads are fine. I was calling you out for being a filthy newfag because you continued the gay numbering of threads started by the last faggot cuckchan OP. We've had perpetual IPFS threads for over 4 years. We don't number threads.



Why the hell is your old ass windows install not airgapped?



Nobody will listen to 8chan... unless it is QAnon, in that case fake and gay

I would rather hope that we can fork all the source code and integrate it into our stuff



Numbering threads is a tradition (see Gamergate and /pol/ threads about Happenings)





>the last faggot cuckchan OP

Then invite him back here and tell him why.

>We don't number threads.

The next threads will no longer be numbered, hopefully.




<reading comprehension

Just ignore the autistic shitposting this thread started with.



I'm a poorfag with limited hardware. My rig is so old that even browsing the chans can make it freeze. It has problems with threads with >600 posts or >300 files, to say nothing of anything more resource-intensive; even vidya from the 90s often results in full-blown overheating in <30 mins.



Why don't you use GNU/Linux or BSD on such a weak system?


File: dcbf7a756fadb4c⋯.gif (927.27 KB, 500x500, 1:1, 1488847687541-0.gif)


Partially because, to be quite honest, I never imagined tech would degrade this far, that such an unacceptable level of quality would be condoned for mass production, yet it's all I've got; and partially because I have some things I still need Windows for. I actually have quite a lot of stuff that requires old programs, yet no one ever made things to handle these applications, and nowadays all the code is so shit no one is even thinking on that level of power user anyway. So I'm left mostly living in an old software bubble, as it's the most technically functional and productive option in a sea of nearly unusable products and services with shittastic interfaces and fewer and fewer hotkeys.

Really though, the main reason I remain is one program, Everything Search, which indexes my drive in real time. It's so useful that I not only keep it on a global hotkey I use hundreds of times a day, but it has functionally altered my whole style of using the computer. It also serves as my launch bar, my entire file search, and a lot of my file management too. Being able to access anything on my computer in <2 seconds, without ever having to do anything as archaic as navigating, is pretty hard to beat.

One of its key operating principles, which allows for such speed, is that it reads an index table unique to NTFS, and for that reason I have found nothing that works at a competitive level on Linux. Maybe I missed it, but I've looked several times over the years. I have used Linux as a main OS, even for years, but in the end I was spending so much time on my dual-boot drive, and on this machine with such inane limits, that here I am.



You could just install Gentoo with an NTFS drive and then run the program in Wine.



For whatever reason it's not in the Wine AppDB.



That doesn't stop you from doing it, though. Just run it in Wine with an NTFS drive and see if it works. Considering how simple, and old, it is, you should be able to run it just fine.



man locate


it's called IPFS because even FTP over moon-bounce amateur radio outperforms it


Bump because the (((distributed web))) is getting BTFO


Holy shit, fucking savage!




> Trusting (((the cloud)))

Are you scared of P2P, Moshe? What, Torrents destroyed your business plan?


File: c2e34a499e76b32⋯.png (24.23 KB, 1031x66, 1031:66, Screenshot_2018-10-19_21-4….png)

Look at this soyboy and laugh


To all the bluepilled autists who still foolishly have hope in decentralization, read this thread



Seems to be a lot of shilling in this thread. Let's clean it up with some good news!



The increasing pushback only tells me that this thing is becoming more and more sane as an alternative to paying Shlomo Shecklestein to host your data for you.

>No goy stop hosting things for free! My cousin Marty Rothenberg will only charge you $20 per GB, that's only as much as half a cup of coffee for *100% guaranteed uptime!

*up to 100%; terms and conditions may apply; we reserve the right to remove content at any time. Content hosted in our BaseMentTech™ datacenters is only allowed to be accessed in select geographical regions.


WikiLeaks is experimenting with hosting via IPFS





Guess we'll just do nothing then.






Hi Mossad





glow in the darkpill





File: a54763ad6b15810⋯.png (226.45 KB, 640x307, 640:307, ORIGINAL-CARTOON-YAIR-NETA….png)




laughing lizards.bizmet




(((IPFS))) BTFO. We're literally stuck with Jewgle forever.



Daily reminder that IPFS users never stop winning.




File: a5b4d3a501b7289⋯.png (1.15 MB, 728x1024, 91:128, a5b4d3a501b72895dc6c01b73f….png)

File: aa5687123cf62cf⋯.png (1.08 MB, 1707x1202, 1707:1202, aa5687123cf62cfb70a19dfbf4….png)


Google bends to the will of IPFS.

Now it is us who glow in the dark.




Is anyone building an IPFS Nyaa?


Dat looks good, too.


Is this real or another sketchy CIA trap like that stream alternative shilled by 4chan's /g/ a couple months ago?



Have you tried https://duckduckgo.com/?q="Everything+Search"+linux



> sketchy CIA trap like that stream alternative

never heard of this. give link?

either way this is a real open source project, many companies are starting to adopt it as well


IPFS v0.4.18 Is Out

This is a really big update.

>experimental support for the QUIC protocol. QUIC is a new UDP-based network transport that solves many of the long standing issues with TCP

>now supports the gossipsub routing algorithm (significantly more efficient than the current floodsub routing algorithm) and message signing

>the new `ipfs cid` command allows users to both inspect CIDs and convert them between various formats and versions

>the refactored `ipfs p2p` command allows forwarding TCP streams through two IPFS nodes from one host to another. It's ssh -L but for IPFS

>there is now a new flag for `ipfs name resolve` - `--stream`. When the command is invoked with the flag set, it will start returning results as soon as they are discovered in the DHT and other routing mechanisms

>Finally, in the previous release, we added support for extracting blocks inlined into CIDs. In this release, we've added support for creating these CIDs. You can now run ipfs add with the --inline flag to inline blocks less than or equal to 32 bytes in length into a CID, instead of writing an actual block

>you can now publish and resolve paths with namespaces other than /ipns and /ipfs through IPNS. Critically, IPNS can now be used with IPLD paths (paths starting with /ipld)

>this release includes the shiny updated webui

>this release includes some significant performance improvements, both in terms of resource utilization and speed

>In this release, we've (a) fixed a slow memory leak in libp2p and (b) significantly reduced the allocation load. Together, these should improve both memory and CPU usage

>we now store CIDs encoded as strings, instead of decoded in structs (behind pointers). In addition to being more compact, our Cid type is now a valid map key so we no longer have to encode CIDs every time we want to use them in a map/set

>bitswap will now pack multiple small blocks into a single message

>this release saw yet another commands-library refactor, work towards the CoreAPI, and the first step towards reliable base32 CID support

>CoreAPI is a new way to interact with IPFS from Go. While it's still not final, most things you can do via the CLI or HTTP interfaces, can now be done through the new API

>from now on paths prefixed with /ipld/ will always use IPLD link traversal and /ipfs/ will use unixfs path resolver, which takes things like sharding into account

>we intend to switch IPFS gateway links https://ipfs.io/ipfs/CID to https://CID.ipfs.dweb.link

>this way, the CID will be a part of the "origin" so each IPFS website will get a separate security origin
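The inline-CID item above is easy to illustrate: with the identity hash function, the "digest" is the data itself, so a block of 32 bytes or less can live entirely inside its identifier. This is a toy sketch of the concept — the tuple representation here is mine, not the real binary CID encoding.

```python
import hashlib

INLINE_LIMIT = 32  # bytes; the --inline threshold described above

def make_block_id(data: bytes) -> tuple[str, bytes]:
    """Build a (hash-function, digest) pair standing in for a CID.

    With the identity "hash", the digest IS the data, so small blocks
    need no separate storage at all."""
    if len(data) <= INLINE_LIMIT:
        return ("identity", data)
    return ("sha2-256", hashlib.sha256(data).digest())

def fetch_block(block_id: tuple[str, bytes], store: dict) -> bytes:
    """Recover a block: inline IDs decode locally, others need a lookup."""
    codec, digest = block_id
    if codec == "identity":
        return digest
    return store[digest]

# A tiny block round-trips with an empty store — no network, no datastore:
assert fetch_block(make_block_id(b"tiny"), {}) == b"tiny"
```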




Steemit/Bitchute is not CIA, just shekel grubbing.

Neither is GitGud.tv, just GaymerGayte




>ipfs p2p

>ipfs name resolve --stream

>performance patches

>all this browser standards stuff

I like it.


Does /g/ even use this? How safe is it? Are files encrypted and anonymous?




I dunno, why don't you ask them?


If they add I2P support, sure. In the meantime tracking peers is as easy as bittorrent.



>(though it's not recommended unless the file has a lot of seeds)

So same as torrents then? popcorn time for ipfs when?

>Is it a meme?

What are some use-case scenarios besides replacing torrents? hosting websites? storage? can compete against datacenters in terms of cost?



I don't think it is fit for any useful storage except archiving, as (as far as I know) there exists no deletion or edit function, and the system tries to bloat itself forever.

I'd rather rent anonymous decentralised storage for money, with the bonus that I have a guarantee that the files will not just be lost, and that I can edit / remove my files. Also, there is literally no incentive for nodes to participate in IPFS, so you can expect most nodes to actually come from the NSA for surveillance purposes.



This is not to shill any cryptocurrencies that rent storage, because they are mostly shit, from a technological point of view.



I just installed IPFS. I have not downloaded anything and it has used 700 megabytes in less than 25 minutes. This is retarded.


File: fff36fd92ce0078⋯.png (214.24 KB, 270x270, 1:1, smugflayer.png)


>I don't think torrents are fit for any useful storage except archiving, as (as far as I know) there exists no deletion or edit function, and the system tries to bloat itself forever. I'd rather rent anonymous decentralised storage for money, with the bonus that I have a guarantee that the files will not just be lost, and that I can edit / remove my files. Also, there is literally no incentive for peers to participate in bittorrent, so you can expect most peers to actually come from the NSA for surveillance purposes.




>edit function

Use IPNS, faggot


If you upload files to a distributed storage system and expect some way to magically delete the file from all the seeders' computers and prevent them from reuploading the file with the same hash, congratulations: you're retarded.


File: 8cbffa28ec30e08⋯.png (165.62 KB, 4000x2250, 16:9, ip.waist.png)

File: ccc75b353c9ff9a⋯.png (3.51 MB, 4000x2250, 16:9, mdag.waist.png)

File: 28c0f7f49caecf7⋯.jpeg (51.83 KB, 800x450, 16:9, 41525062dc516766738e5bf5e….jpeg)


Are you talking about 700 megabytes of bandwidth?

How did you measure this?


There used to be IPFS threads there, but they can't exist anymore since the spam filter changed. The hashes almost always trip the filter.


>What are some use-case scenarios besides replacing torrents? hosting websites? storage? can compete against datacenters in terms of cost?

As the name implies, it is much like a filesystem. It's literally just content addressable data, nodes, records, and probably other things. All built on the same standards and interfaces. With networking taken into consideration.

The ability to reference data globally, with read and write access, is very generic, so you could do a lot with that.

Hosting a static website is as simple as hashing the content and connecting to the network, that's 2 commands and anyone can see your content using their own node or any gateway.

You don't have to consider a domain, how you handle load balancing, implementing your own protocols, formats, reference system, nat punching, etc etc.

The way they handle dynamic data and peers is built on the same principles. A peer ID is just a hash that doesn't change if the peer suddenly changes networks, so you don't have to consider NAT, implement anything new, or keep track of the latest transport protocol in use.

It's basically whatever HTTP stack you like, but instead of URLs and servers, it's hashes and P2P.

That's all you need to know, you don't need to know the entire stack, and better yet you don't need to know the damn user environment, like can they connect to domain xyz directly, did a file change paths, etc.

Another way to think about it is probably just simply, what if instead of your OS asking your hard disk for data, you also asked the network for it, and could make reasonable guarantees that it's the data you wanted, whether it came from the disk, or someone else.

From a user perspective I think it's really convenient to have a worldwide-reachable chunk of arbitrary data, in 1 command, that progressively gets more reliable as real time goes on and development continues to add more efficient traversal, storage, etc., all transparent to users and developers.

Stagnation is prevented by using the same layering scheme as IP. Components can be added and deprecated without disrupting the whole system.

IPFS as a project is basically just saying "we should do it this way, because we can" and gluing everything together so it works that way.

You should be able to say this is the address of my data, and through some means that I don't even know, my packets will get to your machine if you request them. And that's not really an insane or improbable goal. A lot of these concepts are ancient, and already tried, but nobody has really tied them all together like this.

>I don't think it is fit for any useful storage except archiving, as (as far as I know) there exists no deletion or edit function, and the system tries to bloat itself forever.

In what way are you talking about? Locally or globally?

Locally, you just initiate garbage collection and it deletes the data. An edit is just adding the changed data and deleting any orphaned children. Like diffs in git, an edit should only consist of the differing bytes if the file is chunked; it's not as if every commit duplicates the entire file that was changed.

Globally, there isn't a delete, something becomes unavailable when everyone deletes it locally or is disconnected from the rest of the swarm. I personally see this potential permanence, as a benefit.

>I'd rather rent anonymous decentralised storage for money, with the bonus that I have a guarantee that the files will not just be lost

Literally their sister project.


>there is literally no incentive for nodes to participate in IPFS, so you can expect most nodes to actually come from the NSA for surveillance purposes

That doesn't make much sense to me. What would they surveil, unless they were hosting content for people on nodes determined to be the best, which probably means geographically closest and/or fastest?

At most they could determine that some peers downloaded hashes X,Y,Z from their nodes.

The NSA would likely have to act as a CDN for whatever they want to surveil, and be the best CDN on top of that, to be chosen by clients.


File: 1feb5620a6939a0⋯.jpg (127 KB, 1024x640, 8:5, faster-content-distributio….jpg)

File: d29464c3c5810ba⋯.png (345.15 KB, 720x2000, 9:25, 07c60fb2243e97e8a1ca69bc2d….png)

File: f0745aa611390d4⋯.png (126.16 KB, 544x400, 34:25, stack.png)


File limit.

I wanted to post the rabin chunker, but may as well post these too.



Filecoin vs Storj vs Sia vs Maidsafe



>Are you talking about 700 megabytes of bandwidth?

Yes. And it is still going all these hours later. Not as much but still a lot. High idle bandwidth usage has been listed as an issue for years now and it is still shit.


File: a89c87f8a9509ef⋯.jpg (78.11 KB, 344x891, 344:891, 0aaf7cfeb11ecbcab25878214b….jpg)


I'm not familiar with them all. I remember being interested in Maidsafe, but at the time, nothing was released. A lot of the Filecoin information was already published when I started reading about it, I forget if this was around the same time or later.

Maidsafe would have the name advantage if they changed it to maido-safe though.

Can you tell me about the others?

In any case, I like the concept, regardless of implementation.

Being able to utilize my free space and bandwidth, until I need it, is pretty appealing to me. Not even for profit per se, but the token offer alone, valid for at the very least, guaranteed storage, is very appealing to me.

Everyone should be able to participate in this market, and more importantly, everyone should have access to store and retrieve data reliably.

Having autonomous consensus mechanisms and mathematical proofs managing all of this, is what makes it so reliable, especially when coupled with all the libp2p stuff from IPFS.

On 1 side you have a system for connecting peers/nodes together through any means, and on the other you have a set of functions around verifiable distributed data storage.

I like it.


It's probably the DHT; I think there are issues with it right now. In one of the previous threads someone was talking about a new non-Kademlia approach, but I forget if they were just mentioning it or if it's something they plan to implement.


Until it's fixed you could try


I think this makes it so you only make DHT requests.

I wonder what specifically is causing it to be so high compared to other networks.



>Until it's fixed you could try

Well anon knowing IPFS it will be a decade until it is ready.


File: ee7f48614bbaed2⋯.png (199.81 KB, 306x480, 51:80, ClipboardImage.png)


Better late than never imo. As someone who has been interested in these concepts for as long as I've been on the internet, I'm glad to finally see someone producing, even if it's slow.

For years we've had nothing but speculations and white papers without implementations. Multiple projects have died without shipping anything usable. I can use IPFS today and it's been improving regularly over time.

Being recognized by Mozilla, Google, etc. and being supported in things like Firefox, Chrome, Brave, etc. make me believe that this will actually still be around in a decade.

It's not like we have any alternatives anyway. I don't trust any of the proprietary BitTorrent Inc. projects and I haven't seen another project like this, that isn't strictly for experimentation, unsupported vaporware, or practically useless.

We have Zeronet which people point out the flaws of every thread.

Or things like "dat" which say they're decentralized but then rely on you to connect to a custom DNS server that they host for content resolution. And custom browsers like beaker.

I still need to look into urbit but that may serve a different purpose. If anyone wants to give me the quick rundown I'd like that.


Don't bother. IPFS has been considered useless to those who truly know what is actually going to happen.

Read >>997341 for example. More and more people here are taking the blackpill and accepting that nothing we do will make us able to escape the eternal kikery.

Don't bother fighting back anymore. You will either take it up the ass like the rest of us, or take your own life. We lost. We're done forever.


File: 826b03241ce3ce4⋯.jpg (69.42 KB, 204x299, 204:299, 1447821035542-2.jpg)


>You will either take it up the ass like the rest of us, or take your own life. We lost. We're done forever.

>please take it up the ass like me

Gives a whole new meaning to asses and elbows.


File: 054bf34172a36fc⋯.jpg (48.26 KB, 640x640, 1:1, 1543880117368.jpg)


don't listen to the blackpill jew, the future is not yet lost

only the people themselves can bring it back now

do whatever you can to act and make the world a better place!


File: 6a81ea3a0d5ec3a⋯.jpg (75.98 KB, 796x664, 199:166, 6a81ea3a0d5ec3ae904cbfffb1….jpg)


By default, there is a web GUI at http://localhost:5001/webui, where 5001 is your IPFS API port. Note that localhost is hardcoded into the webui, so if you're trying to use the webui remotely I'd recommend SSH tunneling ports 5001 and 8080.

Also please move your old windows boxes as far away from the internet as possible


File: eb2304bf8e86796⋯.png (589.29 KB, 500x537, 500:537, 1440667487310.png)

Could we replace Tumblr and all those other lame blogging platforms with IPNS sites and RSS/atom feeds?


File: fe7c853a831f758⋯.gif (824.55 KB, 266x199, 266:199, fe7c853a831f758bcb0cb111f8….gif)


Also, since there's no documentation for this, I'd like to point out a peculiarity in file chunking. The default chunker (size-262144) tries to split the file into pieces <= 256 KiB in size. However, it also limits the number of links an object can contain to 174. I think this was called MaxChildrenPerNode at some point, but I can't find any recent references to that name.

This means that if you try to add a file larger than 174 × 256 KiB (about 43.5 MiB), you will end up with a nested structure of IPFS objects pointing to other IPFS objects. IPFS seems to try to fill each level of the tree before adding new levels.

As an example, when I tried adding a 2GB file, I ended up with an object containing 43 links. The first 42 links each pointed to an object pointing to 174 objects of size 262144 (plus 14 bytes of protobuf wrapper), while the last object pointed to 81 "full" data blocks and 1 "partial" data block.
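The nesting arithmetic above can be checked with a small helper. This is a back-of-the-envelope model of the layout (function names are mine; the real DAG builder also carries a few bytes of protobuf overhead per block, which is ignored here):

```python
import math

CHUNK = 262144   # default chunk size (size-262144)
FANOUT = 174     # max links per object, as observed above

def leaf_count(size: int) -> int:
    """Number of raw data blocks a file of this size is split into."""
    return math.ceil(size / CHUNK)

def level_count(size: int) -> int:
    """Number of tree levels above the leaves needed to link every chunk."""
    n, levels = leaf_count(size), 0
    while n > 1:
        n = math.ceil(n / FANOUT)
        levels += 1
    return levels

# A file up to 174 * 256 KiB (~43.5 MiB) fits under a single root object:
assert level_count(174 * CHUNK) == 1
# One byte more forces an extra level of indirection:
assert level_count(174 * CHUNK + 1) == 2
```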


File: ee530e7bbfb8bf4⋯.jpg (62.05 KB, 480x480, 1:1, ee530e7bbfb8bf44c4df105a45….jpg)


Yes, but the entire reason people use lame blogging sites is that they're too lazy to run their own websites. The technical know-how required to set up a /comfy/ RSS-IPFS system is astronomically greater than "log into _, then use the WYSIWYG editor."

Honestly, my main hope is that ipfs displaces torrents as an easy means of redistributing content. The whole "I changed a letter in one of my filenames so now there's two separate swarms of seeders" retardation needs to stop. All the different chunking options give me pause though, as they have the same effect.



With some normalfag-friendly application or web UI, you could boil it down to "log in with your key, then write a post/attach files and upload." Popular content would load faster as more people view and reblog it, and you wouldn't be limited to a centralized, proprietary platform's whims.

The big disadvantage to this over Tumblr and other blogging platforms is the lack of a search function or asks. This does mean pedophiles couldn't flood service-wide tags or search terms with CP, but it would make finding blogs outside word of mouth or a future IPFS search engine harder. It would also eat up precious data for phoneniggers and other burgers stuck with data caps.



>ipfs displaces torrents as an easy means of redistributing content.


>I changed a letter in one of my filenames so now there's two separate swarms of seeders" retardation needs to stop

And how is IPFS going to fix this?

On BitTorrent I just update my RSS feed and let my followers update. The RSS article is OStatus-compliant, so it's self-contained, containing multiple checksums and a signature.

*nix: do simple things well.


You just described Scuttlebutt.



>And how is IPFS going to fix this?

Read the OP, faggot. IPFS really, really hates file duplication and changing filenames doesn't change file hashes.
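The point about filenames can be made concrete. A hypothetical sketch of the two addressing styles — the `\x00` separator below just stands in for BitTorrent's info dictionary, which really bencodes the filename alongside the piece hashes:

```python
import hashlib

def content_id(data: bytes) -> str:
    """IPFS-style: the identifier covers only the bytes themselves."""
    return hashlib.sha256(data).hexdigest()

def infohash_style_id(name: str, data: bytes) -> str:
    """BitTorrent-style: the hashed metadata includes the filename,
    so renaming a file produces a different swarm."""
    return hashlib.sha256(name.encode() + b"\x00" + data).hexdigest()

payload = b"some release"
# Renaming never changes the IPFS-style identifier:
assert content_id(payload) == content_id(payload)
# ...but it does change the torrent-style one, splitting the seeders:
assert infohash_style_id("a.mkv", payload) != infohash_style_id("b.mkv", payload)
```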


I'll look into this.



I read IPFS Golang, and JavaShit. It's kike software.

They have no solutions, only begging implementations.

I'm not sure how ANSI C code in libtorrent can beat Google Spysoft.






>and let my followers update

Swarm deduplication should be automated in the standard by default, not left up to clients to implement (or not), like in the case of Bittorrent.

Going through the trouble to automate 90% of the work seems like a waste of time when you can just have it 100% automated in the majority of the network rather than the minority.

There's no circumstance where it makes sense not to do this. If you're trying to get a hash, why care where it comes from as long as it's valid?

If you're publishing data, the intent is already to share the data with random peers.


>a gui


Like this or do you mean something else?


Why not separate the specifications from the implementations?

There's multiple implementations on github that are written in C. You could probably re-use a lot of existing libraries because of how they separate systems >>1000218

This is like people claiming i2p is somehow bad because the reference implementation is in Java, despite a functioning C++ variant also existing.

You'd better spend your time writing the implementations you want, rather than complaining about the ones you dislike. The specifications are there. The designs are already done and you have multiple reference implementations.

You're not a nodev are you?



>With some normalfag-friendly application or web UI, you could boil it down to "log in with your key, then write a post/attach files and upload."

This looks like it might be that



>All the different chunking options give me pause though, as they have the same effect.

Thankfully the default is a fixed size and not "auto" like in most torrent clients, so it shouldn't be an issue unless people intentionally change the chunking method when adding files.
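Why chunking parameters matter for swarm sharing can be shown directly: the same bytes chunked two different ways yield disjoint sets of block identifiers, so the swarms can't overlap. A toy sketch using plain SHA-256 in place of real CIDs:

```python
import hashlib

def chunk_ids(data: bytes, chunk_size: int) -> list[str]:
    """Identify each fixed-size chunk of the content by its hash."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

data = bytes(range(256)) * 8  # 2 KiB of repetitive sample data

# Same bytes + same chunker -> identical block IDs, one shared swarm;
# repetition also dedupes: every 512-byte chunk here is the same block.
assert chunk_ids(data, 512) == chunk_ids(data, 512)
assert len(set(chunk_ids(data, 512))) == 1

# Same bytes + a different chunker -> disjoint block IDs, a split swarm:
assert set(chunk_ids(data, 512)).isdisjoint(chunk_ids(data, 1024))
```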




File: aa6dcb54cc5ff6e⋯.jpg (59.56 KB, 640x625, 128:125, 1527905399560.jpg)


After seeing >>1004674

I take back what I said here >>1004647

>implementing something in Python when ANY of the other implementations exist

What I should have said was, we shouldn't diverge into language wars. It's not really critical to the concepts and can change at any time if people find enough merit in the concept, to warrant the effort of implementing it.


No Jai, no buy. Obvious trash.



>should be automated

Ah, you have no respect for security or privacy.

>you can just have it 100% automated in the majority of the network

Seems you've never installed your own tracker.

>This is like people claiming

I don't sell projector screens.

>You're not a nodev

What's your github account? I don't use Social Networks, esp. Microsoft ones.


I'm fine https://www.libtorrent.org/projects.html


Sell me IPFS when we're making history:




I get the impression you're being insincere for the sake of attention and are not actually interested in discussing this. Maybe you should use social media instead of trying to do that here. Or try /g/.


File: e6230431b0a0242⋯.png (1.58 MB, 606x248, 303:124, mfw you're still using jew….png)


>Ah, you have no respect for security or privacy.

Renaming a filename without changing the hash is disrespecting your security and privacy?



Renaming a file*


File: f73eaff8fd1e026⋯.png (21.65 KB, 1342x175, 1342:175, oy vey.png)


How is swarm merging in concept a breach of security or privacy?

Security is completely irrelevant, and privacy can be maintained in the same way you would if swarms were not merged. It's exactly the same as BT except automated.

If you want privacy, use anonymous routing or a private network.

>Seems you've never installed your own tracker.

What do trackers even have to do with this and what protocol are you even talking about? KAD, Bittorrent, and IPFS all use DHT for this, not trackers. Merging is handled clientside for all.


IPFS seems to have actual implementations for a lot of these concepts. What are you trying to say by posting an advertisement to a conceptual "Future internet".

If something does come out like that, what difference would it make, since it's content addressed? All the IPFS developers would need to do is figure out a way to resolve NDN hashes and fetch content from their nodes. That's the whole point of IPLD.

There are already implementations to add support for native hashes of various other formats and networks.


I'm not really confident in some research project that hasn't yielded anything since 2006 suddenly being relevant.

As for IPFS you can already work with git commits over HTTP, and ethereum blocks over whatever their network is. And Bittorrent is on their planned list.

I love the idea of having 1 program handle all my hashes/magnets. It's much better than having multiple different clients open, connected to multiple different networks independently.

If I request data, I want the program to get that data, through any means, over any network, using whatever URN is considered optimal that day.

The whole benefit of IPFS is that these things can change.

How routing is done, what hash algorithm is used, what chunking method is used, what encryption method is used, what transport is used, etc. can all change, but the interface stays the same: "ipfs get URN", "ipfs add ~/dont_click/boku_no_pico.xvid"

Maybe someone sent you a bittorrent hash and they're only hosting over i2p, the IPFS program should eventually be able to resolve that. Maybe a fork already does.

What's so great about NDN and why should I wait more decades for it when this exists today?



Full automation: misleading bit, download the whole repo.

You want semi-automation: the more you can control, the better, even if it runs mostly by itself.


>Security is completely irrelevant

Yeah, I concluded from you as much.

>except automated.

A windmill is also automated. Does it work like a windmill?

Does it not need my intervention if I made the typo?

>trackers even have to do with [Automation]

A lot. It's what you are claiming trackers don't have: a way to publish updates, even for typos.

>actual implementations for a lot of these concepts

We have prototypes, empirical evidence, and scientists working on this. Not hobbyists that wrote a thesis paper.



>IPFS developers would need to do

Nodev, got it.

>some research project that hasn't yielded anything

It's OK to ignore the 10 years' worth of work on CCN, and its working implementations.

Reminds me of ZeroCoin.

>1 program handle all my hashes/magnets

Be sure to run it in ring 0, so you can't expect any breaches.

This is the systemd of statements.

>What's so great about NDN and why should I wait more decades for it when this exists today?

The same question Linux faces with desktop adoption and display server support: lack of support.

Bittorrent works here and now and has a proven, solid track record, while IPFS still begs to complete its not-so-sound ideals.

It's snake oil, and you don't even see it.



IPFS already has the OpenBazaar fork with Tor/I2P support, but it has not been mainlined, as the development is slow as shit.


> Libtorrent has no de-dup function

> Shilling NDN, which is in a worse state than IPFS and Dat Protocol




Yeah, that's what I thought of IPFS when it was announced, and thanks to you, even more refined.

Anything IPFS says it wants to do, Bittorrent software already provides, and more.

But we're done with this conversation, Nodev.

In the next ten years, we'll see Bittorrents on post quantum crypto, making it impossible to spy and tamper, unneeding TOR & I2P.



> we'll see Bittorrents on post quantum crypto, making it impossible to spy and tamper, unneeding TOR & I2P.

Well you are now the nodev.


File: 2a1c02a94a9ed02⋯.jpg (51.65 KB, 436x432, 109:108, 2a1c02a94a9ed0257ee626535f….jpg)




File: f0745aa611390d4⋯.png (126.16 KB, 544x400, 34:25, donut steal.png)

File: d552e926579b143⋯.png (33.33 KB, 1383x163, 1383:163, zn.png)

File: 19fd2dcd5ecb7ec⋯.png (32.86 KB, 658x174, 329:87, lel.PNG)


>I concluded from you as much.

I feel like you're being dishonest on purpose.

If you're going to pretend to be an expert here you should know that security is a separate layer irrelevant to peer management in practically every P2P system that still exists. It just wouldn't work any other way. This isn't even IPFS specific, as you should be aware.

>A windmill

Please be direct; don't try to use analogies, or you're going to make things more convoluted.

There shouldn't be anything confusing about the automation of swarm merging, if there is, just ask directly.

>you're claiming trackers don't have

My claim is that it's implementation specific and requires coordination from multiple parties (tracker and client devs, as well as users choosing to use those extensions) to actually see benefit, as opposed to it being implicit.

I see no reason this shouldn't be the case.

You had concerns about security and privacy but there isn't any reason to be concerned, those are irrelevant to distribution. Security around these systems is basically a solved problem and we have multiple anonymous networks to choose from to handle that, with i2p probably being the best example today.

>and scientists working on this. Not hobbyists that wrote a thesis paper.

I'm disregarding this as an appeal to authority.

These people are getting results and pushing out usable products, not siphoning academia funds while putting out whitepapers every few years.

>Nodev, got it.

That doesn't make any sense. You're the one interested in NDN, not me.

It would literally just be a matter of detecting NDN URNs, so I could probably do it if NDN existed.

>Be sure to run it in ring 0

I'm sorry but this is just inane garbage.

A single program is not a single component, and your CPU analogy isn't really relevant.

>Bittorrent works here and now, and a proven solid track record, while IPFS still begs to complete

What point are you trying to make?

I'm having a hard time believing your whole argument is based around being content with the status quo while simultaneously pushing a conceptual future internet academia moneypit.

More importantly, it doesn't have the distinct advantage we've been talking about.

Bittorrent clients are bittorrent clients. IPFS is a set of interfaces around distributed data publishing and retrieval. Nothing prevents you from using Bittorrent as your exchange instead of the native exchange, and torrent files instead of the IPFS internal format, if you wanted to.

That's the entire point, that it's flexible. Again, if you think NDN is such hot shit set to deprecate Bittorrent, nothing would prevent IPFS developers from using those concepts and networks.

IPFS itself is nothing but a set of specifications. The reference implementations are not bound by anything other than those, and they're intentionally designed to be modular.

>It's snake oil, and you don't even see it.

I don't see how it's snake oil or harmful in any way and you're not doing a good job of demonstrating how it is.

We're all already 100% aware that this isn't done yet, but that doesn't invalidate its merits compared to the current software out there. What has been finished is good, what's not yet finished looks good. You can't just say that the good parts are bad because the rest isn't finished.

If anything it should be a mark of embarrassment that these people are pushing out the most robust thing while others do nothing; all the concepts of IPFS are unashamedly taken from "proven" projects, most of which are older than BT itself.

Not to mention the comparisons to Bittorrent are so secondary.

In the scope of IPFS that's literally only the exchange layer. Long term they're competing with HTTP, so it makes more sense to compare it to that in functionality.

The nice thing here is that you shouldn't have to focus on the exchange layer, since it can change. All you should be worried about is what the URNs look like, and that you can issue a get request and somehow the data gets exchanged: maybe over IPFS, maybe over HTTP, maybe over BT, or, as we've talked about, an aggregate of everything available to you.

I'd appreciate it if you gave me legitimate reasons to avoid this project. I've spent a great deal of time looking into this and am only trying to see and prepare for what is coming next. Right now that looks like IPFS and for years of people posting these threads, nobody has convinced me otherwise. In the meantime several of the projects people shilled have fallen into unmaintained irrelevance, or had actual flaws pointed out AND exploited.

Nobody is doing that for IPFS and they've had more time to do so.



you're missing a couple of zeros



>integration with TOR

Doesn't tor get slow when people torrent over it? How would this even be implemented?


File: 59cb6c14df020b5⋯.png (114.83 KB, 600x367, 600:367, 1452412185136[1].png)


So this currently isn't released yet, but look at what they have planned for v0.34 js-ipfs. The ">" indicates a check, while the "<" indicates no check.


>Refactor files API

>Upgrade to latest ipld-dag-pb

>Refactor Object API

<--cid-base option

<CID version agnostic get and #1757


>IPNS over pubsub

>IPNS over DHT

<addFromURL, addFromStream, addFromFs

>Support for HAMT directories in MFS

The DHT isn't completed yet, but if you check this thread [https://github.com/ipfs/js-ipfs/pull/856] there are only two things left to do.



So when the DHT Implementation is complete, does that mean JS-IPFS and Go-IPFS can finally pull data from each other?



you forgot the nocopy bug


IPFS is bloat, just use DHT+bittorrent



Fuck off, IPFS' main draw is its per-file deduplication autism and bittorrent doesn't have that. You might as well call bittorrent bloat and tell everyone to use FTP.



>per-file deduplication

solved by dht

>You might as well call bittorrent bloat

bittorrent solves a problem, ipfs does not


File: 3103186c53afd8f⋯.gif (904.07 KB, 150x150, 1:1, Stop_posting.gif)


>solved by DHT

Bittorrent's DHT does not magically give bittorrent per-file deduplication, you stupid nigger. Even Dat's deduplication is limited to files within a specific folder and not files across the entire network.

>ipfs does not solve a problem

Yes anon, torrents breaking if you change a filename or sharing shittons of files with other torrents without sharing peers totally aren't problems and don't happen in the real world all the fucking time.



>the entirety of ipfs could be duplicated using bittorrent and symlinks

Nice work guys


File: c78c21f0ab6daee⋯.png (109.64 KB, 263x326, 263:326, disabled_1.png)


>he's so fucking stupid he thinks sharing peers for the same files across otherwise entirely different torrents is the same as bittorrent with symlinks

>this is what anti-IPFS fags actually believe



SHA1 is broken though



So just track each file separately and have the client bundle together related files? Bittorrent can do it


It's much easier to have bittorrent clients optionally upgrade to sha256 than it is to try and create a whole new network


File: 7faec2b405db758⋯.gif (1.4 MB, 400x400, 1:1, 1461702194059.gif)


>a separate torrent for each file

>assemble them into folders yourself

>god help you if there's a folder hierarchy

There's a reason no one does this. Maybe you'll figure out why someday.



Those are extraordinarily trivial problems which could be solved in a bittorrent client, you don't need a new fucking network for that


File: 643ea2991609da2⋯.jpg (63.71 KB, 417x600, 139:200, 76e8342a054acd41f93f458e66….jpg)


>lol just track each file separately

>what are folder hierarchies

>why would you need those

>let the client figure it out

>how dare you suggest a new protocol

Just accept that bittorrent and IPFS have different design goals before you make yourself look like an even bigger idiot.



Also, the

>you don't need a new fucking network for that

excuse is like shitters insisting on using HTTP for fucking everything even if it doesn't fit their use case. To make the existing bittorrent network work like IPFS would create an ungodly mess which takes away bittorrent's existing perks so it can ape another protocol.



>a torrent of torrents

>one text file in the top-level torrent lays out the folder hierarchy and location for all the other torrents included in the bundle

There we go, took me 30 seconds and I just implemented IPFS in bittorrent+DHT



And how would you replicate IPNS, oh wise perverter of bittorrent?




What if you torrent some music, and you want to change tags? Now the whole file is different and you can't seed it anymore.



The multitorrent clusterfuck still breaks if someone renames a file or changes music tags as >>1007512 suggested.



I've got no idea what that is



IPFS is file-addressed, that breaks IPFS too


File: 7599c8c443df595⋯.png (146 KB, 392x329, 56:47, 1442161580220-4.png)


>that breaks IPFS too

That's the thing

It doesn't

lurk moar



You can't seed to a swarm if you've modified your files, that's literally impossible. If you're talking about something to do with IPNS, maybe that's revolutionary technology, but as for IPFS itself there's nothing in it that can't be done with bittorrent



> It's much easier to have bittorrent clients optionally upgrade to sha256 than it is to try and create a whole new network

This will never happen. I had this conversation back when SHAttered happened, and nothing has changed since.

The only way to unfuck bittorrent is by abandoning it and creating something better.


File: f2210e12ef9e08a⋯.webm (1.36 MB, 1916x1032, 479:258, throw our heads back and ….webm)


>You can't seed to a swarm if you've modified your files, that's literally impossible

Read https://medium.com/@ConsenSys/an-introduction-to-ipfs-9bba4860abd0 , faggot. Get your mind around this idea: a file's contents can be separate from its metadata (including the filename and timestamp).



I'm not even going to read that, your proposition that you can seed into the same swarm with different files is absurd



How is it? If there's a document originator, he just announces that there was a change and others download the changes.



>he thinks metadata == file contents

Nigger, with a decent hashing system the file's hash doesn't change if you tweak the filename, timestamp, or other metadata. It's the same fucking file with different metadata so even if some other asshole changed his file's name, you can still peer with him because the hash and file contents are the same.
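The point can be sketched in a few lines (a toy model of content addressing; real IPFS wraps the digest in a multihash/CID, but the principle is the same):

```python
# Toy content addressing: the address is derived from the bytes alone,
# so filesystem metadata like the name never factors into it.
import hashlib

def content_address(data: bytes) -> str:
    # Bare SHA-256 standing in for a real multihash/CID.
    return hashlib.sha256(data).hexdigest()

peer_a = {"name": "song.flac",         "data": b"...audio bytes..."}
peer_b = {"name": "renamed_song.flac", "data": b"...audio bytes..."}

# Different names, same bytes: same address, so still one swarm.
assert content_address(peer_a["data"]) == content_address(peer_b["data"])
```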




This whole discussion is pedantic as fuck and worthless but this entire thread started >>1007512 with some downloader changing id3 tags in his music. That changes the hash, and creates a new swarm. Regardless, from the link provided, do versioned files even work yet? The issue linked to from the medium post https://github.com/ipfs/notes/issues/23 is over three years old and still open. The only interesting part of this whole project seems to be IPNS. I will read into it some, but from what I've seen of IPFS, I find it hard to believe that IPNS is not some silicon valley-tier rebranding of a preexisting technology



Anon's id3 tags example is silly and likely wouldn't work, but changing filenames, timestamps, and similar metadata does not change the hash in IPFS or create a new swarm. Combine that with the ability to peer with people seeding a different folder with some shared files, and you have significant advantages over bittorrent when it comes to actually keeping torrents alive.




>versioned files

I don't remember that going anywhere.



I think so. They say this in the dht section

>It allows your IPFS node to find peers and content way more easily and it has full interop with Go IPFS nodes so y'all have the full IPFS network at your fingertips

I think before, you'd have to have an ipfs-go node set up a websocket connection in order to interact with one using js-ipfs. And since there wasn't a dht you couldn't really download content without knowing a peer that contained the content (could be wrong).


Not sure what that is. Don't see anything recent related to it for js-ipfs.



nocopy is an ipfs-go bug; search for it in the issues page.


File: 945f66d56a54009⋯.png (181.37 KB, 381x403, 381:403, 3a2ba100a4aa1d11f5841b01e4….png)


>that breaks IPFS too


File: 52c2d4122c30248⋯.jpg (87.78 KB, 916x681, 916:681, 52c2d4122c302483d80a39ce63….jpg)

Ok anons, here's my pitch for ipfs "meta files." The idea is that since directories are lightweight, you can distribute an ipfs directory hierarchy as raw blocks. This way, if the directory blocks are forgotten, you can still recover some if not all of the files.

My current idea is to have a flat tar file containing an "index" file (which contains the master hash) and raw blocks named <hash>_block. Then to import the directory structure, you run `ipfs block put < block` for each block, retrieve the master hash from the index file, and proceed to use the master hash as you normally would.

Any suggestions on the file format and general premise are welcome.
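A rough sketch of that proposed format (the "index" and "<hash>_block" names come from the pitch above; the hash strings are placeholders, and this is just the premise, not a finished spec):

```python
# Toy packer/unpacker for the proposed meta file: a flat tar with an
# "index" member holding the master hash, plus each raw block stored
# under "<hash>_block".
import io
import tarfile

def pack_meta_file(master_hash: str, blocks: dict) -> bytes:
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        def add(name, data):
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
        add("index", master_hash.encode())   # master hash first
        for h, raw in blocks.items():        # then the raw blocks
            add(h + "_block", raw)
    return buf.getvalue()

def unpack_meta_file(archive: bytes):
    master, blocks = "", {}
    with tarfile.open(fileobj=io.BytesIO(archive)) as tar:
        for member in tar.getmembers():
            data = tar.extractfile(member).read()
            if member.name == "index":
                master = data.decode()
            else:
                blocks[member.name[:-len("_block")]] = data
    return master, blocks

# Round trip; importing would then be one `ipfs block put` per block.
master, blocks = unpack_meta_file(pack_meta_file("QmRoot", {"QmA": b"raw"}))
assert master == "QmRoot" and blocks == {"QmA": b"raw"}
```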



>I see what you're saying, and I agree it could be done. I also recognize that this is not standard for BT. However, I still don't see the need for someone to make a protocol where this desirable feature is standard, when we could just stick with the existing protocol and make unofficial extensions that will only ever have azureus users connected because people insist on using minimal and or outdated clients with a subset of modern features

why nigga

How isn't moving to a format and network that has these things considered in the standard a better solution?

There isn't even a migration path. You literally just add the directories that have your seeding content in it, and you're done. Migration complete, and if everyone did this all the swarms would merge.

Now think of how bad it is moving from 1 bittorrent client to another.

>oh man macroTurret just came out and it has this feature I want but my client doesn't have, I'll just migrate from 1 non-standard to another easily!

It's ASS.


>IPFS is file-addressed

IPFS is content addressed, that's the entire advantage. Any named content you see is metadata that's part of the node.

Notice that the "Hash:" value is different from the object I'm requesting. The one I'm requesting contains the metadata and a reference to the actual data. Like a real networked filesystem, it separates metadata from data; but guess what, metadata is also just data, and you can content-address it too.

Bittorrent can't compete with this; the transfers may as well be fire and forget, unless you absolutely never modify the content or meticulously manage it yourself. Why would anyone want to bother with that when it can just be automated and built in?

The absurd amount of dead torrent files that are basically duplicates with an additional text file needs to stop.


>your proposition that you can seed into the same swarm with different files is absurd

Do you not understand how chunking works?

If you add a file, you chunk it and save the chunks; if you modify it and add it again, there will be duplicate blocks for the majority of the file or tree. The only thing you have to store to have 100% availability for both copies is the difference between them: the shared blocks are stored once, and only the delta takes extra space.
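A toy model of that block-level dedup (4-byte chunks and plain SHA-256 purely for readability; real blocks are far larger):

```python
# Toy block store: add an original and a modified copy; shared chunks
# dedup onto the same keys, so both copies stay 100% available for the
# cost of one copy plus the delta.
import hashlib

CHUNK = 4  # tiny chunk size so the example stays readable

def blocks_of(data: bytes) -> dict:
    """Map chunk-hash -> chunk bytes for fixed-size chunks of data."""
    return {
        hashlib.sha256(data[i:i + CHUNK]).hexdigest(): data[i:i + CHUNK]
        for i in range(0, len(data), CHUNK)
    }

store = {}
store.update(blocks_of(b"AAAABBBBCCCCDDDD"))  # original
store.update(blocks_of(b"AAAABBBBXXXXDDDD"))  # one chunk changed

# 4 chunks per copy, 3 shared: 5 unique blocks stored instead of 8.
assert len(store) == 5
```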

Read this


it's not a new concept, and everyone goes apeshit about ZFS for having it.

What if you could have that, but for network transfers too?

Why not use a tool that's more efficient with storage and bandwidth, for data transfers?

HTTP and BT need to be deprecated.


>Anon's id3 tags example is silly and likely wouldn't work

You could implement the MP3 format in the same way files are implemented.

Since ID3 is just metadata, you could literally just extend regular files to have fields like Artist, CoverArt, etc., with the Data field pointing to an audio stream or another container.

If that was done, the blocks would be separated and the data stream would always have the same hash/swarm. Since metadata is so small compared to an entire extra copy, people would likely seed the original anyway, since the difference is a few bytes, not a few MB.

You could set up something to output this ipfs mp3-object in whatever format, like JSON, and that would be really easy to support.

`ipfs object get <hash> --encoding=json` would give you the metadata with a link to the data.

Which seems really easy to add support for compared to other shit media players support. Like RTMP, DASH, etc.


I'm not 100% sure but I think that's how this works.
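A sketch of that hypothetical mp3-object (the Artist/Data field names are invented for this illustration, not an existing IPFS schema; hashes are bare SHA-256 standing in for CIDs):

```python
# Toy mp3-object: tags live in a tiny metadata object whose Data field
# links to the audio stream's hash. Retagging mints a new metadata
# object but never moves the audio block, so its swarm is untouched.
import hashlib
import json

store = {}

def put(data: bytes) -> str:
    """Store a block under its content hash and return the hash."""
    h = hashlib.sha256(data).hexdigest()
    store[h] = data
    return h

audio = put(b"...raw audio stream...")
v1 = put(json.dumps({"Artist": "anon",  "Data": audio}).encode())
v2 = put(json.dumps({"Artist": "other", "Data": audio}).encode())

# Two different tag objects, one shared audio block/swarm.
assert v1 != v2
assert json.loads(store[v1])["Data"] == json.loads(store[v2])["Data"] == audio
```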



File: 214813138c41d95⋯.png (5.87 KB, 996x60, 83:5, benis.png)


Did this trip a spam filter?



>instead of just extending bittorrent to do what I want, I'll implement my own standard from scratch

The absolute state of /tech/, lapping up this silicon valley shitware


File: c59fed65f6d2067⋯.jpeg (322.91 KB, 1920x1008, 40:21, 1 qNZwhkmhcqXyRVpFJ1qG5A.jpeg)


You forgot to provide a reason why I'd want to build on a legacy platform instead of dropping all the incidental shortcomings and designing something built with the concepts I want in mind.

There's nothing special about bittorrent that we need to cling to.

Not the hashing, the chunking, the dht, nothing.

You act like just because something already exists that we can't replace it with something better.

That's not how this has ever worked.

Did you forget how we got here in the first place? Should I send you the rest of this post via an ed2k link?

Regardless the best part about IPFS is that I don't have to implement it from scratch, and I don't have to extend BT either. It already exists and is in active development. It gets better while I sit on my ass. So what's the problem and why should I care about bittorrent at all?

I've yet to see a reason to avoid using IPFS. And I don't think my life would be any better if I tried doing the same things I do with IPFS, with bittorrent.

Not sure why you feel the need to defend BT. As if I'm not using both as well as others.

It's not like once you install ipfs you have to uninstall your torrent client.

Not sure what your end goal is or what point you're trying to make.

It just sounds like a bunch of "stop liking what I don't like" or "it's new and I don't understand it so it must be bad".



It breaks the file folder and file hashes on the macro scale, but not the data chunk hashes or hashes of the other files in the micro scale.



The original hashes don't break. The new hash is different but the old hash will continue to work.

When you change the metadata you're only adding those changes, so the original will still work as long as the metadata and data blocks are available.

It's only a few bytes, so there's no reason not to keep the original pinned alongside your modified copy if you're worried about permanence.

Look here >>1007901

you're basically just changing this struct for the directories that changed, and these are tiny, even when you include upward recursion on parent directories.

Anyone that lists the index is mirroring the metadata themselves until their next gc too, so even if you delete the original metadata, people can still retrieve the index from someone who has it, while retrieving the main data blocks from you.

You don't store 2 full copies of a file, to provide 2 full copies. You only need to know which blocks are in common and which are not.

But in any case changing something never breaks the original.
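That "the original never breaks" property can be sketched with a toy content-addressed store (directories as JSON blocks standing in for the real dag-pb format; everything here is an illustration, not the actual wire format):

```python
# Toy content-addressed DAG: directories are tiny blocks mapping names
# to child hashes. Editing one file mints a new file node plus new
# ancestor directory nodes; every old hash keeps resolving as long as
# its blocks still exist.
import hashlib
import json

store = {}

def put(data: bytes) -> str:
    h = hashlib.sha256(data).hexdigest()
    store[h] = data
    return h

def put_dir(entries: dict) -> str:
    return put(json.dumps(entries, sort_keys=True).encode())

a = put(b"file a")
b = put(b"file b")
root_v1 = put_dir({"a": a, "b": b})

b2 = put(b"file b, edited")
root_v2 = put_dir({"a": a, "b": b2})  # only the root and "b" changed

# The new version never broke the old one.
assert root_v1 in store
assert store[json.loads(store[root_v1])["b"]] == b"file b"
# "a" is shared, untouched, between both versions.
assert json.loads(store[root_v1])["a"] == json.loads(store[root_v2])["a"]
```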

Just like how, if you make a new magnet link, the old one still works as long as people still have the data. The issue there is that magnets will only download from peers in the same swarm, which means only the people with the exact same metadata/torrent file. IPFS simply doesn't have this restriction: you can get any data from any peer, and in theory over any network they add support for, which will include bittorrent itself at some point.

If this happens you could not only swarm merge in the same network, but across networks too. So you could download X% from ipfs peers and Y% from bittorent peers for the same content.

You'd have to know the hashes for both networks, but this is how in-network swarm merging already works.

And the magnet link format already allows you to specify hashes for bittorrent, gnutella, ed2k, and probably more so you could just post shit like this.


and feed it to a single client and let it sort it out.

As it should be imo. Let the download manager manage the downloads and figure out where the data is and how to get it. I shouldn't have to know ANY of this bullshit but I do because it's all terrible.
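Pulling the per-network hashes out of such a magnet link is straightforward; a sketch (the hash values are placeholders, while btih and ed2k are real URN schemes magnets carry in their xt parameters):

```python
# Parse every exact-topic (xt) hash out of one magnet URI, keyed by
# URN scheme, so one client could chase each hash on its own network.
from urllib.parse import parse_qs, urlparse

def exact_topics(magnet: str) -> dict:
    """Map each urn scheme in the magnet's xt params to its hash."""
    query = parse_qs(urlparse(magnet).query)
    topics = {}
    for xt in query.get("xt", []):
        # xt looks like urn:<scheme...>:<hash>
        _, *scheme, digest = xt.split(":")
        topics[":".join(scheme)] = digest
    return topics

magnet = ("magnet:?xt=urn:btih:deadbeef"
          "&xt=urn:ed2k:cafebabe"
          "&dn=example")
assert exact_topics(magnet) == {"btih": "deadbeef", "ed2k": "cafebabe"}
```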



>You don't store 2 full copies of a file, to provide 2 full copies.

This is important. I only know of 1 BitTorrent client that does swarm merging and it forces you to make duplicate files on disk.


I must have hundreds of gigabytes of duplicate data in different temporary directories (to avoid name conflicts) from the manual merging process of multiple seedless torrents.

My options are to

- manually find and delete duplicates, removing myself as a seeder which is bad for the network

- symlink the files myself by hand, still leaving me the problem of storing them in a unique directory and spending time on it

- waste disk space keeping the duplicates. Which is not efficient.

I'd rather the filesystem take care of this for me, and IPFS does.


>You act like just because something already exists that we can't replace it with something better.

>That's not how this has ever worked.

>Did you forget how we got here in the first place? Should I send you the rest of this post via an ed2k link?

Well.. you are right, something always replaces the old shit.

However! You have it all backwards. ED2K is better than Bittorrent, and Bittorrent is better than IPFS. Bittorrent still hasn't caught up with the capabilities of ED2K/Kad. It was fun to watch the development.

Bittorrent now has PEX!

err.. come on grandpa.. we've had that shit forever.

Bittorrent now has DHT!

phew.. cool so that's what you call your Kad implementation.

Bittorrent has.. well still does not have distributed file search..

Ha! Move along fucker. The donkey has been stale for a decade and you still can't keep up.

IPFS.. hmm.. you have TOOLS to mount it and an HTTP proxy. Also some sort of revision system? Wow. Tell me more. Oh, no wait, I think even bittorrent built git over torrent. Hell, the donkey even pioneered using your fucking DAG for error correction.

Tech is dead. Showmanship isn't.

Donkey is king.



I'm too young to remember edonkey, but why did it die? If you were to build a new filesharing application today would you use ed2k?


File: 2d2cc970a22f557⋯.png (6.81 KB, 499x116, 499:116, in current year 1 Anon was….png)


You're conflating the protocols with the implementations.

You can't say ED2K is good when talking about ED2K/KAD. Those are obviously 2 different things.

You have to remember that if you wanted to use the KAD network in an ed2k client, you had to migrate from one client to another to get it. And even though multiple clients eventually implemented a way to search KAD, it doesn't mean they did it the same way, so everything is not only network specific but often client specific too. Obviously edonkey clients are not going to find emule clients through KAD.

Look at this


And it got worse


Why do we have multiple global torrent rating systems? Does anyone actually want these networks fragmented?

Why even implement a chat protocol extension if it can only be used by people using the same client as you?

What do we even have, like 2 torrent libraries that people use to make torrent clients? And people can't keep feature parity across clients.

It doesn't have to be this terrible.

The whole benefit of IPFS is that the interfaces and formats are standard, and extensions should be transparent to users and application developers; extending the clients and network should only concern IPFS developers.

Not force people to switch clients, hash formats, or any other needless thing.

At the end of the day, the API directives "get multihash" and "post nodeid" are going to remain the same. Just like with HTTP through several versions.

"get uri", "post uri".

Your client is either going to be capable of resolving something or not, but if you write a protocol extension implementation in 1 language, for 1 client, you should be able to easily port it to another.

It's not tied to a specific program's SDK or hacked-up internals. It's tied to broader, higher-scoped specifications designed with modules in mind from the start.

Yes you could probably do all of this over bittorrent but there's no way to coordinate that. Anything you've already posted is nothing but evidence of that to me. We've had the time, and some people have done it for some of the clients, covering some of the networks.

Fragments of fragments of peers.

IPFS is doing it different. Trying to allow for cooperation between clients and networks, by defining standard interfaces between layers.

I can't see this as anything other than a good thing for a P2P network. At every level.

If you think bittorrent is good, I don't know how you can't be excited for something that's more flexible from the foundation up.

It has the seamless upgrade potential that everyone else has neglected.

If I had it my way I would migrate to something if I could have the promise of never having to migrate again.

If HTTP can get 28 years of use by accident, I can only imagine how long IPFS will last when longevity is actively considered and they have past systems to learn from.

You seem to have some problem with IPFS but I am failing to understand what that problem is other than it being incomplete right now.

Obviously appealing to past tech like bittorrent fails on me too for the reasons mentioned above in this post. It's the current best, that doesn't mean it will remain the current best or that we should build on top of it either. It helps nobody and is a bigger pain in the ass long term than nailing standards down and implementing them fresh short term.

That's my perspective on it at least. I'm still interested in your opinions on the matter though.

I have no personal attachment to IPFS so even though it would be disappointing, I can be convinced that I missed something and it's actually bad.

But that hasn't happened in the years we've had these threads and IPFS has only improved over that time and grown in popularity with people like Mozilla and archive.org.

It's a whitelisted protocol in web browsers. When do they ever do that?

You don't see people putting nntp:// into Firefox, but ipfs://hash is valid.

We have shit like webtorrent, and they still don't want to implement magnet handling natively, despite so many people using BitTorrent regularly.

I'm not saying it's guaranteed to succeed because of this, but I doubt it's going anywhere one way or the other. Despite people saying "memeware" forever.



I'm not sure how old you are, but if you're past your 30s you shouldn't even need to think about why encompassing solutions are always preferred in the long run. Nobody likes to deal with abstraction layers.


Who did this?




you did, faggot


File: a70df0e19b47d80⋯.png (50.49 KB, 1407x946, 1407:946, tinfoil.png)


Found it in the usual place and I'm wondering what the commenters are raving about.

It just looks like a blog platform. Are they just upset that articles can't be removed and instead targeting the name for some reason?

What's going on here


>>Is it a meme?

>You be the judge.



<thinks they are the only content addressable storage

i have judged


File: 12295fae1eb2a8d⋯.jpg (1.73 MB, 1268x1490, 634:745, __justitia_and_shiki_eiki_….jpg)

>you be the judge




N00b question.

I've installed ipfs. I can launch it via command line / terminal on different operating systems, etc.

So, how the fuck do I find anything worthwhile on it?

It's not like I can go to a search engine and type in something_cool.zip and it take me to it on IPFS.

I can load my own shit up in it, and see the hash, etc, but I want to access other people's cool shit, not just shit I've created.


File: 5e3a195dae092e5⋯.png (2.86 KB, 472x17, 472:17, wew.png)

File: c699eb480b62e0b⋯.png (163.03 KB, 1661x748, 151:68, ClipboardImage.png)

I'm cleaning out 4 "tmp" directories of shit I downloaded. I'm realizing the benefit of downloads being considered garbage by default.

If I was using IPFS for these, I could just run garbage collection right now and it would automatically clean up everything except the shit I wanted to seed, at least until I ran out of free space.

Waiting until I have TBs of garbage to manually sift through sucks ass. It was different when storage used to be precious and disk cleanup was frequent, not months or years apart.

I just found 4 copies of the same romset in different formats as well as duplicates from swarm merging.

500GB of something I said I'd seed for someone and forgot about.

And I still have more. This was a mistake.


>It's not like I can go to a search engine


People post hashes to video games in the /v/ share threads and wherever else you'd post a magnet link.

Some Anons here used to have little web pages with personal indexes of things. Almost like geocities pages that updated over IPNS.





>The Vulpine Club is a friendly and welcoming community of foxes and their associates, friends, and fans! =^^=

What point are you trying to make by linking a random conversation on furry twitter? Is this spam?


File: 7f43a6c34bd681c⋯.jpg (331.79 KB, 992x1403, 992:1403, __wriggle_nightbug_touhou_….jpg)



>sniffs the DHT gossip and indexes file and directory hashes

Very nice. Though I should start wrapping files in directories (ipfs add -w) so the filename actually makes it to the DHT.




> my unpopular program is much better because they get money a different way and are licensed differently

if dat's so good, why didn't it get as popular or as widespread as IPFS?

> inb4 muh VCs influence

fuck off. you should know that dat is BSD licensed, compared to IPFS's MIT license, so both are free-software-respecting licenses



>random masodong namefag

Fuck off.


>if dat's so good, why didn't it get as popular or as wide-spread as IPFS?

Dat is closer to a bittorrent replacement with a built-in VCS for updating your torrent. It lacks IPFS' deduplication autism, aka the reason most people are interested in IPFS in the first place, so many of IPFS' more interesting implications and uses flat out wouldn't work on Dat.



>Exactly, but it changes the fact that if we use it and collaborate on its ecosystem we make easier for them to continue with their business model.

If there are other alternatives, it's better to focus on them instead of in VC funded software.

>IPFS is always going to be able to copy what we do, but they will need to put some effort on that, instead of using us as free labor.

I'm really surprised to see someone post something like this openly from a Mastodon person.

Isn't this just intentionally being spiteful to a software project solely because you don't like the people who make it? And you don't like those people because they have funds and are partnering with other funded software projects who also release free software and services?

How is this going to help Mastodon users or anyone for that matter?

Not adopting a platform that you admit is likely to become dominant, because the developers get paid.

Nobody complains about this for any other open source software project that I know of.

The main argument BSD users like to throw around is that FreeBSD developers get full time salary from Apple and Sony. They treat it as a good thing and I don't see how it could be anything else.

Literally what's the fear? I'd much much much rather use software from someone who is getting paid than from someone who has to be supported some other way, probably through ads or selling user data.

If protocol labs wants to trick /biz/ out of their money with cryptomemes why should I even care if I get distributed anime pictures as a result?

Who loses?


Would people use an IPFS tracker that functions like a BitTorrent tracker? It would periodically keep track of how many online people have pinned specific file hashes. To simplify things it would recursively track the items in subdirectories of a given directory hash rather than tracking directories directly. Users could recursively query directories so a client could easily summarize the seeder count for each subdirectory and each individual file.
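The tracker idea above boils down to two bits of bookkeeping: count pinners per file hash, then roll that up per directory. Here's a rough Python sketch of just that logic; the peer names, hashes, and data shapes are all made up for illustration, and a real tracker would get its pin reports over the network.

```python
from collections import Counter

def seeder_counts(pins_by_peer):
    """Count how many online peers pin each individual file hash."""
    counts = Counter()
    for pinned in pins_by_peer.values():
        counts.update(pinned)
    return counts

def directory_summary(dir_tree, counts):
    """For each directory hash, report the minimum seeder count among its
    files: the weakest link decides whether the directory is retrievable."""
    return {d: min((counts.get(f, 0) for f in files), default=0)
            for d, files in dir_tree.items()}

# hypothetical pin reports from three online peers
pins = {"peerA": {"Qm1", "Qm2"}, "peerB": {"Qm1"}, "peerC": {"Qm1", "Qm3"}}
tree = {"QmDir": ["Qm1", "Qm2", "Qm3"]}

c = seeder_counts(pins)
print(c["Qm1"])                     # 3
print(directory_summary(tree, c))   # {'QmDir': 1}
```

The min-per-directory summary matches the "minimum amount of online seeders" a later post asks for.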


I try to upload and then it says

Error: read tcp> use of closed network connection

What does this mean?



>an IPFS tracker that functions like a Bittorrent tracker

Something like https://github.com/DistributedMemetics/DM/issues/2 ?



No, not an indexer (I already talked to him/you). I mean like a BitTorrent tracker in how it keeps track of available peers and seeders for a specific torrent. It would let people know the minimum number of online seeders available for files on IPFS.



What would this provide over the DHT? If you're just wanting to find how many people have a file, you can already do that with `ipfs dht findprovs <hash>`.

Note that there is no way to know if someone has pinned a file or just has it in their cache.



Wow, the DHT got a lot faster. I got results in 3 minutes compared to over 10 minutes when I tried that last year. As you said, it would be a more conservative but reliable number since it would only include people who have actually pinned the file in question. I figured instead of people waiting for DHT results they could query the tracker database instead. And no, I wouldn't want to cache the DHT results since they include files in people's cache, which is unreliable.



Just in case any JShitters like me are having trouble connecting to web peers, check above.


File: b45f7dc9750d747⋯.jpg (54.73 KB, 460x574, 230:287, 1d8bc3504fe9beebe7f03f91e3….jpg)


>still no I2P support in IPFS

>devs claim they won't add Tor support until go-ipfs gets a full security audit (https://github.com/ipfs/notes/issues/37#issuecomment-444685370)



File: 9b8f70c457c6608⋯.png (33.1 KB, 819x505, 819:505, dickhead.png)

File: 827f5c8cbf86951⋯.png (140.16 KB, 831x612, 277:204, we.png)


>the guy in that thread being buttblasted because they didn't adopt his code yet

Doing anything else defeats the whole purpose of using Tor. Is he really upset that they're refusing to integrate his unaudited anonymous routing method into their own unaudited p2p network until security experts have looked at them both?

Anyone that has a use for Tor wouldn't trust either until this happened anyway.

How can a Tor developer be this blind? There's something (((glowingly))) suspicious about them being so insistent and inflammatory.

>It's Tor not TOR.


>At least Open Bazar is using the code we wrote.


>I explicitly told Juan that I was concerned about the prospect of writing a bunch of code for free that will then not get used.


>communist flag in their profile on an open source development platform

Why even be mad when the silk road replacement is using your code anyway? As if something that large for Tor counts as "not used".

As if people can't literally just use the bazaar fork right now.

As if that entire thread doesn't have multiple IPFS developers saying it needs an audit before the work was even done.

If this shit isn't provably secure, people that need it could actually die. Why try to rush it? When is that ever a good idea for security?

Regardless of anything else this dude seems like a dickhead manbaby. Especially for saying this shit.

>>At least Open Bazar is using the code we wrote.


Looks like he shat something out and someone else fixed it up and maintained it. Now he's trying to take credit.

Later in the thread he can't even figure out how to handle import paths, yet he lists himself as a senior engineer on LinkedIn.

If there was a /tech/ equivalent for /cow/, this man would be on it.



I don't mind it not having Tor support, but I2P would be nice.


Maternal insults are lame.




>Maternal insults are lame.

Does "immature" fall under this? "arrogant" seems appropriate. I stand by what I said though, it's childish behavior.

>I don't mind it not having Tor support, but I2P would be nice.

Sorry, I got off topic.

It just makes me mad that Tor users are already seen as criminals and pedophiles, and then we have people like this trying to push the project on top of that. It doesn't seem helpful. Although my complaints aren't either.

I would like to see these integrated but I'm not in any rush for it either. To me it doesn't matter if they support it if they can't guarantee anonymity yet. But I don't see what that dev is complaining about since if you don't have this opinion, you can still use it today. And apparently multiple projects and people do.





>It just makes me mad that Tor users are already seen as criminals and pedophiles

Because the TOR devs are a bunch of fags who deserve no sympathy https://twitter.com/torproject/status/898256109789687808

more evidence for IPFS+I2P, not BitTorrent+TOR



>Because the TOR devs are a bunch of fags who deserve no sympathy

Their work in favor of privacy is unmatched and that alone deserves sympathy.

A dumb opinion doesn't undo it all, especially as they don't act on it.


File: 5323181e55a66ac⋯.png (21.54 KB, 991x468, 991:468, SSL_fail.PNG)


All the certs for archive.is, archive.fo, archive.today, etc... seem to be bad.

It also appears someone might be trying to hijack their domain name


I think having a similar service using IPFS over i2P would be perfect.

Are either of these projects mature enough to have something like this?

Does anything like this already exist?



Loads fine for me. Maybe your security settings are too conservative.



Could someone be trying to man-in-the-middle me?



You may have the wrong date set. Check whether the date on your computer is accurate.


Somebody tell me if this shit will do anything resembling a normal file system, or is everything just an object because some developer was weaned off his mom's tit on Java and Soy.

And yes, yes, I have read the fucking docs. They were written by someone who wears a pussyhat.




>on Java

Object doesn't have to refer to objects from OOP.


File: 9e6a75ed2971ea8⋯.png (404.4 KB, 878x842, 439:421, 1420885648354.png)


>he's read the docs and still doesn't know this

Soy or not, something is rotting your brain if you haven't figured this out yet. Yes, you can use it as a filesystem and mount it through FUSE. You're especially retarded if you expect deleting a file on your end will magically keep others from seeding it.




ipfs is not meant to be anonymous; it uses a DHT, so nodes are essentially broadcast. What it /is/ useful for is making content hard to censor, since that would require taking down all the bootstrap servers (which you can patch with your own) and the clients that host the content.

To anonymize it you simply need to tunnel the traffic over a network interface (actual or virtual) that is secured.



Let's also mention that he doesn't know what a merkle-dag data structure is, or that it's also used to de-duplicate content and optimize for speed.
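The dedup part is easy to see in miniature. Here's a toy content-addressed store in Python: files get split into chunks, each chunk is keyed by its hash, so identical chunks are stored once. This only illustrates the idea; IPFS's real chunker, CID format, and DAG layout are more involved.

```python
import hashlib

CHUNK = 4  # absurdly small chunk size, just for the demo

store = {}

def add(data: bytes):
    """Return the list of chunk hashes ('links') for this blob."""
    links = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        store[h] = chunk          # re-adding identical bytes is a no-op
        links.append(h)
    return links

a = add(b"aaaabbbb")
b = add(b"aaaacccc")       # shares the "aaaa" chunk with the first file
print(a[0] == b[0])        # True: the shared chunk deduplicated
print(len(store))          # 3 unique chunks stored, not 4
```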



>it is meant to be insecure

Ok. Into the garbage it goes.


File: 533e7221ca145d8⋯.jpg (47.76 KB, 506x345, 22:15, 4739d9323dde77914fef53e7bf….jpg)


>program is designed to reuse existing anonymity layers like I2P instead of reinventing the wheel

>this somehow means it's "meant to be insecure"



>We are committed to providing a friendly, safe and welcoming environment for all, regardless of gender identity, sexual orientation, disability, ethnicity, religion, age, physical appearance, body size, race, or similar personal characteristics.

>see the hangout meetings between the devs

>literally nothing but white guys, no wimminz, kangz, nothing

>all this shit to appease faggots who dont even contribute to this




Ever since Tor started with muhnatzees I've been suspicious that they might have been compromised



Software projects constantly get into trouble for not having a so-called code of conduct in their repositories, so adding one could be merely a preventative measure. At worst, it's virtue signaling. You probably shouldn't let it bother you, although, if IPFS gets really big and starts being used at a massive scale the above-mentioned cock will likely be used by yids to purge bad goyim from the development team and install their own actors, similar to what's happening with Linux recently.



What happened to just telling annoying faggots to fuck off?



This is the laziest argument I've ever heard on tech, this is worse than "install gentoo" or "delete system32", seriously, get off the motherfucking soy you fucking faggot.



>getting mad because you're stupid



Defamation, blackmail, and rape allegations happened.


This thread has devolved into bickering about nonsense. I suggest we make an IPFS board. When was the last time any of you even shared something over it? I tried, but the size of the file I was uploading crashed my net.

Keep linking shit and stop arguing over nonsense. It's not meant to be anonymous, dumbasses. The CoC is pointless too since the only people using it and programming it are whites.



There's already >>>/ipfs/ . We used to have more sharing before /tech/ went to shit through cuckchan newfags and the mods not giving a shit.




Delete this before the bright hairs find out.


We don't need a whole board for this. /v/ has a constant "share thread" where it's not bound by the protocol or distribution method. They post ipfs hashes, magnet links, mega urls, vola rooms, and other things.

Someone could start one here if the board owner would allow it. But I'm not sure what kind of stuff /tech/ would even (want to) share.



File: 1d45e29f2034569⋯.jpg (17.66 KB, 240x280, 6:7, 1d45e29f20345699f87177c6dc….jpg)


This is why we need i2p. With Tor people can leech, and all traffic has to go through a few somewhat fast central nodes. With i2p everyone is a node just by using it. The more people use i2p the faster and better it gets. Tor just gets congested with more people.



IPFS+I2P through the Temporal project


Is there any distributed imageboard currently being worked on? What's your take on a proof-of-human-work blockchain? Captcha puzzles are generated collaboratively between recent miners, and none of them knows the solution



The first paper suggests rewarding betrayal to prevent collusion between captcha generators, thus involving currency. This is also an incentive to mine, which prevents attackers from taking over. There must be a way; it could be fun to watch anime directly in an anon's post.



Distributed =/= decentralized in terms of software development/engineering.

Polite sage.


File: feddcbb613b997a⋯.jpg (90.72 KB, 720x830, 72:83, 1486380435594.jpg)


I know this one https://github.com/smugdev/smugboard but there's more that I don't remember.

I honestly like the idea of machine proof of work over human pow, especially if the owner can profit from it.

This kills 2 birds with 1 stone. People have to be at least somewhat committed to their input so they don't spam total garbage, and the owner gets profit without having to resort to other methods.

Most captchas benefit nobody and are simply a blockade, the exception being Google.

And worthless cryptographic challenges (like hashcash) fell out of style in favor of ones that reward tokens (that hold some form of value).

The only real issue is fairness across devices. You can't exactly have weak hashes for some people since obviously people are going to exploit that on powerful systems. And having only strong hashes unfairly locks out low power devices.

It would be interesting to see unique token exchanges become decentralized and simplified. Maybe the Ethereum people are already doing something like that.

When it comes to pay-to-post I hate that idea, but I have no problem with token exchanges since they're more specific and tied more to services.

It would be cool to have some domain specific token like an 8ch post token, and be able to automatically barter with the owner's system with other tokens.

So I don't have to compute something like heavy hashes with monero at the time of posting, but I could, say, preemptively host data on a low power machine to generate filecoin and exchange that for post tokens.

Ideally these things would all hold no monetary/fiat value and be pure barter tokens, but that's unavoidable and it ruins the whole system, since someone could just buy post tokens. But it would be cool if we could force service based tokens rather than a generic cash economy.

It might be possible, but I can't imagine how.

I want to see a digital world where only the people who generate hentai@home credit can post on my website. More seriously imagine using something like folding@home points.

Something that implies you did useful work, not something money based, since people can just steal cash; it implies nothing and can't be sourced or proven. Having a lot of it doesn't even prove you deserve free passage everywhere either.

Having things tied to a crypto identity would be good but then people would be upset over the lack of anonymity and the ability to have multiple accounts for the same service.


Sage shouldn't be considered inherently rude.



>I want to see a digital world where only the people who generate hentai@home credit can post on my website

How would you implement this using IPFS and Ethereum/Tron/Neo/Cardano/Eos/Stellar/Iota/etc... (smart contracts)? Asking for >>>/hydrus/ (to create a social network seeding/tagging layer on top of it)



For reference https://bitcoinexchangeguide.com/neo-vs-eos-vs-trx-tron-vs-xlm-stellar-vs-ada-cardano-altcoin-battle/

Some ideas:

1. Those who seed more have more rights in downloading materials they like, similar to private trackers and Sia/Storj/Filecoin/Maidsafe/etc. storage tokens

2. Those who tag more may or may not be genuine (e.g. tag trolls, tag bots), with or without incentives, thus we need an overseer to make sure the tags are good

3. Tying #1 and #2 together may not be a good idea, as seeders could also be tag abusers, or worse, use tag bots to rig the system and gain dominance

4. To fix #2, those who engage in the community with tagging should have the most power, but we must create a consensus system for a Steem-esque "proof of collaboration"



Some ideas regarding voting https://en.wikipedia.org/wiki/Penrose_square_root_law https://en.wikipedia.org/wiki/Jagiellonian_compromise

The power of a person's vote should be the square-root of the person's contributions.
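That square-root rule is a one-liner; the sketch below shows the diminishing-returns effect it buys you. "Contributions" is whatever unit the community settles on, which is exactly the hard part the problems below get into.

```python
import math

def vote_weight(contributions: int) -> float:
    """Penrose-style weighting: influence grows as the square root of
    contributions, so whales get diminishing returns."""
    return math.sqrt(max(contributions, 0))

for n in (1, 100, 10000):
    print(n, vote_weight(n))
# 100x the contributions buys only 10x the voting power
```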

Some problems regarding quantification of file contributions:

1. If we assume every file is created equal, does that mean a high-resolution image is the same as an emote/flag?

2. If we assume that file is ranked by its file size, people would cheat by converting low-res jpeg into png, mp3 into flac, and epub to pdf.

3. Even if we can quantify the files, how does that stop things like Waifu2x 2x upscale and SVP 60fps from slipping through the cracks?

4. If we actually use file popularity to rank file weight, then it becomes a shilling and spamming operation, how do you tag and manage them?



Anyone have a plain English guide on setting up "geocities pages that updated over IPNS"? Because, for the personal "I'm soooo random!" website, IPNS looks tempting.



There are no captcha puzzles that can't be solved by AI. Human PoW has no future.



What about faptcha?



You don't need a guide.

ipfs add -Q -r my_website_root | ipfs name publish

When it says "published to Qm..." this is your IPNS reference. "/ipns/Qm.../"

The apis you want to read about are

ipfs key --help
ipfs key gen --help
ipfs key list --help
ipfs name --help
ipfs name publish --help



I think the answer is simply "yes, smart contracts could help" at this level. The things mentioned here



should be ironed out ahead of time by the hydrus community (and/or developer).

In principle, smart contracts and blockchains allow you to emphatically claim history and truth at a global/network scale. But that's not really interesting to talk about until you know what you need to prove and how the data is generated. What "value" is, has to be defined before we can speculate how to implement it.

Then you can discuss potential exploits and how to defeat them.

I think you should start a thread there if one doesn't already exist, either way link to it here.

When it comes to auto image analysis we have lots of options today. This thread has some and there's A LOT more >>>/hydrus/1553

When the results are unsure, we can always fallback to human intervention. How those people are elected, what power they have, etc. again has to be defined in human language first.

For reputation, I guess I would look into older p2p networks that tried to have reputation systems, like ed2k/Kademlia.

And I'm sure things like sybil mitigation are good to know.



If you can figure out how DHT networks work without spam killing them, you might have a basis for distributing votes, once a vote's weight and impact are defined.

Somewhat related to A3 - (bots to rig the system and gain dominance)

I wonder when the IPFS people are going to release filecoin research. If they're going to have a credit system based around hosting data, they likely have to solve a lot of these same problems. But IPFS isn't even out of alpha yet and it's been years so it might not make sense to wait for that. Then again if it's taking them years maybe it's not worth trying to solve on your own if they're going to do it for you.

At the very least they'd have to have some way to prove you're hosting and that the data you're hosting is cryptographically legit.

Helping A1 - (Those who seeds more have more rights in downloading)

Without relying on peers or trackers to be trusted.

We need to have a thread where we can dump knowledge. I'm not really strong on consensus or trust tech since I feel like it's been constantly evolving for years. But I'm wondering how other people handle this, specifically blockchains, p2p networks, and even centralized systems. Like if there are boorus with points/credit systems, how they handle it, etc. How do federated platforms deal with it too, things like NNTP based services, or even modern things like Mastodon. Things like OpenBazaar might be worth investigating too.

There's also the idea of using libp2p directly to implement your own system that is decentralized in most parts but (semi-)centralized in authority.

Like distributing files via IPFS while forcing vote data to be sent to specific peerID(s) who processes it and then updates the single source of truth.

This way you don't really have to maintain your own blockchain if you don't feel like you need all the things that offers you. Like I doubt most people actually need an entire history of events versus just operating on the latest copies of something in a traditional database kind of fashion.


>Peers a,b,c tagged files X with

This is everything the auth server needs to know and distribute, probably best on demand


>tag X was tagged y by peer a on $date -> tag count was incremented by peer b -> 3million entries...

This is not central and everyone has to have a copy of it(or its head), and they traverse it themselves to find the truth about something.

Depends on what you want and need.

If I remember correctly the former is more or less how the hydrus tag repos work already
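The second design is just "everyone replays the log themselves." As a hedged sketch, here's what folding an append-only tag log into current state could look like in Python; the event shapes and peer names are invented for illustration, not hydrus's actual repo format.

```python
events = [
    {"op": "tag",   "file": "QmX", "tag": "touhou", "peer": "a"},
    {"op": "tag",   "file": "QmX", "tag": "touhou", "peer": "b"},
    {"op": "untag", "file": "QmX", "tag": "touhou", "peer": "a"},
    {"op": "tag",   "file": "QmX", "tag": "1girl",  "peer": "c"},
]

def fold(log):
    """Replay the log into current state: (file, tag) -> endorsing peers.
    Anyone holding the log head can derive the same truth themselves."""
    state = {}
    for e in log:
        key = (e["file"], e["tag"])
        peers = state.setdefault(key, set())
        if e["op"] == "tag":
            peers.add(e["peer"])
        else:
            peers.discard(e["peer"])
    return state

s = fold(events)
print(len(s[("QmX", "touhou")]))  # 1: peer b's endorsement remains
print(len(s[("QmX", "1girl")]))   # 1
```

The first (central) design would serve only the folded `state` on demand and keep the log private to the auth server.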


Can't you just do automated image recognition?




File: 7edd401801889b9⋯.jpg (35.63 KB, 800x600, 4:3, 81d90389ee5f16185423b79206….jpg)


>yet another "guys we should have an imageboard ON THE BLOCKCHAIN" shitter

Fuck off.



>list at least 3 options

>bias against blockchain specifically in favor of simpler approach

>this is somehow DUDE BLOCKCHAIN LMAO

Can you just not.


File: 6591cf774236320⋯.jpg (126.59 KB, 640x736, 20:23, blockchain.jpg)



All of said options are stuffed with convoluted bullshit just so you can set up a crappier and less convenient version of something people already have. For imageboards we're better off with a hybrid solution where images and videos are served over IPFS to save server bandwidth.



Hydrus isn't an imageboard, it's a booru.

>a hybrid solution where images and videos are served over IPFS to save server bandwidth.


>Like distributing files via IPFS while forcing vote data to be sent to specific peerID(s) who processes it and then updates the single source of truth.




We need to consider WHEN simpler approaches break and WHEN blockchain breaks. To put it simply, simpler approaches can't scale with user count.

Whether it's a private tracker on libp2p and IPFS, storage coins like Sia/Storj, or NNTP-esque solutions, we need something that scales once we expand out.


File: dc23ffb4f895f25⋯.png (60.38 KB, 470x684, 235:342, wojina and pepette.png)

Could IPFS be used for web page archiving? Like archive.org or archive.fo type of thing?

How could I do something like this? Would saving a webpage with relative links and then uploading it work?



But you can use Bitcoin SV (Satoj's Vishnu) to store data files, as big as 128 MB, forever and ever in the blockchain, for a minimal fee. Miners will be forced to keep your file around forever.




wget the site with relatives and just ipfs add -r the files.



It worked. You need to add -w also.


File: 8099881edc01789⋯.png (33.94 KB, 664x232, 83:29, Iota-tangle-1.png)

So, it should be a distributed imageboard with proof of work tangle.

>each post must validate two random previous ones

>time/sorting is estimated by the sum of PoW accumulated by all validations (weight)

It should be more resilient to censorship than a blockchain because history cannot be rewritten even with more mining power. You can post offline or in a local network and merge the subgraph when back online. The client settings define a target difficulty to filter and mine posts. In a way this chooses between quality and quantity without having to split the community.

>How does it do against spam?

People will raise the difficulty until they are satisfied with the amount of spam. What should happen is an endless race of computing power until an equilibrium is found where either spamming becomes impractical or the network is too weak, but then it is still possible to dynamically reduce the difficulty when the spamming stops.

>How could this be abused?

Since time is calculated from the amount of validations, it is technically possible to mine against selected posts. Nevertheless it is easier to censor posts with little to no validations, so hopefully the majority will keep it random and unbiased.

I'm going to try doing it in Rust.
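The mine/filter core of that scheme is small. Here's a hashcash-style sketch in Python (even though the plan above is Rust): mine a nonce until the post hash has `difficulty` leading zero hex digits, and let each client check posts against its own threshold. The post format, nonce encoding, and difficulty units are all invented for the demo.

```python
import hashlib
from itertools import count

def mine(body: bytes, difficulty: int) -> int:
    """Find a nonce whose sha256(body || nonce) hash starts with
    `difficulty` zero hex digits. Each extra digit is ~16x the work."""
    target = "0" * difficulty
    for nonce in count():
        h = hashlib.sha256(body + nonce.to_bytes(8, "big")).hexdigest()
        if h.startswith(target):
            return nonce

def meets(body: bytes, nonce: int, difficulty: int) -> bool:
    """Client-side filter: does this post clear MY chosen threshold?"""
    h = hashlib.sha256(body + nonce.to_bytes(8, "big")).hexdigest()
    return h.startswith("0" * difficulty)

post = b"parent1|parent2|hello /tech/"
nonce = mine(post, 3)            # cheap demo difficulty
print(meets(post, nonce, 3))     # True
print(meets(post, nonce, 10))    # almost certainly False
```

Verification is one hash, so raising your client's filter costs you nothing; only posters pay, which is the whole anti-spam bet.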


Since it's still in heavy development, I'm worried about getting hit by some zero day if I install IPFS directly on my computer, so I was thinking about ways to isolate it from the rest of my system. Using a VM should probably be enough, but obviously I don't have the resources to launch a VM for every other program, so my next best option is probably a sandbox.

So how did you set up IPFS? Should I just install it directly on my system and live on the edge?



Use firejail or some unix user fuckery to sandbox it.


File: 05ec81f8c02be18⋯.png (7.55 KB, 640x300, 32:15, MatrixDAG.png)


>especially if the owner can profit from it

How are you defining profit?

>And worthless cryptographic challenges (like hashcash) fell out of style in favor of ones that reward tokens (that hold some form of value)

Hashcash still serves as a good method to reduce spam. I agree though that good behavior should be rewarded in a provable way.

>You can't exactly have weak hashes for some people

>And having only strong hashes unfairly locks out low power devices

I too was concerned about this. The way I solved it is by giving users the option of either paying a fee per transaction or attaching a PoW to their transaction. Miners are incentivized to prefer transactions with fees over transactions with PoW attached. Miners can choose the number of PoW transactions they will accept into a block.

>When it comes to pay-to-post I hate that idea

>It would be cool to have some domain specific token like an 8ch post token, and be able to automatically barter with the owner's system with other tokens

You just contradicted yourself. How would the 8ch tokens be used and produced?

>I want to see a digital world where only the people who generate hentai@home credit can post on my website

That is entirely possible to do right now. hentai@home could transition to a public privately permissioned blockchain such as Multichain where the current rules still apply. Users would be credited in "coins" that may or may not be freely tradable. Even if they're not freely tradable, the user who owns the coins could sign a message proving they own their amount of coins. Then you as the website admin could check the hentai@home public blockchain to verify they really have them.

>Something that implies you did useful work and not just money based since people can just steal cash it implies nothing and can't be sourced or proven

As I just explained, a public privately permissioned blockchain solved that problem. Account rules and coin distribution are entirely up to the owners of the blockchain. It's basically just like a SQL database where the admins have complete control but anyone can download it and look at the changes in real time.

>Having things tied to a crypto identity would be good but then people would be upset over the lack of anonymity and the ability to have multiple accounts for the same service

In UTXO based blockchains generating a new deposit address for each transaction is encouraged. The problem is when a service requires users to submit more than one of their transactions. Then that service is able to link multiple addresses to a single user.


DAGs can never provide an absolute global view of the network. Eventually nodes will become partitioned with some discussions only happening on a particular node. Even comments on a universally accepted thread can become partitioned among nodes due to partial network failure. That said, I agree that for an imageboard a DAG would probably be the best approach.

Using the IOTA Tangle as a base is not a good idea. It was designed for monetary consensus, not for message consensus. The best approach I've come up with is simply extending the Matrix DAG messages to include PoW and IPFS image hash fields. With a few more quality of life adjustments, I really think Matrix is the solution we've been looking for. Instead of like in Tangle where new transactions refer to only two previous "tip" transactions, in Matrix new messages refer to all previous "tip" messages. This allows the DAG to always approach a linear graph. Under ideal circumstances this will make a linear graph where messages are ordered linearly. Individual nodes may determine how to present multiple branches to users.
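The all-tips rule is easy to demonstrate. Here's a minimal Python sketch of a DAG where each new message references every current tip, so forks keep collapsing back to a single chain; this is only the reference structure, not Matrix's real event format.

```python
class Dag:
    def __init__(self):
        self.parents = {}    # message id -> tuple of parent ids
        self.tips = set()    # messages nothing references yet

    def append(self, msg_id, parents=None):
        """Add a message; by default it references ALL current tips."""
        refs = tuple(sorted(parents if parents is not None else self.tips))
        self.parents[msg_id] = refs
        self.tips -= set(refs)   # referenced messages stop being tips
        self.tips.add(msg_id)
        return refs

d = Dag()
d.append("m1")                      # genesis, no parents
d.append("m2", parents={"m1"})      # two clients post concurrently...
d.append("m3", parents={"m1"})      # ...so the graph briefly forks
print(sorted(d.tips))               # ['m2', 'm3']
d.append("m4")                      # next message references both tips
print(d.parents["m4"])              # ('m2', 'm3')
print(sorted(d.tips))               # ['m4']: back to a single tip
```

Under concurrency the graph fans out, and the very next well-behaved message stitches it back together, which is the "always approaches linear" property.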

>It should be more resilient to censorship than a blockchain because history can not be rewritten even with more mining power

The IOTA Tangle only requires attackers to have 34% of the global hash rate to take over and remove transactions before they can be confirmed by nodes. Besides, the PoW wouldn't need to be an arms race. All messages above the global minimum PoW would be downloaded, and individual nodes could choose the fixed minimum PoW level they require to be accepted into their graph. A general consensus of acceptable PoW levels would be an ongoing discussion for each node.

I'm personally working on a design for an improved version of DPoS that should make things much more palatable (since competitive PoW on all blockchains will eventually be phased out) though as of now I still think a Matrix inspired DAG is the best bet for a long history preserving distributed imageboard.



>How are you defining profit?

Gain of any kind that prolongs the service. Be it monetary or some other kind of token for distributed computation, storage, etc.

"Profit" may not be the most appropriate word here though.

>The way I solved it is...

Interesting approach.

>You just contradicted yourself

You left "but" out of the quote.

Despite the two being similar, they are not the same: one is inherently generic (cash) while the other isn't intended to be (a post token). Likewise with the other tokens. Specific vs. generic.

As already mentioned, the issue that would need to be solved is some way to prevent exchange for monetary value. Potentially you could maintain a blacklist of tokens going through known fiat exchanges, but that's just cat and mouse and doesn't prevent individual private trade. As far as I can tell there's no way to handle this.

>How would the 8ch tokens be used and produced?

This was just an example, tokens could be used instead of a captcha challenge, generated through whatever hashing scheme you wish.

>a public privately permissioned blockchain solved that problem

If you can have access to an account, you can sell the access to that account. This happens a lot in video games for non-transferable items.

It only prevents the tokens from leaving the system, not people buying into the system with (preexisting) credit.

>Eventually nodes will become partitioned with some discussions only happening on a particular node

Wouldn't CRDTs resolve this?


I believe this is how people are building chat and document editing programs on top of IPFS now.

They will still diverge in the event of a large outage, but can become joined again.

Clients could have a bias on this too: when a conflict is resolved, the data itself should still end up in the correct order, but you could render the late arrival as a post at the bottom of the thread with out-of-order timestamps.

I think meguca does this. Assume 2 posters, A and B.

A reserves a post slot, B reserves a post slot, B submits their post, A submits theirs. I think it renders AB until you refresh, then it would be BA. Or it might render AB with out-of-order timestamps. I forget. Not that the rendering of the data is worth considering before the data itself. But it seems like divergence and then (re)unification shouldn't cause a technical or visual disaster. I'm not very familiar with all that, though, so something could be problematic with what I'm saying.

>Matrix DAG

Neat. I didn't know about this. Seems like a good base if it's designed for chat in the first place.


File: f50fac53aaa2fc4⋯.png (9.88 KB, 720x300, 12:5, MatrixDAGmerge.png)


>"Profit" may not be the most appropriate word here though

I think you're thinking of a store of value. I agree we should be storing (subjective) value in some systems. Imageboards are not one of those systems. Instead we should (and have been) penalizing disvaluable behavior.

>given that 1 is inherently generic (cash) while the other isn't intended to be (post token)

>the issue that would need to be solved is some way to prevent exchange for monetary value

>As far as I can tell there's no way to handle this

I understand the intended difference between tokens but unfortunately I can't think of a way to prevent it either. Ultimately all "cash" transactions are barters too. Unless there's an always correct whitelist I don't see how you can selectively allow some barters but not others.

>If you can have access to an account, you can sell the access to that account

>It only prevents the tokens from leaving the system, not people buying into the system with (preexisting) credit.

I overlooked that. I actually assumed that because the seller would retain a copy of the only private key valid for those coins, people wouldn't purchase them. You're right, people are very stupid and/or risk-tolerant. You might be able to prove value for some service, but only in real time. Historical records of the service wouldn't be reliable.

>Wouldn't CRDTs resolve this?

CRDTs are really just conflict resolution rules for combining alternate representations of a solution. Each message is its own solution, so there's nothing inherently conflicting about them. I was talking more along the lines of individual nodes' selective PoW requirements. Message propagation shouldn't be much of an issue given longer reply times, since users aren't really chatting. Even then, your personal local node could go offline, you could reply to a few posts on different threads, go back online and resync with the network a day later, and everything would be fine. Thanks to the way the DAG is set up, your posts would be seen as a branch from older messages and would be accepted since they stemmed from an older valid message. The next new message would refer back to your branch, consolidating it into the main graph. This doesn't even need to happen, but it's nice because then we don't have to worry about a million branches when traversing.

>I think meguca does this

I'm pretty sure meguca orders and displays posts after the captcha is completed, in order of when users first begin typing, regardless of when they hit the submit button.

>Seems like a good base if it's designed for chat in the first place

It's actually designed for decentralized text chatting, VoIP, and video calling. Of course we'd only want to use text messaging and change a few other things like removing a mandatory domain name for each node. Scroll down to the "How does it work?" section at the bottom.




>I want to see a digital world where only the people who generate hentai@home credit can post on my website.

>decide to duckduckgo hentai@home

>first result is FBI's recruit website






>not knowing what hentai@home is


I wasted my time reading the docs. Looks like he's just writing his own implementation of Matrix, but without required domain names, and with decentralized image storage and real documentation. He said he wants Grid to be federated with Matrix. The PoW he's describing is literal proof of work. Only well-thought-out and well-backed ideas will be considered when coding things.

I'm talking about creating something brand new. Take Matrix's DAG idea (every new message references all previous tip messages), add a configurable hashcash PoW to each message, and use a gossip protocol like in Bitcoin to create decentralized full nodes. Full nodes will determine consensus rules, just like in Bitcoin. The advantage of not having to enforce a global universal state is that each node could determine the appropriate level of PoW required to be added to their DAG. They will still propagate all messages that conform to the global consensus rules. Thus nodes that choose to enforce higher PoWs than required will be subsets of the larger global graph. Light clients connect to full nodes to view and post messages. Higher level things like client level message blacklists created by full node operators could optionally be used for further reducing spam or optionally enforcing administrative policies at a given PoW level.


File: 459c6cc88ef905c⋯.png (38.36 KB, 606x273, 202:91, Figure 6.png)


>DAGs can never provide an absolute global view of the network. Eventually nodes will become partitioned with some discussions only happening on a particular node.

IOTA chooses transactions to validate by computing random walks towards the tips, giving higher probability to transactions with higher cumulative weight, to achieve some sort of "soft" consensus.

>IOTA Tangle only require attackers to have 34% of the global hash rate to take over and remove transactions

Unlike IOTA, if we choose not to prune orphaned posts we can make censorship almost impossible, at the cost of giving attackers the possibility to forge subtangles and manipulate the weight of older entries.

>in Matrix new messages refer to all previous "tip" messages.

I didn't know about Matrix. It's true it seems more appropriate. Although I wonder if it can handle spam.

>A general consensus of acceptable PoW levels would be an ongoing discussion for each node.

I agree. We might even set a target posting rate (preferably low, e.g. one post every 10 minutes) which would be adjusted automatically according to the global hashrate. Not only would this help mitigate spam without having to actively review parameters, it would also discourage mindless posting no matter how popular the imageboard gets. This would need to be achieved with synchronized network time. At least, it works if we forget the fact that shills definitely have more computing power than legitimate users.
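
One possible shape for that automatic adjustment, as a sketch only: a Bitcoin-style retarget where the minimum difficulty scales with the observed posting rate, clamped to limit per-window swings. All numbers here are invented for illustration.

```go
package main

import "fmt"

// retarget adjusts the minimum PoW difficulty so the observed posting
// rate drifts toward the target. Difficulty scales linearly here:
// twice the posting rate asks for twice the work.
func retarget(difficulty, observedPosts, windowSecs, targetSecsPerPost float64) float64 {
	observedRate := observedPosts / windowSecs // posts per second
	targetRate := 1.0 / targetSecsPerPost      // e.g. 1 post per 600 s
	next := difficulty * (observedRate / targetRate)
	// Clamp the per-window swing (as Bitcoin does) so a burst of spam
	// can't instantly price honest posters out.
	if next > difficulty*4 {
		next = difficulty * 4
	}
	if next < difficulty/4 {
		next = difficulty / 4
	}
	return next
}

func main() {
	// 1200 posts seen in a 3600 s window against a 1-post-per-600 s
	// target: the rate is 200x the target, so difficulty quadruples
	// (hitting the clamp).
	fmt.Println(retarget(1000, 1200, 3600, 600)) // 4000
	// An on-target rate (6 posts per hour) leaves difficulty alone.
	fmt.Println(retarget(1000, 6, 3600, 600)) // 1000
}
```

The clamp is the interesting part given the shill-hashpower worry: attackers can still push difficulty up, but only by a bounded factor per window, which buys time to react.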

>I'm personally working on a design for an improved version of DPoS

This sounds interesting. Do you mind sharing more details? The reason I won't use PoS is that I can't see a way to make it anonymous but I would gladly give up on PoW.



>to achieve some sort of "soft" consensus

Each node having its own view of the network is too much of a trade-off when dealing with serious monetary systems. We'll see what happens when they turn off their central coordinator.

>giving attackers the possibility to forge subtangles and manipulate the weight of older entries

In a message-oriented system long sub-branches aren't an issue because no message takes priority over others and relative ordering doesn't matter. A message branch is "confirmed" (added to the graph) if and when the trailing message refers to a previously known message. There is no waiting; it either is or is not valid. See my image and explanation >>1028930 for offline merging and >>1029088 for my whole idea.

>We might even set a target number of posts / seconds (preferably low, e.g. one every 10 minutes)

I also think we should have a hard cap on incoming messages per second. Not quite that low, though. 8chan experiences less than 1 post per second and 4chan experiences 9-10 posts per second on average. I think matching 8chan levels is a good starting point.

>which would be automatically be adjusted according to the global hashrate

I'm not advocating for a PoW arms race like how cryptocurrencies use it. See my previous posts. I would also argue against dynamically adjusting the acceptable PoW level on the node level because as you said, attackers can always raise the difficulty high enough that no honest users can reasonably post anymore.

>Do you mind sharing more details?

Basically anonymous coins using DPoS but publicly revealing your votes doesn't compromise your or anyone else's privacy.


I WOULD upload a bunch of big awesome directories, but this error keeps appearing. I asked the help chat, and the only person there was confused. Seems to be an error in Go itself. (They provided a link)




Independent uploads (Very inconvenient)

I physically stole these things and made copies before destroying the source. All wrapped with -w, so you should be able to see the directory names (right?)




God damn, can't even upload small packages rn. wtf did they do?

Gonna keep trying here..




There are several directories I STILL cannot upload due to this error. This annoys me greatly. Enjoy what I did upload.



>Note: This is actually just a bug in error reporting. That is, go reports one error when it should be reporting another. In this case, you likely don't have read permissions on one of the files you're trying to add to ipfs.

Does it error out on the same file every time?

Fixing the issue in Go won't help, it will just make the error message what it's supposed to be. You'd still have to fix whatever is wrong with the local file. Since you can add some files but not others it's probably permissions.

You should try running

cd /where/it/is
find . -type d -exec chmod 755 {} ";"
find . -type f -exec chmod 644 {} ";"

on the directory you want to add and see if it shows any errors. If not, try adding it to IPFS again.




I forgot

chmod -R +rX directory
also works if your chmod supports it. Not sure what OS you're on.



Well slap my ass and call me Judy, it worked. When I physically stole these CDs I had full intention of pirating them for free on IPFS and the like, and it's been a long while since I could attempt to.



Also, IPFS Companion for Firefox no longer exists, so I downloaded the beta channel and it doesn't work. Wat do?



Some misc now, also taken from physical media




Bunch of movies I have saved to my machine
















This is it for now. Please mirror, enjoy, etc.



>IPFS Companion for Firefox no longer exists

It looks like Mozilla only removed it temporarily because the builds failed.


I don't know how their process works so maybe it's waiting on review from Mozilla, or maybe it has to be built by them.

I can't believe they deleted the whole page though.

I have 2.7 stable if you want to try it


But the Beta branch works fine for me. What's it doing wrong for you?



The beta branch simply does nothing. I click it to see if there are options I can configure and the pop-up is a blank white window. No links are rewritten to be clickable. I'll download your 2.7 version.


After uploading all those gigabytes to IPFS, my net as a whole is not working very well. Any tips to rectify this?



Restart everything: your computer, modem, and anything else in between. Works 99% of the time, but if it doesn't, just reinstall your operating system.



>if it doesn't, just reinstall your operating system

Lol, ok. I'm not that stupid



IPFS is P2P, when you add something to IPFS you're not uploading it, you're just making it available for download from that machine (while the daemon is running).

Your daemon also has to tell the DHT about it so people know you're providing it. IIRC someone in a previous thread (or this one) said that this is really inefficient right now, but is being worked on so maybe it will be different soon.

This only has to happen if the network doesn't already know about something (if a DHT record isn't expired). At least I think that's how it works for KAD-DHT, I don't know how theirs works specifically.

If it's even remotely sane this should only happen when you add something new or if you've been offline for a long time and come back online. You need to tell some peers that you have the content and where you can be reached which takes some bandwidth and opens a lot of P2P connections temporarily. Some ISPs probably throttle you when this happens.



I used "Upload" for want of a better word. My ISP is Cox Communications and I don't know if I am being throttled or not. However, I am only running this for 30 minutes now. It probably just needs more time. After that, my connection should normalize, right? According to the WebUI I am currently peered with 69 peers, but when I checked earlier it was at over 280. As I was typing that it jumped to 183. IDK then...



>After that, my connection should normalize, right?

It should, after the DHT records are propagated.

For multiple GBs at once that's a lot of records to provide, since each hash is made up of smaller hashes that also need to be provided.

For example check this out >>1029842

>ipfs ls Qmb1dT9M2jEMLkB9HgidcQpwUkQ74wHVf4J3meFejpiZB9
QmVXMmiViNa3LtwJDHhkNrqYY16XHgYLusEsAAPn7F4H5c 6831852 ipfs-firefox-addon@lidel.org.xpi

Shows the file but even the file itself is made up of smaller blocks (in BT these are called pieces/chunks)

You can see the individual blocks with the same command

ipfs ls QmVXMmiViNa3LtwJDHhkNrqYY16XHgYLusEsAAPn7F4H5c

So that one 6 MB file plus its directory turns into 28 hashes that I need to tell the network I provide.

If the number of peers I need is 5 (I don't know the actual number) that's 140 DHT STORE requests that have to succeed. If we count just the hash as 46 bytes, that's 6,440 bytes (about 6.3 KiB) of total DHT traffic to provide the folder to the network, and real requests have overhead and other shit.
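
The back-of-envelope math written out (all three inputs are this post's assumptions, not IPFS's real parameters):

```go
package main

import "fmt"

func main() {
	const (
		blocks      = 28 // hashes in the 6 MB file + directory example above
		peersPerKey = 5  // assumed DHT replication factor (not the real number)
		hashLen     = 46 // bytes in a base58 CIDv0 string like "Qm..."
	)
	stores := blocks * peersPerKey
	total := stores * hashLen
	// Prints: 140 STORE requests, 6440 bytes (6.29 KiB) of hashes alone
	fmt.Printf("%d STORE requests, %d bytes (%.2f KiB) of hashes alone\n",
		stores, total, float64(total)/1024)
}
```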

Take into account you have a lot more records than that coming from GBs of files and they're all probably fighting for the same bandwidth.

If a STORE times out, you try another peer until it succeeds.

So forwarding the port might help you not waste bandwidth since you'll just connect to whoever is fastest and know the connection won't fail.

My knowledge of DHT comes from using ed2k, BitTorrent, and Wikipedia though, not IPFS specifically. So it might work differently, or I could be flat-out wrong.

They might only provide the root hash or something more optimal.

> I am only running this for 30 minutes now.

It depends on your upload bandwidth and other DHT nodes/peers.

>over 280

Right now I have around 900 but I leave my daemon up all the time (except when playing games). I also have my ports open, a fast connection, and am in the US so there are likely a lot of nodes wanting to connect to me that are also physically closer.



If I could, I'd have it running 24/7 on a dedicated archive/seedbox machine but I have no money and this is running on a laptop. My options are limited atm



is it some fuckhuge collection of memes and books again? let that shit die already if it is



No, they're electronic textbooks and video games I stole, and various films/videos



Do not disparage the meme posters.


If you keep your connection up long enough for me to mirror it, I will host them until my next repo garbage collection. So far though I only managed to grab 2 blocks. So it might be working, but it's getting out there slowly. Maybe your connection is dropping in and out.

Hopefully your personal situation improves regardless.



I've kept it up for hours, what's your status?



>Each node having their own view of the network is too much of a trade off when dealing with serious monetary systems

The computational power required to assemble the main tangle has to be large enough to eclipse small subgraphs made by isolated or malicious nodes. Though I agree that unlike Bitcoin, consensus is never 100% achieved. Only the probability that a transaction is valid is known, from the cumulative weight of all confirmations.

>I also think we should have a hard cap of incoming messages per second.

I have been thinking about how to implement it on a DAG while preserving historical records, but I'm not sure. I might be missing something stupid. We could make nodes refuse to validate posts that were witnessed being created too fast or with too little PoW/PoS, but the problem is: can a spammer confirm his own posts and then merge into the main branch? We could let nodes vote against posts ([-1, 1] weight with each DAG link) but that opens an easy way to censorship (although always publicly visible). It isn't even fair for high-latency nodes. Only the hard consensus of a blockchain provides a straightforward solution. Otherwise users have to decide individually, or it has to be coordinated by moderators.

>I'm not advocating for a PoW arms race like how cryptocurrencies use it.

Neither do I. I think I'll just wait for a better solution than PoW.

>attackers can always raise the difficulty high enough that no honest users can reasonably post anymore.

The main way to defend against it is to have the right amount of centralization through opt-in signed moderation. Nodes must agree that blacklisted posts don't count toward the speed limit, so moderators are able to negate such an attack without CPU work. Some moderation could even be handled by neural networks.

>Basically anonymous coins using DPoS but publicly revealing your votes doesn't compromise your or anyone else's privacy.

Having untradable tokens to do anonymous staking would be nice, but how would they be distributed? If they are automatically rewarded over time, how do we protect against Sybil shills creating many fake identities?

We need to set rules in a way where we don't even have to compete against corporate shills or spammers. We could trust the ability of people to process information comprehensively. For example, this vague idea: a board where an identity is required, but posting is anonymous through a zero-knowledge proof of user validity (ring signature). Misbehaving would get the identity tainted, and doing so repeatedly would increase the probability of being blacklisted. This could work as an anonymous Twitter too (only the public messages from followed users get displayed, but it's impossible to tell identities apart; adding and removing follows would be random but statistically optimal after enough votes for one user).



ipfs get QmV16dyLSoBnb7JMfpy7wgHh8SLKvfSGuA9opjCeifaDY5
Saving file(s) to QmV16dyLSoBnb7JMfpy7wgHh8SLKvfSGuA9opjCeifaDY5
7.99 MiB / 1.33 GiB [>---------------------------------------------------------------------------] 0.59% 37d5h12m28s

I tried connecting to you directly (via peerid of the only provider for that hash) but it kept timing out so there must be some connection problem going on. You're only reachable rarely. This shows some kind of network problem because regardless of the DHT I should be able to connect to you.



What do then?



>I have been thinking about how to implement it on a dag and to proves historical records, but I'm not sure

Easily, the node software has a queue for incoming messages and only relays them at a fixed rate, say 2 messages per second. Honest nodes won't send or relay messages over twice per second and will block nodes that submit messages more than twice per second. This weeds out spam nodes that try to submit messages at much higher rates. It becomes a consensus rule.
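
A sketch of that rule as code (hypothetical node software, not anything go-ipfs or Matrix actually implements): a per-peer gate that relays up to the limit and blocks violators. Timestamps are passed in explicitly so the logic is deterministic.

```go
package main

import "fmt"

// RateGate enforces the consensus rule above: at most `limit` messages
// per peer per second; a peer that exceeds it gets blocked.
type RateGate struct {
	limit   int
	counts  map[string]int   // messages seen in the current window, per peer
	windows map[string]int64 // the one-second window each count belongs to
	blocked map[string]bool
}

func NewRateGate(limit int) *RateGate {
	return &RateGate{
		limit:   limit,
		counts:  map[string]int{},
		windows: map[string]int64{},
		blocked: map[string]bool{},
	}
}

// Accept reports whether a message from peer, arriving at unix second
// sec, should be relayed. Here a violator is blocked permanently; a
// real node might prefer a timed ban.
func (g *RateGate) Accept(peer string, sec int64) bool {
	if g.blocked[peer] {
		return false
	}
	if g.windows[peer] != sec { // new one-second window: reset the count
		g.windows[peer] = sec
		g.counts[peer] = 0
	}
	g.counts[peer]++
	if g.counts[peer] > g.limit {
		g.blocked[peer] = true
		return false
	}
	return true
}

func main() {
	g := NewRateGate(2)
	fmt.Println(g.Accept("spammer", 0)) // true
	fmt.Println(g.Accept("spammer", 0)) // true
	fmt.Println(g.Accept("spammer", 0)) // false: over the limit, now blocked
	fmt.Println(g.Accept("spammer", 1)) // false: still blocked next second
	fmt.Println(g.Accept("honest", 1))  // true
}
```

Because every honest node applies the same gate to its neighbors, a spam node's excess messages die one hop out instead of flooding the network.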

>We could make nodes not validating posts which were witnessed being created too fast or with too title PoW/PoS, but the problem is that can the spammer confirm his own posts and then merge into the main branch?

Messages can only be broadcast at a fixed rate. If I'm connected to 8 nodes, the overall incoming message rate will be 16 messages per second if each node is limited to 2 messages per second. Individual nodes can connect to any number they want, but a minimum of 8 would be recommended.

>The main way to defend against it is to have the right amount of centralization through opt-in signed moderation

I agree but on the node or client level, not the overall network level.

>Nodes must agree that blacklisted posts don't count toward the speed limit so moderators are able to negate such attack without cpu work

Blacklists shouldn't be network wide. They should be applied on the node or client level. Multiple nodes can contribute to shared message blacklists. One of the great qualities about PoW is that it can be very hard to produce but very cheap to verify. To verify all you have to do (simplified) is hash the message and check it against the PoW hash. It takes milliseconds to verify a message with an arbitrary PoW difficulty.

>but how would they be distributed?

>If they are automatically rewarded through time, how to protect against Sybil shills creating many fake identities?

I'm talking about a regular cryptocurrency not anything special. DPoS works with 1 coin being 1 vote. Votes are cast by locking up coins for a small period of time and signaling your vote for specific nodes to be allowed to produce blocks. Once the voting round is over (it happens periodically) the top 30 or so voted nodes get to "mine" blocks at a fixed rate and in a fixed order. Free market competition drives the limited amount of block producers ("miners") that are voted for to give away a large portion of their block reward to their voters. They're basically buying votes. This is a good thing because it distributes coins most to those who secure it, the voters. Most nodes will give away between 70% to 90%. If the miner node violates consensus rules their block will be rejected and both they and their voters won't get paid. This will lead voters to vote for another node who follows the rules because they directly profit from it. It's a pretty good self regulating system with no power wasted on heavy calculations. The only weakness I see is that attackers could take advantage of low voter turnout to get 51% of the votes to vote themselves as "miners". They would profit the most but the system itself would still continue to be secure if checkpoints are included. The worst they could do to maximize their profit would be raising transaction fees. That's the same outcome as PoW but with way less capital needed to sustain the system.
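
A toy version of the tally and reward split described above (the names are invented, the 30-producer cutoff is reduced to 1, and the 80% giveaway is one point in the 70-90% range mentioned; integer base units as real chains use):

```go
package main

import (
	"fmt"
	"sort"
)

// Vote locks Coins behind a producer candidate; 1 coin = 1 vote.
type Vote struct {
	Voter, Producer string
	Coins           int64
}

// topProducers tallies locked coins and returns the n highest-voted
// candidates, who produce blocks at a fixed rate and order this round.
func topProducers(votes []Vote, n int) []string {
	tally := map[string]int64{}
	for _, v := range votes {
		tally[v.Producer] += v.Coins
	}
	cands := make([]string, 0, len(tally))
	for c := range tally {
		cands = append(cands, c)
	}
	sort.Slice(cands, func(i, j int) bool { return tally[cands[i]] > tally[cands[j]] })
	if len(cands) > n {
		cands = cands[:n]
	}
	return cands
}

// splitReward gives shareBps (basis points) of the block reward to the
// producer's voters, proportional to their locked coins; the producer
// keeps the remainder, including any integer-division dust.
func splitReward(votes []Vote, producer string, reward, shareBps int64) map[string]int64 {
	var total int64
	for _, v := range votes {
		if v.Producer == producer {
			total += v.Coins
		}
	}
	pool := reward * shareBps / 10000
	out := map[string]int64{}
	var distributed int64
	for _, v := range votes {
		if v.Producer == producer {
			cut := pool * v.Coins / total
			out[v.Voter] += cut
			distributed += cut
		}
	}
	out[producer] += reward - distributed
	return out
}

func main() {
	votes := []Vote{
		{"alice", "nodeA", 60}, {"bob", "nodeA", 40}, {"carol", "nodeB", 30},
	}
	fmt.Println(topProducers(votes, 1)) // [nodeA]
	// nodeA gives 80% of a 1000-unit reward to its voters:
	fmt.Println(splitReward(votes, "nodeA", 1000, 8000)) // alice:480 bob:320 nodeA:200
}
```

The "vote-buying" incentive is visible in `splitReward`: raising `shareBps` is how a producer competes for votes, and a producer whose block is rejected pays out nothing at all.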

>a board where an identity is required, but posting is anonymous

I suppose that could work if you wanted a whitelisted board. Ring signatures can be used.

>posting is anonymous through a zero-knowledge proof for user validity (ring signature)

Looks like you don't understand zero-knowledge proofs. They can only prove an unknown variable is within a range of possible values. In the case of Monero, they prove all inputs to a transaction are equal to the outputs. This means the transaction's net value cancels out to zero. This proves someone can't create coins from nowhere and that some unknown valid amount is being transferred. Only the sender and receiver know how much was transferred.

>Misbehaving would get the identity tainted, then doing so repeatedly will increase the probability of being blacklisted

If you use ring signatures you can confirm all the given signatures for a message are whitelisted. That means any one of the whitelisted users whose signature was used could have posted the message. It's impossible to determine which of the signatures in the group belongs to the actual message sender. That's the whole point. If you were to profile undesired posts you would be forced to correlate message content across multiple posts that share a common signature and deduce the bad actor relatively easily. If you did that, it would defeat the whole purpose of the ring signature in the first place. You can't have your cake and eat it too.


Looks like Filecoin is more or less public, but at the moment it's only good for research and devs, I guess.




>Please note: Filecoin is in heavy development. Code is changing drastically day to day. At this stage, the repos, devnets, and other resources are for development. This release is aimed for developers, researchers, and community members who want to help make Filecoin. Miners and users who seek to use Filecoin will want to wait for a future release (likely, the testnet milestone).




My connection is significantly better now and I am at 610 peers and growing. Please continue to try, or retry if you stopped.


File: 3bcc00c1d6473f8⋯.png (360.47 KB, 450x500, 9:10, 3bcc00c1d6473f8af43638b68c….png)

Time to share some shite




File: 8a5457f53f329c5⋯.jpg (107 KB, 1280x718, 640:359, 悪い面接官.jpg)


What if we fork Filecoin and make a version that cannot be transferred between wallets, where only people with a certain amount of hentai@homecoin are allowed to make posts, and seeding everyone else's posts is the only way to gain hentaicoin (though you can just buy wallets, so maybe this isn't necessary). It isn't spent to post, and once you're over the requirement you could spam the network, but there's an opt-out moderation system to offset this. You subscribe to a moderator group and will never see posts that get filtered by your mods, so one group can automatically filter every post made by a user marked as a spammer. The spammer will have to make new accounts to continue spamming, all while hosting the site for everyone.

Also, I'm not an expert on crypto stuff but I have an alternative way to allow for anonymity in this system. There can be "post tumbler" services similar to VPNs which are accounts that post anything sent to them from approved users, those users being anyone that hasn't been filtered for spamming. If a post tumbler allows spammers to post through them then the spam mods can just filter the tumbler. Of course these are centralized and go against the idea of the decentralized service, but just like the moderator groups if people don't like one service they can just use another one. If a service becomes compromised the filter can be set up to filter any post from them after a certain date.

Lastly, I think a generic social media backend would be interesting, pretty much every social media site is just posts with linked media and chains of replies. Clients could be made to mimic anything from anonymous imageboards to facebook to twitter, and if a client sucks you can move to an alternative one without having to abandon the userbase. If people don't want to be anonymous or interact with anons they can just filter the tumblers and put all their personal info on a "profile" (which would just be a website on IPFS whose IPNS name is their public key).


File: 6d5ab807f858f26⋯.png (236.64 KB, 409x450, 409:450, 14943565321510.png)


And a little more



File: adb07fd1b206689⋯.png (305.04 KB, 1280x960, 4:3, 6ae8b5dde38a67d2b1fc5c3446….png)


Anyone find anything interesting?



That page is CUTE


Things seem to load much faster than they used to. Even when I add something and try to get it through the gateway, it shows up pretty quick.

>experimental AutoRelay and AutoNAT

Really excited to see these since people either don't want to, don't know how to, or can't open ports.

I hope it's on by default in the next release.



> I hope it's on by default in the next release.

AutoNAT is already enabled by default, not sure about AutoRelay though. They elaborate that it uses other IPFS peers (described as relays) to get around a NAT.



>Things seem to load much faster than they used to. Even when I add something and try to get it through the gateway, it shows up pretty quick.

I agree, using this hash here >>1035793, directories load much quicker. Looks like IPFS is actually becoming viable! The only problem is that files aren't loading, but that may be because peers aren't available or my router is blocking some stuff.


I ask for your help, /tech/. After adding a bunch of files to IPFS is there anything else I need to do in order for the network to be able to access them?

I can access the file by going to my own hosted node (http://localhost:32148/ipfs/QmUwsKgJxpxtEgqWrwjuYFbudDgm3JchPvy9TcK3LPuUFT) but if I want to use an external node I just can't access the files. It times out or just throws an error. I tried both with ipfs.io and ipfs.infura.io

How long does it take for ~100 GB of files to reach the network?

I'm using version 0.4.19



>~100 GB of files

I'm guessing you probably have these in a fuck-huge directory with hundreds of files, while at the same time having a slow internet connection.

Maybe try organizing these into more manageable sub-directories?

Alternatively, it might just be that your hosted node is having trouble connecting to the network?



Yup. More than one thousand files.

I typed "ipfs swarm peers" and I see a bunch of IP addresses and stuff. Is that sufficient evidence my node is properly connected to the network?

But yeah, there's close to no traffic according to the WebUI so maybe that's the problem.



How did you add your files? It's possible your node didn't announce that you're a provider for the hashes in question, in which case there aren't any nodes that know you have those files in the first place.

Try to run:

ipfs dht provide <cid>
and then try again?



This may be a dumb question, but are your ports open? Open port 4001, TCP only. You could alternatively use UPnP, but that isn't as safe.



It is a dumb question but since I'm retarded it was the appropriate one. I guess that's what I deserve for using DHCP. Now the port is open and traffic is flowing.


It seems this is also a needed step. Is there a way to batch provide all the CIDs or should I just create a script?

But yeah, now a test video I added is working.




They should be provided by default when adding, and I think they're provided at an interval regardless of how you added them (while offline or not).

You don't have to manually provide them, but they probably weren't available right after forwarding the port because the automatic provide hadn't happened yet.

If this is wrong, someone please correct me.


File: 2393bada1fb61f1⋯.jpg (217.8 KB, 1436x674, 718:337, Youtube 2011 vs now.jpg)



Not providing CIDs for possessed blocks defeats the point of IPFS, so that should not be the problem.

As for ports - how many is 'a bunch of IP addresses and stuff'? With forwarded ports and around 800-900 peers, I still get random delays between `ipfs add` and anything becoming visible at the ipfs.io gateway.


Thanks :3

And more junk: QmbxsmdSN9wt7SSFpRpdYnjk5A4PNc2H2vLhCsHaadHe96



I'm >>1038495

It doesn't matter because now I have a different problem altogether. Ever since opening port 4001 (i.e.: getting real traffic) my shit-tier Technicolor cablemodem router DPC3848VE craps itself after about 10 minutes of running ipfs daemon.

I'm thinking about getting a new router and using that piece of crap in bridged mode but at the same time I really don't feel like buying a new router just for IPFS, especially since I live in a third world country and everything is expensive as fuck.

Disabling mDNS helped a little, but not enough.



Start your daemon with the flag "--routing=dhtclient". That should help stop your router from shitting itself.



Didn't help, crapped itself after less than 5 minutes. Thanks for your suggestion.



dht is just really heavy. no consumer nat router can handle it well. they are designed for normie web browsing, not for thousands of active connections at the same time, and that's what dht does.



Why is it done at the router level? How much router RAM is needed in order to use it without problems?


File: 7ae212d7a2a4f4c⋯.jpg (50.88 KB, 303x311, 303:311, 0_int.jpg)


Have you ever wondered why if you make an empty folder and check the size of it you get 0 bytes even if it has a long name? The name is not a part of the file, if you manually open up the memory of a file you won't find its filename in the header because it's not a part of a file. This is also why when you wipe a hard drive and have to perform data recovery you lose all the filenames and metadata, that stuff is all stored in one place in the file system.



>Numbering threads is a tradition

Not when it's an eterna-bread newfaggot.



23 days later I am here to tell you that you can get a netgear R7800 from china for $70.

I have an Asus AC68U and my node hovers around 600-800 peers. I also run H@H. No issues so far. The R7800 is an even better router.


File: aec69490ad77179⋯.png (3.64 KB, 486x20, 243:10, ipfs.png)





Well, I didn't want to turn this into my personal blog but I ended up buying a router and my cablemodem still craps itself, even in bridged mode. Maybe it's the sheer amount of connections or something. I don't even know how to diagnose it.



What is "bridged" mode? How many peers do you get?

I'm running on a home connection that also relies on an ISP modem. I have it in bridge mode and it works fine. Router says 1500 connections.



"Bridged mode" is the cablemodem's term for not acting like a router.

More than 900 peers last time I tried.


File: 35a25d8594d333e⋯.jpg (339.51 KB, 1122x698, 561:349, Dorohedoro.jpg)


>memory of a file

>that stuff is all stored in one place in the file system

Are you trying to remember programming lessons you took in 1993? Because that sounds like a mangled description of FCB & FAT.



Is that actual memory usage or just allocated/reserved memory usage?



Pretty sure it's actual usage.

It's up to 528MB now



My ipfs running in the background is using 6MB, you ginormous faggot, and I have many connections. Did you compile it from source? Did you set how many peers are allowed to a saner value than 2000 in the config, i.e. "HighWater" and "LowWater" being the max and min connection counts? If not, then you are a faggot who needs to go back.


I've never looked into this IPFS thing, convince me to try it.



IPFS is just bittorrent but without the bloat and duplication of files via multiple torrents. Do you have a file you want p2p'd and don't want jewgle to take it down (i.e. piracy)? Are you a reporter that's not a fag and will get censored by the cabal? Are you a giant media organization trying to save costs by making all your clients act as CDNs, thereby saving you electricity, bandwidth, and time, and keeping your files up even after your organization ends or changes, all without you having to worry about it after publishing? Then IPFS is for you.



Who gives a shit I just download it and run, I'm not autistic enough for your bullshit.

And it only has ~900 peers at most.

542MB now



Ok then, you can still adjust the config file under ~/.ipfs/config to change the HighWater and LowWater values. You could also update to the latest version instead of whatever you are using. Where are you downloading ipfs from?
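For reference, the relevant section of ~/.ipfs/config looks roughly like this (the watermark values here are just an example for a weak home router, not a recommendation):

```json
"Swarm": {
  "ConnMgr": {
    "Type": "basic",
    "LowWater": 100,
    "HighWater": 200,
    "GracePeriod": "20s"
  }
}
```

Restart the daemon after editing for the change to take effect.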

install gentoo faggot




What do you have yours set to?


File: b3a9240c0e57990⋯.png (21.78 KB, 157x215, 157:215, 1416294447275.png)


>IPFS is just bittorrent but without duplication of files via multiple torrents.

This. Much better than official description, which is full of buzzwords and outright bullshit about IPFS being persistent storage.




I'm suspecting some kind of memory leak now



maybe ipfs checks available ram to decide how big to make some cache, and maybe that anon has way more ram than you?

just guessing though as I haven't used ipfs yet.


File: 648578af2a0a578⋯.png (7.22 KB, 590x37, 590:37, 666.png)

Satan has shown himself



>666M memory usage

>ipfs daemon

Pretty obvious :^)

In all seriousness, I'm quite sure computers and the Internet aren't just "happy accidents" and were created with malicious intent.



IPFS is like 5 projects: IPFS, libp2p, multiformats, IPLD, Filecoin.

IPFS - web protocol that is supposed to replace HTTP

IPLD - abstract data model (requires high IQ)

multiformats - self addressing data. Numbers that tell you their base. Hashes that tell you their hash function, etc.

libp2p - network stack (can be used separately from IPFS)

Filecoin - paid storage on IPFS


Anyone can correct me if I'm getting something wrong or if I'm being too optimistic.

I think one of the best things about IPFS is that it makes it possible to create and sustain websites like YouTube for astronomically less money than what Google pays to upkeep it. You just outsource to everyone else (users) to act as storage, and they will do it willingly because they will be getting paid with Filecoins for storing your shit. It's genius really, as you need an incentive for people to lend anything; whoever came up with Filecoins is clearly really damn smart to look this far ahead and come up with this shit.



Yup. It's just about getting the implementation right but getting it right is not that simple. I guess we'll see profitable IPFS usage around 2023.



>whoever came up with Filecoins is clearly really damn smart to look this far ahead and come up with this shit.

The founder says in basically every conference talk that few to none of the ideas in IPFS are new, and that IPFS is just an aggregation of existing, proven concepts.

See the images here even



IPFS could probably better be summed up as "git over bittorrent" in concept. Neither of which are new, but of course that would be good. The challenge is in implementing it. Reminds me of PARC, how lots of companies basically just took their research and made it widely available.

But in this case it's more like rounding up standards, and implementing your own modular variant of it, rather than rounding up single products.

That concept in itself is the IP thin waist model.

Not that it has to be original, I'm not knocking them for it. The concept may be simple and unoriginal, but furthering an interface and then implementing it is no easy task. I'm reminded of all the poorly extended forks of p2p tools like amule. Also failed attempts like Bittorrent DNA, which probably would have worked fine if it wasn't proprietary (a p2p network without peers isn't very useful).

>for astronomically less money

I'm honestly surprised people are not attacking them aggressively because of it. It disrupts the necessity for a lot of services. I'm thinking CDNs, DNS, HTTP, and other services people pay to host, that are basically eliminated so long as you have any internet connected box and can run a command that's 2 words.



That's exactly what I was referring to as 'buzzwords and outright bullshit'.

>web protocol that is supposed to replace HTTP

Bullshit. Makes as much sense as 'torrents are replacement for email attachments'.


DAG of content addressed nodes. It's a paragraph in IPFS documentation, that for some reason is called 'project'.


Ultimate hipster shit. Take something that was practised since programming stopped involving physical rewiring, and is so ubiquitous that nobody even notices it - and give it brand name.


is there any way to search for files on the ipfs network (like trackers and dht with bittorrent)? i'm not very familiar with it, so that could be a stupid question.



Read the thread.

ctrl + f "search engine".



>Bullshit. Makes as much sense as 'torrents are replacement for email attachments'.

There is no reason IPFS cannot replace HTTP.

>It's a paragraph in IPFS documentation, that for some reason is called 'project'.

IPLD is a set of specifications. You are probably looking at the wrong repo if all you are seeing is a paragraph. Look at ipld/specs instead of ipld/ipld

>Ultimate hipster shit. Take something that was practised since programming stopped involving physical rewiring, and is so ubiquitous that nobody even notices it - and give it brand name.

But that's wrong you idiot. They have a whole fucking section explaining why that isn't the case. Copy paste:

Q. Is this a real problem?

A. Yes. If I give you "1214314321432165", is that decimal? or hex? or something else?

Well, which one is it nigger?

And that's just bases. SHA-256 vs SHA-3, can you tell the difference?



>replace HTTP

Ok, let's start from the beginning.

What features does HTTP provide?

>Look at ipld/specs

Looked at it. It's still DAG+content addressing.

>you are seeing is a paragraph

I don't see 'a paragraph'. I see a paragraph's worth of information, inflated to look like a full standard.

>real problem

Yes, it's a problem. And it was solved decades ago, from tagged pointers (first LISPs) to URNs and glibc's crypt.

multiformats is design equivalent of leftPad js library.


So as IPFS stands today, could I host a website using it? Assuming of course I was able to convince my users to install an IPFS client.



Your users could just use a gateway

It'll probably be very slow when you're just starting out though.



I don't care about users needing to install software. I've been playing around with wireshark recently to figure out how acestream works, because I want to make my own free version of it. All I really need for it is some way of hosting dynamic content (e.g. a constantly updating m3u8 playlist), and the video chunks linked by that playlist. I am new to IPFS, but this should be possible using IPNS, right?


Not inherently related to IPFS, but if I get an Intel NIC (currently using realtek) will it handle the many connections IPFS makes better?



>multiformats is design equivalent of leftPad js library.

If something is important, it should be standardized ESPECIALLY if it's a solved problem.

The complexity or size of it doesn't matter, since it's just as arbitrary as the rest of the features of a specification. If we all agree that this is how it should be, then there's no problem in having a spec devoted to it. There's no ambiguity, no "de facto" bullshit; extensions and modifications to it are comparable and explicit. A multihash is a multihash, as described in the standard.

Not having it defined is just as bad as not having fixed sized ints defined, and those as a concept fit in much less than a paragraph but are obviously important, and the same thing, a type declaration/specification.

Fundamental building blocks of technology should be varied in size like this, it's basically the entire point.

Also it's ironic for you to point at LISPs and then speak like leftpad is a problem when high modularity and reuse is a core LISP concept. Everything problematic about that package is meta, and mostly centered around the removal incident. The package itself in concept is an ideal. Having the potential to disappear is not a library problem, it's an infrastructure problem. Discouraging reuse for any reason (slow package manager/interpreter/loader/compiler/etc.) is a tooling problem. In a perfect world there would be no penalty or risk for importing dependencies.


Depending on the frequency, you'll want to look into these things in order of slowest to fastest

IPNS, pubsub, IPFS P2P sockets / LibP2P streams.

IPNS publish takes a few seconds to propagate to the entire network.

pubsub has peers subscribe to a stream, and they can all publish to that stream, so it's like a shared broadcast channel. It's as fast as the broadcast, which can be sent multiple ways (floodsub = you broadcast to everyone, gossipsub = some kind of message relay)

sockets are sockets, you connect peers via their peerid, either by wrapping go-ipfs's p2p command or using libp2p directly. It's as fast as the connection.
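A rough sketch of the pubsub option (pubsub is experimental in go-ipfs, so the daemon needs `--enable-pubsub-experiment`; the topic name here is made up):

```shell
# The streamer announces each new chunk; viewers just subscribe.
# Requires: ipfs daemon --enable-pubsub-experiment
TOPIC="my-stream-playlist"

announce_chunk() {
  # Broadcast a chunk's CID to everyone subscribed to $TOPIC.
  ipfs pubsub pub "$TOPIC" "$1"
}

follow_stream() {
  # Each viewer runs this; it prints CIDs as they are announced.
  ipfs pubsub sub "$TOPIC"
}
```

The viewer side blocks and streams messages as they arrive, so in practice you'd pipe its output into whatever fetches and plays the chunks.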



I'd recommend pubsub for this kind of thing. It seems like the easiest way to do it without implementing your own complex peer management and broadcasting, but you can still have that if you need it. Like if you need to transmit data to a specific peer and not the entire group, you still can.

i.e. if a peer joins the pubsub topic, they announce they joined in the topic, you can grab their peerid from that announce message, establish a connection with them, send them the current header for the playlist, and then they'd be in a ready state for whatever the current stream data is supposed to be (probably text multihashes to video fragments in this case).

But it all depends on what you want to do and how you want to do it. You could easily just publish to IPNS and be done in 1 command if you make sure your stream delay is large enough to populate enough fragments before each publish. But it seems less than ideal.
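The "1 command" IPNS route could look like this (a sketch; the playlist path is an assumption, and each publish takes a while to propagate):

```shell
# Re-point your node's IPNS name at the updated playlist after each segment.
publish_playlist() {
  cid=$(ipfs add -rQ "$1")       # add the updated playlist dir, print only the root CID
  ipfs name publish "/ipfs/$cid" # your peer ID's IPNS name now resolves to it
}
```

Viewers would then just re-resolve `/ipns/<your-peer-id>` to pick up the latest playlist.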


There are too many variables: the NIC model, the driver stack, the network config.

You're most likely to encounter problems at the router level because router vendors hate you.



The Intel NIC is an EXPI9404PT. The realtek is an RTL8111G (built into my MB).

Drivers are just linux mainline kernel drivers. No special network config.

My router doesn't seem to have any issue either. It reports about 1100 connections (total, not just ipfs).



Thank you. I was planning on essentially duplicating the HLS protocol and having the client refresh the master playlist, but pubsub seems like a much more direct way of doing what I want, I'll just broadcast the hash of each chunk as it becomes ready. I doubt I'll need to do any sort of peer management as I'll just be broadcasting an immutable list of chunks, then the client will request the chunks it requires as HLS does, but it's good to know I've got the option if necessary.

But from that IPFS blog link:

>As it is today, the pubsub implementation can be quite bandwidth intensive. It works well for apps with few peers in the group, but does not scale. We have designed a more robust underlying algorithm that will scale to much larger use cases but we wanted to ship this simple implementation so you can begin using it for your applications.

So it isn't ready just yet, but as long as there's eventually some sort of peer hierarchy so the original host doesn't have to broadcast to each client individually, this should work just fine.



oh someone already did this lol, never mind.




>So it isn't ready just yet, but as long as there's eventually some sort of peer hierarchy so the original host doesn't have to broadcast to each client individually, this should work just fine.

I think that's what gossipsub is trying to solve.



Very cool. I'd love to see someone take it further and have the server be standalone instead of a proxy to a container though (dropping the legacy stream). Probably wouldn't be a hard thing to convert.

I'm real interested in seeing how viable P2P streams are. Sometimes I do want to just stream my desktop with like 10 people, and I'd love that to be doable without relying on external services or shouldering the entire bandwidth requirement yourself. If I can stream and just have other peers relaying the blocks, the delay shouldn't be as bad as existing amplification solutions. I wonder how Skype handles all this.



>I'd love to see someone take it further and have the server be standalone instead of a proxy to a container though

I think that would be fairly difficult to do practically. Someone streaming would need to be broadcasting (ideally) to at least 3 or 4 peers himself so the stream doesn't immediately cut out if a peer drops off, and that would be quite challenging for a client trying to stream HD video off of a home connection. Doubly so if you're streaming on wifi, or on some sort of conference's internet.


The Corbett report is now on ipfs.


If you don't know Corbett, here's the RationalWiki article:





because everyone knows the far-left is rational



Corbett is baby's first /pol/ come on



I've got no idea who corbett is, just pointing out rationalwiki is only good for a quick laugh.



>rationalwiki is only good for a quick laugh.

Uh yeah, I know, that's why I linked it?


Why has IPFS not become huge with the recent crackdown on piracy? Like a lot of old trackers getting shut down and whatnot? Or is there a large 'underground' community that I'm not aware of?



maybe it's harder to use than torrent magnets. public "trackers" are just magnet lists now.



It's slow



very slow



IPFS is inferior to torrents for every use case



It's not finished yet.

You can share multihashes today in the same way you share magnet links. With the index being just a file in IPFS itself, it's hard to take it down without going after every node hosting it.
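Sharing that way is basically two commands (a sketch; the filename is made up):

```shell
# Add a file and print the hash you'd hand out in place of a magnet link.
share_file() {
  cid=$(ipfs add -Q "$1")   # -Q prints only the final root hash
  echo "anyone can now run: ipfs cat $cid"
}
```

Anyone holding that hash fetches the exact same bytes, from whichever nodes happen to have them.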

IPFS will likely gain popularity when anonymous routing is integrated into the main branch.

It would make openly distributed indexes simpler to make and use, while harder to shut down as well.

There was something like this


that used some json hosted over IPNS so the client was static but the index was dynamic.

If you scrape the DHT you can just index the entire network as a distributed database. Some people are doing this already.







Because 99.99% of people aren't autistic like you. If you want it to become "huge" then you need to make it accessible to people who have less than 20 centimeters of beard on their neck.






There, now answer me again, WHY has IPFS not caught on? And don't tell me it is "shit marketing"



>some obscure meme address that doesn't mean anything nor make reference to ipfs

>website doesn't even load without javascript

Do I even need to answer?



My brother is relatively normie tier. He asked me for help pirating textbooks this fall. First he tried looking them up on the Pirate Bay, but they weren't there. Then I showed him libgen, which is where he found them.

The two major problems for pirates are:

1. you need to be able to find the files your looking for.

2. you need to be able to rely on the file being available (have peers)

For web based platforms like HTTP and IPFS, 1 is basically impossible. Look at the money google pulls in doing a shit-tier job of this. 2 is unsolved in bittorrent, and IPFS (afaik) does nothing to improve on this. This is why libgen beats torrenting here, because you know the file will be there.

Okay I installed orion. How is this supposed to help non-autists? I probably have autism, and I have no clue what to click on to start downloading things. Do you really think I could suggest this to my brother and have him start pirating his books with it? Fuck, do you think I could recommend this to my brother and have him do anything with it? If this is what normie-tier software looks like in the IPFS world, then it has a really long way to go.


You could just install the ipfs browser plugin with ipfs in the background. I don't know what orion is, but the plugin seems pretty normalfag friendly. Just find an ipfs address on the web and click; the download starts, assuming it was pinned and/or there are peers, with a nice GUI showing the progress.



this isn't normie friendly at all. Normies don't normally run daemons. "Assuming it was pinned and/or there are peers" is way too strong an assumption. TPB sorts purely on peer count for a good reason. ipfs-search.com only hosts dead links. First search I tried: last seen a month ago, last seen ten days ago, last seen 4 hours ago... what about things I can download right now? What niggerfaggot normie is going to sit there for ten days waiting for the resource to come back up? (They have a cute color scheme: green means "seen within the last month", yellow means "seen within the last year", and red means "seen longer ago than a year". How helpful.)

Normie tier software would look like this: you install a standalone program. You double click the icon on your desktop. Up comes a window with a search bar and some recommended links. The search bar finds resources that are: 1. alive 2. relevant, in that order. The recommended links are new, interesting, and of course alive. It also has a little pane where you can add new resources from your computer, add a description to them, whatever.



>For web based platforms like HTTP and IPFS, 1 is basically impossible

With IPFS you really only need to know the hash of the book you are looking for, and IPFS will automatically request the file from other nodes. If there are any seeders it will find them without any extra work. But then there is the problem of finding these hashes, and where. A good improvement to IPFS would be some sort of hash database for when you are looking for something, and since it's just hashes you can't be fucked with the excuse of copyright infringement since it's just a hash and that's not copyrighted. This could make #1 normalfag and non-autist friendly.

>2 is unsolved in bittorrent, and IPFS (afaik) does nothing to improve on this

There's Filecoin, which was made with the goal of incentivizing people to store files and make them available on the network for as long as possible. It rewards people with the cryptocurrency according to how much time they seed the files, and if I'm not wrong, also based on their demand.


Is there a way to use ipfs without filecoin? i.e. without having your downloads/uploads linked to an immutable hash chain, which is to say always able to be traced back to the filecoin account/metadata that made the transfer? Or is there a way to generate a new filecoin account/identifier on demand?



>since it's just hashes you can't be fucked with the excuse of copyright infringement since it's just a hash and that's not copyrighted.

Ha! How kindly you think of various western governments. I think "what color are your bits?" is the most complete treatment of this, but the pervasive banning of TPB demonstrates that you don't need to host the resource itself to get torn down.


File: d22bc8bb25226cd⋯.webm (135.38 KB, 640x360, 16:9, oy_vey_shut_it_down.webm)

Hosting your own data on your own machine? Who let this happen?
