
/qresearch/ - Q Research

Research and discussion about Q's crumbs

File: 8e1a4c121b66428⋯.jpg (23.65 KB, 400x265, 80:53, ComputerProgramming16x9.jpg)

d1575b  No.2352371

This is a thread created by a programmer for programmers to interact in support of QResearch. A great place for live Q&A chat.

Allowed file types:jpg, jpeg, gif, png, webm, mp4, pdf

Max filesize is 16 MB.

Max image dimensions are 15000 x 15000.

You may upload 5 per post.

d1575b  No.2352393

TW, you there?

8ce561  No.2352397

TW here.

d1575b  No.2352403

WT here. TW said:

>It's a searchable offline archive of posts and tweets, easy to seed with multiple collections. Shows time deltas. Qclock filter function. Caching proxy for images, rewrites URLs to fetch from archives when original picture is not available.

>I run it locally. I have no idea what the best packaging/distribution approach is, thoughts are welcome.

>Multiple levels of trust needed to share, run. Doing this safely for everybody is my main concern. Not everybody can inspect the code at every release, or would be willing to install/run on their machine.

Wow. Sounds involved. Let's get some basic info outta the way and maybe I can give you things to consider…

8ce561  No.2352404

So, WT, ideas on how I could share what I have written?

d1575b  No.2352409


First off, what platform(s) are you targeting?

d1575b  No.2352418

Second, please tell me it's going to be free to the masses.

d1575b  No.2352423


Windows? Android? Apple? Any browser?

8ce561  No.2352426


Format is made for desktop, haven't tested on mobile.

Frontend will run on anything that has a relatively modern browser.

Backend needs node. Also ext4fs to store large collections of files, but this limitation can be removed.

d1575b  No.2352437


Windows Desktop?

8ce561  No.2352445


>Learn how to archive offline.

The recent discovery about the Twitter post-correction delta and the Qclock means the tool would also speed up the dig.

d1575b  No.2352450




8ce561  No.2352455


The frontend runs on any modern browser.

Electron could be used to package the app for any platform, frontend and backend. I've started working on it but it needs more work, if this is the best packaging approach.

I host the backend in a (linux) virtual machine, but it could be a server in the cloud. Having everything available offline was the driving motivation, so cloud hosting would defeat that purpose.

d1575b  No.2352462

Third, what's the dev platform?

I know a lot about deployment on Windows (all flavors), Android and a little about Apple (if developing on Windows). For other stuff you'll have to leave a post here and wait to hear something.

d1575b  No.2352470


What's the front-end written in? GUI or cmd line UI?

8ce561  No.2352495



The frontend is written in JS (modern, requires babel/JSX) and runs in any browser. GUI. (The command line is used to fetch resources, but I'll integrate that to the UI.)

Electron embeds a browser and would allow it to run like a native app on Mac & Windows.

Distributing in a *safe* way is my main concern.

>How can user trust that they don't get a malicious version of the app?

>How can I avoid doxxing myself sharing this?

d1575b  No.2352506


Wow. Good questions. Hang on…

8ce561  No.2352507

If it were a safe solution I'd just put the code up on github, post a link here, and let someone address the packaging/distribution. I'm a simple codeanon.

d1575b  No.2352520


So you have a unique problem, my man. I don't think I've ever been involved in a deployment that was anonymous. It may sound ridiculous, but perhaps you need to think like a hacker in that you deploy an install program via torrent and include a pre-auth "safe certificate" from an anti-virus company.

8ce561  No.2352550


A simple bootstrapper with code signing? Yes, it seems to solve the "anonymous distribution" part of the problem.

How would users trust that I don't get comped, resulting in malicious code getting pushed in updates?

d1575b  No.2352567


Anti-virus companies will issue you (for a small fee) a certificate. Users go to the AV website, enter the KeyCode from your certificate and are told if it's safe or not.

8ce561  No.2352571


More precisely, if the software is useful, how do I make sure it cannot be exploited as a trojan by Clowns? Or why is this not a concern?

d1575b  No.2352575


Oooh. Bad actors. Hmmm. That's going to have to be approached from a checksum/md5 point of view. MD5 is your friend for Linux dists.

8ce561  No.2352584


This moves the trust to the anti-virus company. Do the Clowns have a copy of the CA? Is it not safer to use a self-signed CA? I don't know enough about this topic to make a decision.

8ce561  No.2352596


Code signing solves the tampering problem, but it doesn't prevent malicious actors from getting at me and taking over the distribution infrastructure.

If sharing this is going to put me and everybody at more risk than keeping the code for myself, it seems rational not to?

d1575b  No.2352606


This certificate I'm talking about is not the same as a CA. It's for verifying that your app is not malware.

As for MD5, be sure you have finished all MD5 computations before dropping the 5 to 8 bucks for the AV cert.

d1575b  No.2352610

If you stick with MD5 and give appropriate warnings to ensure MD5's match before use, you should be at minimal risk.
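The checksum workflow described here can be sketched in a few lines of stdlib Python; the `file_digest`/`verify` names are just for illustration, and the default is SHA-256 since MD5 collisions have been practical for years:

```python
import hashlib

def file_digest(path, algo="sha256", chunk=1 << 16):
    """Hash a file in chunks so large archives don't need to fit in RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def verify(path, expected, algo="sha256"):
    """Compare the published digest against a freshly computed one."""
    return file_digest(path, algo) == expected.lower()
```

The author publishes `file_digest()` of the release alongside the download; users run `verify()` on their copy before installing.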

d1575b  No.2352623


A malicious actor could do a man-in-the-middle attack on your database if he knows a specific person is using it. I dunno. The variables on attack vectors expand exponentially there.

d1575b  No.2352634

I don't want to mislead you–I'm stretching my knowledge here since I'm primarily a Windows/Android Dev. Sorry I couldn't be of more help.

8ce561  No.2352645


Writing a great app and distributing it in an *apparently* safe and anonymous way is an excellent vector to compromise autists. I don't know why the Clowns haven't already done that. Too much effort? Risk of being exposed?


The code signing certificate is trusted by a CA. Anyone with a copy of the CA's key can produce signed code.

You are suggesting MD5, but it is a very weak hash function.

d1575b  No.2352681


I'm saying get a reputable company to certify your app as legit (not malware) since you'd be anonymously distributing it as a torrent.

Then, an MD5 can tell the end user that not a single byte has been altered in the current deployment. MD5 is more than adequate for that small task.

d1575b  No.2352683

Gotta head to work. Hope I helped somewhat. Shadilay!

8ce561  No.2352696


There is no database, it's all in local plain files.


I understand. It is a difficult topic. I'll keep in mind the signed code bootstrapper idea, it is part of the solution.

Thank you for the discussion.

If Q team is reading this, maybe get in touch? Extract me and I'll happily write code for the community. Though it's perhaps not worth the hassle for you at this point.

8ce561  No.2352708


Thanks & Shadilay bro!

29f2fa  No.2354303


EA here. Can help with cloud / distribution / devops.

If its "all local plain files" (btw that's good for "store it all offline") then the attack vector changes from MITM to corrupting the file sources. So we'd want a way for original file owner to verify/checksum the distribution source, and the downloader to verify/checksum his copy.

>>2352520 Certificate can come from LetsEncrypt.org. Also it's free. I use this for some of my sites already.

8ce561  No.2354516


Thank you for the input.

Not sure about LE certificates, I believe they are tied to a domain due to the verification process? I don't know that I would be able to secure a domain anonymously. Also LE certs expire after 3 months, they are not meant for code signing.

I am hesitant to go with a self-signed CA; it seems risky, but I haven't thought it through yet.

Local plain files by design. I don't rule out using one or several local DB engines, but they would only contain information that can be reconstructed from the local plain files.

Indeed, corruption at the source could be a problem. I have a (python) 8chan thread archival tool that could be made available as a service (run by independent sources), integrating the hashing process. Cross-checking sources would help detect comped ones.

I'll work on this aspect as soon as paying job permits.

Is there such a thing as anonymous github without going to the dark web? A trustworthy (NSA/MIL) git server would be awesome, but I have no idea how I would get access to that and have reasons to believe it is safe to use. Also I am neither US resident nor citizen.

dc0709  No.2359386

File: 15cdf7c04f82765⋯.jpeg (11.71 KB, 300x168, 25:14, lol.jpeg)

not trying to slide, but you are all geniuses! fantastic work you are all doing.

please keep it up, and when you have the time, offer some advice to a noob trying to become a master computer scientist such as you guys.

>what programming language to start

>what to learn besides programming languages

>where are some good places to learn CS

thanks and again great work

1aaba4  No.2360560

File: dd199e053c4a9b4⋯.png (1.94 MB, 1280x1920, 2:3, 2.png)

File: 9d58e3081e70b01⋯.png (2.89 MB, 1280x1920, 2:3, 027a0f6a2e1d249c98c6459eca….png)


damnit, I am having a brain fart; so many steg progs installed I can't remember which one subtracts one image from another.

2.png needs to be subtracted from 027, or combined.

from what I can tell with bitmasking it's a girl kneeling with a fountain of blood spraying on her or out of her; the water is blood, I separated them. HELP!

also zsteg is far far superior for detecting hidden shit

found all kinds of smartphone viruses posted by clown bots, motorola assembly files and shit



anyone running apache can you get this up and running? finding hidden twitter images

careful with 027; it has detected headers of phone executables, but that could be a false positive


8ce561  No.2364880

File: 51ec050adb0ced8⋯.png (279.22 KB, 1656x1293, 552:431, qminer_demo.png)


e4be4b  No.2366039

File: deb3d379242dafc⋯.jpg (399.54 KB, 2278x1780, 1139:890, patchwork client.jpg)


> Is there such a thing as anonymous github

I'm starting to evaluate git-ssb for myself. I'm not sure yet about how well it would withstand attacks by determined adversaries. It is decentralized and requires some extra software running, I'm not sure if that counts as "dark web".


It's built on top of secure-scuttlebutt:

"A database of unforgeable append-only feeds, optimized for efficient replication for peer to peer protocols."


Social network application (kind of like twitter):



> Code signing solves the tampering problem, but it doesn't prevent malicious actors from getting at me and taking over the distribution infrastructure.

Secure Scuttlebutt has similar benefits to a blockchain. Anyone can distribute your messages (not tamper), and you can't rewrite your post history. Kind of like with Git how every commit has a hash that is based (in part) on the commit history. So if you were compromised, only messages you write from that point forward would be affected.

Conceivably, you could publish messages in the past if you create them in order. So for example you could create a "Q" identity (public/private keys) and import all Q posts in order and have them contain the desired timestamps. You wouldn't be able to change them once published to the network, kind of like with Git's commit history.
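The append-only property described above can be illustrated with a toy hash chain, where each message id commits to the previous one, git/SSB-style. A minimal sketch (real SSB additionally signs each message with the identity's private key, which this omits):

```python
import hashlib
import json

def append(feed, content):
    """Append a message whose id commits to the previous message's id,
    so changing any old message invalidates every id after it."""
    prev = feed[-1]["id"] if feed else None
    msg = {"prev": prev, "content": content}
    msg["id"] = hashlib.sha256(
        json.dumps(msg, sort_keys=True).encode()
    ).hexdigest()
    feed.append(msg)
    return msg

def verify_chain(feed):
    """Recompute every id; tampering anywhere upstream breaks the chain."""
    prev = None
    for msg in feed:
        body = {"prev": msg["prev"], "content": msg["content"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if msg["prev"] != prev or msg["id"] != digest:
            return False
        prev = msg["id"]
    return True
```

Anyone can replicate such a feed without being able to rewrite it, which is the "only messages from that point forward would be affected" property.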

d5f7c2  No.2366235


Self-signed is useless since HTTPS will flag it as self-signed, hence "questionable".

No certificate authority trail to follow for credibility.

Basically, as is, the server owner says "I vouch for myself."

d5f7c2  No.2366298


Read some books. Very few real "masters" here, but some highly competent, self-taught anons with frequently "narrow" expertise.

Don't bother talking about Internet traffic routing, security or details of the IP protocol family. Very, very few here speak that language.

Host-based programming/scripting mostly, and web stuff.

d5f7c2  No.2366315


So you already discovered the password?

e4be4b  No.2366980


For code signing, PGP (gpg) is great. Git supports commit signing with it, and it is used by major linux distros by their package management utilities.

d38a5a  No.2367573


Start with Python. https://www.learnpython.org/

Also learn databases - start with sqlite, because it doesn't require a server. UnQLite is also a good db to start with - it's up to you to learn the difference between SQL and NoSQL. Learn html and javascript. There is also a javascript runtime called nodejs that can be useful.
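The "no server required" point about sqlite is easy to see with Python's built-in sqlite3 module; a minimal sketch with a made-up posts table:

```python
import sqlite3

# In-memory database; pass a filename instead to persist to disk.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, author TEXT, body TEXT)"
)
conn.executemany(
    "INSERT INTO posts (author, body) VALUES (?, ?)",
    [("anon1", "first crumb"), ("anon2", "second crumb")],
)
conn.commit()

# Parameterised query: plain SQL, no server process anywhere.
rows = conn.execute(
    "SELECT author, body FROM posts WHERE author = ?", ("anon1",)
).fetchall()
```

That's the whole setup; the same SQL carries over to server databases later.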

I believe these are good places to start:



91b78f  No.2368501

Hello fellow programmers!

Just viewed this video https://youtu.be/MqmeteSv8cU and SerialBrain2 spotted that "FARMER" and "QANON" have the same English letter gematria.

Pretty cool, but who would notice that without a calculator or word list? Just made one! Based on English vocabulary file "enable1.txt" (used by Words with Friends game). https://anonfile.com/Z8H295f7b1/gematria.txt

It would be better to make a gematria word list based on Q post, especially since English gematria seems to often reference names…

There may be a certain intuition or Sense Motive to hone in on likely clues but let's up our game!
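For reference, the English letter gematria used here is just A=1 through Z=26, summed; FARMER and QANON both come out to 61. A minimal sketch (the `build_index` helper mirrors the posted word-list idea and is hypothetical):

```python
from collections import defaultdict

def gematria(word):
    """Simple English letter gematria: A=1 .. Z=26, non-letters ignored."""
    return sum(ord(c) - ord("A") + 1 for c in word.upper() if c.isalpha())

def build_index(words):
    """Group a vocabulary by gematria value, like the posted word list,
    so equal-value words can be looked up without a calculator."""
    index = defaultdict(list)
    for w in words:
        index[gematria(w)].append(w)
    return index
```

Running `build_index` over a vocabulary file like enable1.txt reproduces the posted list.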

05a340  No.2368709


Anon, that was beautiful, you are a great 'first contact' Anon!

fa4e1b  No.2368845

File: 137be270b9a7de0⋯.png (865.2 KB, 2880x1800, 8:5, QAnalysis.png)

I posted as new thread before I saw this thread…

Tool for Q-searchers. Some fragments mapped, ball is in your court.


Node JS, Electron, Cytoscape, GunDB

cd into extracted directory and run "npm run"

d5f7c2  No.2368941


Gematria originated as an Assyro-Babylonian-Greek system of alphanumeric code or cipher later adopted into Jewish culture that assigns numerical value to a word, name, or phrase in the belief that words or phrases with identical numerical values bear some relation to each other or bear some relation to the number itself as it may apply to Nature, a person's age, the calendar year, or the like. A single word can yield multiple values depending on the system used.

Although ostensibly derived from Greek, it is largely used in Jewish texts, notably in those associated with the Kabbalah. The term does not appear in the Hebrew Bible itself.

Some identify two forms of gematria: the "revealed" form, which is prevalent in many hermeneutic methods found throughout Rabbinic literature, and the "mystical" form, a largely Kabbalistic practice.

Though gematria is most often used to calculate the values of individual words, psukim (Biblical verses), Talmudical aphorisms, sentences from the standard Jewish prayers, personal, angelic and Godly names, and other religiously significant material, Kabbalists use them often for arbitrary phrases and, occasionally, for various languages.

A few instances of gematria in Arabic, Spanish and Greek are mentioned in the works of some Hasidic Rabbis, who also used it, though rarely, for Yiddish.

However, the primary language for gematria calculations has always been and remains Hebrew and, to a lesser degree, Aramaic.

Numerology is any belief in the divine or mystical relationship between a number and one or more coinciding events. It is also the study of the numerical value of the letters in words, names and ideas. It is often associated with the paranormal, alongside astrology and similar divinatory arts.

Despite the long history of numerological ideas, the word "numerology" is not recorded in English before c.1907

So it would be correct to describe it as sort of a mystical, Kabbalistic form of numerology voodoo like reading chicken bones?

With no scientific or linguistic basis?

Just magic words and numbers?

And we are supposed to take it seriously?


8ce561  No.2369040


Thank you Anon! SSB is precisely what I was looking for.


Still setting this up, I'll get there.

40c6d7  No.2369158


I don't know about the mystical aspects, but it sounds like some players are using it as a means of secret messaging. Q = 17 has come up a bunch of times too, so it could be a communications technique.

Let's check 187…

If people are using gematria as a name-drop/signature and time deltas sometimes… Let's check 111… No luck. What are some good numbers to search for that might match names?

All those 2 letter abbreviations would have values between 2 and 52… This isn't going anywhere. Wonder if any names equal 30 after those 30 days of silence… NP = 30, Nancy Pelosi? She isn't in the news at this time, bad loose thread…

If and when a real gematria indicator is present you would think other corroborating hints would be in the same message… Maybe doing it backwards then. :/

Fine! I AM NOT SERIAL BRAIN! HE IS THE BRAIN AND I AM THE MOUTH! haha Maybe I got carried away there…

b82900  No.2369794

What if it's useful to LABEL the edges between nodes. And thus/then have multiple edges.

For instance, Strzok has multiple connections to the same person (ie Page etc) but each connection has a different flavor (ie co-conspirator, "textual" relationship). This is my personal gripe with https://DiscoverTheNetworks.org, that it doesn't provide a 10k foot view of how the quid pro quo works. Another example: WJC gives a "speech" on a date, Russia "pays" him for it, and then somehow uranium is exported to them.

Yes, populating this thing will be time-consuming, but we need not try to enter all the details all at once. Eat an elephant one bite at a time.

SSB looks like a winner.

GunDB is very interesting but doesn't have property lists for edges. It's possible to use intermediate nodes for edges, but then the challenge is data management or graph visualization.

6b3962  No.2370327


Looks interesting. Using the API?

7ceeb5  No.2370545



hope u dont mind muh lurking as well

6b3962  No.2370561




>Start with Python, SQL. Learn html and javascript. These 4 languages will get you FAR all by themselves.

Start with HTML/Javascript if you are starting out, then Python and SQL

fa4e1b  No.2371101

GunDB lets you have property lists for each node and many connections. The Q Analysis app lets one manage those individual properties for each node, with only name and image displayed. Each node can cross-link to another graph file or be linked to a url. All html css js. Can customize the presentation layer.

8ce561  No.2371704


Yes, it loads a local JSON file that comes straight from qanon.news/api/posts. I plan to add support for more sources but time has been scarce lately.

Hopefully I can share this before Q starts wrapping up.

c9230e  No.2371963


Haha yeah, can we finish our tools before Q finishes dismantling a 2,000 year global conspiracy to enslave the entire species?

bb7b36  No.2377944

Programmer here. I was working on a self-redistributing OS and encountered a similar problem to yours on verification. You can't provide a guaranteed 100% non-comped program; all you can do is make compromise extremely unlikely (don't let perfect be the enemy of good).

The MD5 hash has been broken for years, and data can be manipulated in transit or at rest. A JavaScript payload is sent in plaintext and can be intercepted (see 'problems with encryption in JavaScript': you can't send a reliable JavaScript program without a reliable means of transport).

Electron is a Chrome spin-off and I'd advise away from 'Bridge' technologies.

It's unclear if you're after a standalone app or a webhosted app, so I'll attempt to bridge both. Webhosted is nearly impossible to give any real assurances of non-comped status (you or host can be compromised, can be hijacked on site, etc).

Standalone requires a webhost, so it hits the same issue.

So, tips:

1) Keep the app small. The smaller it is, the easier it is to do a byte-by-byte comparison. A 20 MB app would take mere seconds; 800 MB is going to be a pain.

2) Use anything stronger than MD5 for a hash. Avoid anything with the NSA's rubberstamp of approval (that means no SHA256 etc).

3) Supply more than one type of hash of the software.

4) Make sure you have an alternative source that keeps a copy of the current hash(es) [EG an archive page] so even if main website is comped, archive isn't.

5) Utilise multiple webhosts for hosting the main package itself (why? To compromise the software all hosts have to be hijacked). You can include free software hosts etc into this mix. Don't touch Mega because Kim Dotcom no longer owns it (NZ government gave it to a Chinese investor).

6) Include a copy of the hashes in a text file which is shared with the software package.

7) Use a 'read only' storage format (EG squashfs or an encrypted archive). In order to tamper with it, you'd need to replace the entire file.

8) Supply the individual with the means to build their own from scratch. So if the binaries are comped, it's possible to recreate a non-comped binary from source.

9) Archive copies of the source code.

10) Only push out major releases (so you're not overwhelmed with the archiving/hashes/mirroring).
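Tips 2, 3 and 6 above (multiple hashes, avoiding NSA-designed algorithms, shipped as a text file alongside the package) can be sketched as a small manifest generator. A minimal sketch; the manifest format and default algorithms (SHA-3 and BLAKE2, neither an NSA design) are my assumptions:

```python
import hashlib

def manifest(path, algos=("sha3_256", "blake2b")):
    """Produce a hashes.txt-style manifest with more than one digest
    type, so a single broken hash function doesn't sink verification.
    Format (algo, digest, path per line) is illustrative."""
    with open(path, "rb") as f:
        data = f.read()
    return "\n".join(
        f"{algo}  {hashlib.new(algo, data).hexdigest()}  {path}"
        for algo in algos
    )
```

The resulting text file is what gets mirrored to the archive page and bundled with the release.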

Essentially to avoid software compromise, you need many alternative copies and many checks/balances. As for yourself being compromised, you need someone who acts as your watch guardian (they don't have control over the project, but they do have the power to say if you're compromised).

Having warrant canaries and a specific coded signal that you can mention that the community will know to mean you're compromised will also help.

If you sell out and none of the checks succeed, then the open source code or binaries would be analysed or decompiled by a white hat eventually and you'd become exposed anyway if that was the case.

But the most compromised product of all is the one you don't even produce. ; )

8ce561  No.2380208


Wow. Thank you Anon, this is a fantastic response, very well thought out. Will re-read often.

Docker? Small images, reproducible builds. Not much isolation, but probably enough. Not very easy to run.

Virtualized Alpine-Linux-based iso image? A little heavier, perhaps not so much with careful decisions (go backend instead of python). I think I prefer this approach.

I'll try harder. Again, thank you!

fa4e1b  No.2381203


Yes on all points. However, for my part, I've got too little time to maintain it myself. The source is there. Anyone can read it and decide if they wish to utilize it. It's a start; hopefully others can help realize it. I achieved what I set out to achieve and figured others might get something out of it.

8ce561  No.2391576

I created a board for Q Research software development.


6b3962  No.2398158

Whatever happened to the idea of a Q Research Wiki? Did that whole thing die out?

bb7b36  No.2401309


Wiki software requires a dedicated PHP host of some sort to host (especially if you want to stay protected against censorship).

Wikia is a free alternative but it reeks of liberialism and I bet would censor in a heartbeat.

It would be nothing short of a full-time job to maintain (both against shills and keeping it up to date/organised).

I could guide you on the basics of setting up MediaWiki on a Linux box (LAMP + some light configuration), but I absolutely do not have the resources to support the endeavour, I am literally overstretched.

6b3962  No.2412127


I like the idea of automation. If we had a good way of trawling the breads to assemble info, the wiki could build itself. That, too, is probably a full-time job.

What are the projects that are currently being worked on? Tying them together or collaborating would make things go faster. Let's pool our resources.

1bc038  No.2449345

just a question

looks to me like there are very advanced bots out in the open

if so, I might have bumped into such bots in public forum

- they follow an agenda

- they give likes

- they give dislikes

- they distract

- they talk to each other

- they answer questions

- they notice Leetspeak, but can't understand it

2e41e8  No.2469655

I made a neat script that could be useful for Linux users.

Many times if you are making a script to process a lot of files it is not so easy to fully use all your possible CPU power (cores->threads).

So after a few tries I came up with this script that can be used as a base. It is easy to modify to your needs, and it uses all the CPU that you allow.

For example, if you have files to process (say, testing thousands of images for steganography), list the files, feed the list as standard input to the script, and let it spread the processing across all available cores/threads.

For example:

- list files using ls -1 *.png and pipe the output to the script

- or any other line-based info that you can process; the script can spread the items to all cores in whatever way you define.

here is the script:




#!/bin/bash

# THIS gets the available processing units; 4 cores => 8 threads gives value 8.
# Depending on your jobs, try values like units -1, -2, +1, +2… or *2, *3, *4.
units=$(nproc --all)

tot=0
while read line; do

tot=$(($tot + 1)) # just a counter to display total processed

./clean1.sh "$line" > "$line.clean" & # THIS line runs the parallel job. Change it to anything you want; remember & at the end

jobs=$(jobs -p | wc -l) # currently running jobs

echo $tot $line "($jobs)"

while [ $jobs -ge $units ]; do # if jobs are at the maximum value, wait

sleep 0.01

jobs=$(jobs -p | wc -l)

done # when a slot frees up, continue reading input and add more jobs

done

wait # let the last batch of jobs finish

echo Done $tot
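The same bounded fan-out can be sketched in Python with concurrent.futures; `process()` below is a stand-in for clean1.sh, and a thread pool is used because this kind of job mostly shells out or waits on I/O (for CPU-bound Python work, swap in ProcessPoolExecutor):

```python
from concurrent.futures import ThreadPoolExecutor
import os

def process(path):
    """Stand-in for clean1.sh; replace with the real per-file work."""
    return path, len(path)

def run(paths, workers=None):
    """Fan work out across a bounded pool, like the job-counting loop
    in the shell script; workers defaults to what nproc reports."""
    workers = workers or os.cpu_count()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process, paths))
```

`pool.map` preserves input order, so results line up with the file list you fed in.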

fa4e1b  No.2493793


AI can assist in this. I’ve been working on AI to monitor reddit for shills and abusive moderation. Talking point detection is on my list too.

If we can detect these things with AI we can respond to shills with evidence to refute their argument and with moderators, archive and record events to expose the infiltration.

I’m hesitant to post code yet, not sure it’s a good idea to put something out that can be weaponized by trolls the same as we use it as defense.

Right now I do an ok job of detecting when people are arguing vs debating, asking questions vs concern shilling. Starting to narrow in on detecting posts likely to be removed by a boards moderator but need more sample data.

None the less it’s possible. We just need great training data. Sample conversations with shills and sample moderated posts.

e7dd40  No.2494187



6b3962  No.2495240


Some javascript magic? I'd be interested in reading it.

fa4e1b  No.2535594


I’ll post source this weekend. Real life has been occupying free time.

6b3962  No.2537432

File: 08b4e0e7aba612d⋯.gif (130.52 KB, 220x268, 55:67, doit.gif)

b7fb97  No.2561043


Thread parsing automation (you'll need to install Beautiful Soup and Mechanize; on Linux this is easy [do an apt-cache search for beautiful soup, and then mechanize]).

Here's the HunterKiller bot (it parses threads, and the code is clear enough that you can probably re-engineer it to parse posts from threads). It was censored by the mods when they deleted the Q-branch thread:


Uses python 2.7 to my knowledge. Can't help with Windows, I ditched that shit OS years ago.
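For anons without Beautiful Soup/Mechanize, the parsing half can also be sketched with the stdlib html.parser; the `class="body"` markup below is a hypothetical stand-in for the board's real HTML (the JSON API is usually the easier route):

```python
from html.parser import HTMLParser

class PostExtractor(HTMLParser):
    """Collect the text of every <div class="body"> (hypothetical
    markup); tracks div nesting so inner divs don't end a post early."""

    def __init__(self):
        super().__init__()
        self.in_body = 0   # div nesting depth inside a post body
        self.posts = []

    def handle_starttag(self, tag, attrs):
        if tag == "div":
            if self.in_body:
                self.in_body += 1
            elif dict(attrs).get("class") == "body":
                self.in_body = 1
                self.posts.append("")

    def handle_endtag(self, tag):
        if tag == "div" and self.in_body:
            self.in_body -= 1

    def handle_data(self, data):
        if self.in_body:
            self.posts[-1] += data
```

Feed it a thread's HTML and `posts` ends up with one text blob per post, ready for keyword or shill-pattern matching.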

6b3962  No.2561161



Interesting! Looks good - although you'll possibly miss some breads due to ebakes and what not if they don't match your "Q Research" restrictions.

I've been archiving the JSON from here since about FEB. I have over 3000 breads all archived locally and online in JSON - I just need to work out some logic on how to trawl with some context in order to make some sense of all the data we've found.

I'll take another look later on. Looks good!

b7fb97  No.2561191


Those are 'professional level' software bots, which have been in development for quite some time now (since at least 2010). The end goal of such software is to get 'natural speaking' bots that can engage targets (and 'talk' with other bots to make it seem like a legitimate conversation is ongoing).

I spent years lurking and interacting on dubious forums where such malicious activities were being tested. The bots aren't perfected (they have the same flaws as normal chatbots, presently), but there's an ongoing effort to make them 'more advanced'.

HunterKiller bot is homebrew, but it's based on several iterations of code which was based on observations of the so-called 'professional' bots. Such software is sold to both military and political activism groups (the bad kind: think Media Matters).

HunterKiller was my proposal to counter the bots: basically, a bot advanced enough to hunt other bots. What I've given you is a barebones example that should contain sufficient enough information for you to build your own variants.

It's my personal opinion that passion trumps corporate software development any day of the week. That code took me about 7 days to write (I have limited free time), but the potential to tack on other, more advanced Python libraries are there.

PS: Shills often use scripts (folders and pieces of paper in more primitive operations), more advanced shill operations use specially designed software that allows them to copy/paste generic garbage responses (usually across several accounts or IPs, depending on sitch), and very advanced shills have bots that automatically select what garbage to copy/paste with the shill acting as bot handler.

Check out the Clown College thread where I explain more on bots in my earlier posts.

From a strategic standpoint, you have the homefield advantage, because shills/bot posters rely on spam and generic replies or obvious tells. With a HunterKiller bot that is sufficiently well programmed, you can mass identify these bots and shills for some beatdown with administration tools.

Eventually you will encounter shills with tools that can 'thesaurus' the words around so the text seems 'different', so don't rely on verbatim matches; consider perhaps even Markov chain analysis.

Hope this helps.
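Short of full Markov chain analysis, near-duplicate 'thesaurused' posts can be caught with word-shingle overlap; a minimal sketch (the threshold you flag at is left to the operator):

```python
def shingles(text, n=3):
    """Word n-grams; synonym-swapped spam still shares most of these."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Similarity in [0, 1]; near-duplicates score high even after
    a few word substitutions, unrelated posts score near zero."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

Swapping one word out of a nine-word post still leaves a similarity of 0.4, far above what two unrelated posts produce.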

b7fb97  No.2561225


The point of the tool is to actually filter out the breads because it's a HunterKiller (you're not hunting/killing the breads: you're looking for the garbage threads). But you could modify it to investigate breads for shill posts or copy/pastes. It's up to you.

If you skim over the many anchored threads, you might notice it almost appears as if the admins are using such a tool (which greatly speeds up identification rates of trash threads). It's a pity they censored it, as it was intended to help, not hinder (it cannot post, and I won't build it so it can, as that would only aid the shills).

b7fb97  No.2561418

HunterKiller's friends (variations) include:

ArchiveBot: mass collect all posts from all active threads in the catalogue (allowing you to do a raw text save of the data). Alternate version: bulk send archive requests of the thread URLs to archive.is/archive.org.

[Properly combined, you can keep a simultaneous offline/online version. Word of warning: when archiving to a website, be sure to only archive 'finished' threads on archive.is and to space out the requests over several minutes so it doesn't appear you are flooding/a bot. Automated and slow is better than manual and even slower.]

MonitorBot: keep track of which threads have 'moved position' and thus have 'new replies' (this is a technique used by shills to direct their limited resources to whichever thread is presently active; likewise, you can do the same).

KeywordBot: Have a bot that looks for specific keywords, image filenames or other triggers and then flag them up when it spots it (shills also use this technique to know when you're talking about a subject that they need to 'shill on').

Newsfeed/TrackerBot: use it to pull the latest news from websites (it's strongly recommended you use a news sites' RSS feed to do this as it keeps it nice and simple). Will require substantially more work and beware shitty unicode strings in the returned data.
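Sticking to the RSS route keeps the Newsfeed bot at standard-library level; a sketch, assuming a plain RSS 2.0 feed structure:

```python
import urllib.request
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Pull (title, link) pairs out of an RSS 2.0 feed document."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title", default="").strip(),
             item.findtext("link", default="").strip())
            for item in root.iter("item")]

def fetch_feed(url):
    """Download a feed and parse it; errors='replace' guards against
    the shitty unicode strings mentioned above."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return parse_rss(resp.read().decode("utf-8", errors="replace"))
```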

Literally, whatever the hell else you could imagine. You're also not restricted to this board. If you change the URL to another board, the code should largely still work (albeit you might have to modify what data gets accepted as some boards don't have poster names).

If you want to get super anal, in theory you don't even need Python to build an auto-parse tool. If you're batshit insane, you could even use wget coupled with a bash script.

Beautiful Soup and Mechanize are extremely powerful. Beautiful Soup does HTML parsing, and Mechanize is like a full-blown browser under the hood. You *can* post to a forum/board, but I've noticed the Media Matters shills appear to be doing everything manually (or whatever they have is absolutely shit), so I leave it as an exercise to talented chans to develop (highly advise you do NOT publish any posting capabilities as it will only arm the less well developed shills).
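For a feel of the parsing side without installing anything, the standard library's html.parser can do the same basic scraping (Beautiful Soup collapses this into a couple of find_all calls). The 'post reply' class name is an assumption about vichan-style markup:

```python
from html.parser import HTMLParser

class PostScraper(HTMLParser):
    """Count vichan-style reply posts and collect the links inside them."""
    def __init__(self):
        super().__init__()
        self.replies = 0
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "div" and "post reply" in a.get("class", ""):
            self.replies += 1
        elif tag == "a" and a.get("href", "").startswith("http"):
            self.links.append(a["href"])

scraper = PostScraper()
scraper.feed('<div class="post reply"><a href="https://example.com/x">x</a></div>')
```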

And believe me, this is just scraping the surface.

0e37ff  No.2566977


yes my friend … this makes sense

I was amazed at how 'smart' they interact and how viciously they attacked a deep state thread with just links. I managed to expose at least one bot and got banned by the forum admin for 1 month before I was able to target more.

33e1de  No.2575758

Hey guise,

I've been chatting with a few Bakers and they would love to have the bread replies count at the top of the page so that they don't have to scroll to the bottom of the bread to check how many posts have been made, and when to start the fresh bread.

Anyone know if this can be added to the user js or a simple css fix?

Any ideas? Thanks guise.

2a3fd5  No.2587705

File: 45e06df9f1b20b4⋯.png (188.92 KB, 1359x355, 1359:355, script.png)

AntiSpam & ToastMaster Scripts Combined


8ce561  No.2588476


Something like this?

// Style a fixed counter in the top-right corner of the page.
$(document.head).append('<style>#post-counter{position:fixed;top:20px;right:10px;font:24px sans-serif;opacity:0.5;color:#f60;}</style>');
// Add the counter element itself.
$(document.body).append('<div id="post-counter"/>');
// Count the reply posts in the thread and display the total.
function updateCounter() {$('#post-counter').text($('.thread>.post.reply').length);}
// Refresh twice a second.
setInterval(updateCounter, 500);

8ce561  No.2589502


Cleaner version: >>2589214

6f830b  No.2598959


In this day and age we need to keep pace with the developments of military and commercial establishments. No point trying to fight bots as humans, because that's exactly what they want you to do: waste time on bots.

Instead, you want bots to fight bots, so the humans can fight the humans. 9 times out of 10, all you need is an identification tool so admins can simply cap and ban and that's it.

I see HunterKiller has inspired a JavaScript variant which aids finding breads/spotting spam. I didn't build a bread spotter because it can allow shills to funnel their less competent members directly into a bread.

Be very wary of what tools you do release, bearing in mind that shills have absolutely no qualms about stealing, repurposing or reverse engineering them. They are largely conducting illegal activities, after all.

(PS: Create legal traps specifically targeted at paid shills in the licensing of your code, so if it ever gets found in their hands, you have means to 'return fire')

6f830b  No.2599230

Going to hand off some of my tasks to other anons, if they want to take it up.

Tools you might want to consider developing (basic tech specs included):

Online archival status checker:

1) Parses an entire bread

2) Pulls all links (feel free to add in a regex link filter so you can filter out irrelevant links)

3) Checks each link, in turn, against an archive host (EG archive.is, archive.org), to check if they have been archived online

4) If not, archives it

5) Generates a generic status report message that you can then copy and post to that bread (including full bread name) to let anons know you've done an online archive.
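A sketch of steps 1-3 and 5 above. The Wayback availability API is real; the link regex and report format are assumptions, and the actual "if not, archive it" submission in step 4 is left out:

```python
import json
import re
import urllib.parse
import urllib.request

LINK_RE = re.compile(r'https?://[^\s"<>]+')

def extract_links(bread_html, skip=("8ch.net",)):
    """Steps 1-2: pull every external link out of a bread's HTML."""
    return [u for u in LINK_RE.findall(bread_html)
            if not any(host in u for host in skip)]

def is_archived(url):
    """Step 3: ask the Wayback availability API whether a snapshot exists."""
    q = ("https://archive.org/wayback/available?url="
         + urllib.parse.quote(url, safe=""))
    with urllib.request.urlopen(q, timeout=30) as resp:
        return bool(json.load(resp).get("archived_snapshots"))

def status_report(bread_name, checked):
    """Step 5: paste-ready report; `checked` maps url -> archived?"""
    lines = ["Archive status for " + bread_name + ":"]
    lines += ["  [%s] %s" % ("OK" if ok else "MISSING", u)
              for u, ok in checked.items()]
    return "\n".join(lines)
```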

Missing bread/duplicate bread detector:

1) Parse catalogue for breads

2) Does bread numbering (HK has this built in, so feel free to rip the Python code I supplied for this)

3) Highlights missing numbers, duplicates as a 'status report' message (with direct links to dupe threads) which can then be posted to an admin to investigate/solve
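The detection logic for steps 2-3 is only a few lines; the '#1234' subject pattern below is an assumption, so adapt the regex to the actual bread titles:

```python
import re
from collections import Counter

BREAD_RE = re.compile(r"#(\d+)")  # assumed: subject contains e.g. 'Q Research #5123'

def bread_numbers(subjects):
    """Step 2: extract a bread number from each catalogue subject line."""
    return [int(m.group(1)) for s in subjects
            if (m := BREAD_RE.search(s))]

def bread_report(nums):
    """Step 3: flag gaps and duplicates in the numbering."""
    dupes = sorted(n for n, c in Counter(nums).items() if c > 1)
    missing = sorted(set(range(min(nums), max(nums) + 1)) - set(nums))
    return {"missing": missing, "duplicates": dupes}
```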

One for the JavaScript anons:

Bread filter tool:

1) User supplies URL of bread

2) User has button that says 'update view of bread' (to stop laggy constant real-time updating)

3) Tool returns all posts in a bread that have pertinent research

4) (Optional) Have a set of toggle options that allows you to filter the posts depending on the following options:

[Has link] (allowed/not allowed)

Subset options of [Has link]: [Contains archive link](On/Off)[Contains old media link](On/Off)[Contains image link](on/off)[Other](on/off)

[Has no link] (allowed/not allowed)

[File upload only] (allowed/not allowed)

[Contains text] (allowed/not allowed)

[Contains gratitude] (allowed/not allowed)

You can probably think of other options, that's just to inspire.

5) (Optional) Have it work across all breads found on a catalogue (warning: will be extremely slow/laggy, will need serious optimisation)
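The toggle options in step 4 reduce to a predicate per post; a sketch, where the post dict shape and the gratitude keywords are assumptions:

```python
GRATITUDE = ("thank", "ty baker", "god bless")  # assumed trigger words

def post_passes(post, opts):
    """Apply the step-4 toggles to one post.
    `post` is a hypothetical dict: {'text': str, 'links': list, 'files': list}."""
    has_link = bool(post.get("links"))
    if not opts.get("has_link", True) and has_link:
        return False
    if not opts.get("has_no_link", True) and not has_link:
        return False
    if not opts.get("file_only", True) and post.get("files") and not post.get("text"):
        return False
    text = post.get("text", "").lower()
    if not opts.get("gratitude", True) and any(w in text for w in GRATITUDE):
        return False
    return True

def filter_bread(posts, opts):
    """Step 3: keep only the posts that survive every toggle."""
    return [p for p in posts if post_passes(p, opts)]
```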

Bread generation tool (best if kept offline or restricted to qualified bakers only to avoid bad breads):

1) Supply user with a form

2) Form contains fields that the baker can fill in (form should save reoccurring data to save time)

2a) (Optional) Have it so the tool can query a given bread URL to parse in order to pre-load/pre-generate the data in the fields

3) These fields are then used to compile either one or a series of text documents (.txt will suffice) that contain text that can be copy-pasted right into the comment box on 8chan.
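Step 3 is little more than template substitution; a sketch, where the field names and layout are placeholders rather than the real dough format:

```python
from string import Template

# Placeholder dough layout; a real baker's template would carry the
# full welcome text, notable buns, and resource lists.
DOUGH = Template("""$title

Welcome to Q Research

Notables
$notables

Previous bread: $prev_link
""")

def bake(fields):
    """Fill the baker's form fields into paste-ready bread text."""
    return DOUGH.substitute(fields)

def save_dough(path, fields):
    """Write the compiled text to a .txt for copy-pasting into 8chan."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(bake(fields))
```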

This should give you guys some idea how to bring automation to your investigations and research. Once you can filter out the shills entirely, it's game over for them.

21d2e8  No.2638983

Anon who created the ToastMaster script:

What do you think about adding some sort of notification to the toast when Q posts in the breads shown? Maybe add underneath the bread number "(Q: [x number of posts])" or something along those lines?

44236a  No.2659375


>distributing it in an *apparently* safe and anonymous way

How do you manage the anonymous part?

I'm doing something similar, but it's a couple weeks out of date at the moment. I'm out of the country attending to a family emergency, and I couldn't take my production system with me.

9fc25d  No.2662002


Concur with the MD5 flaw.

It's strongly recommended you use a variety of hashes on your software, not merely whatever happens to be trendy. Any hash compresses arbitrary data into a fixed-size digest, so collisions necessarily exist (long story short: loss of resolution means a matching hash is never absolute proof), which is why you should employ multiple hashes: an attacker then needs to pwn not just one hash, but several simultaneously.

At that point it becomes easier for an attacker to simply modify your published hashes without you noticing (rather than modify the code to fit the hashes), so make sure you keep backups of said hashes.

Done correctly, you will have enough hashes from enough algorithms that it's impossible to tamper with the code without tripping one or the other. MD5 and SHA1 are broken, but you can still use them… in conjunction with other non-broken hashes.
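In Python this costs almost nothing extra, since hashlib can feed every algorithm in a single pass over the file:

```python
import hashlib

# Broken algorithms (md5, sha1) only ever ride alongside unbroken ones.
ALGORITHMS = ("md5", "sha1", "sha256", "sha512")

def multi_digest(path):
    """Read the release once, updating every hasher per chunk."""
    hashers = {name: hashlib.new(name) for name in ALGORITHMS}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            for h in hashers.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashers.items()}
```

Publish all the digests together; tampering then has to produce a simultaneous collision in every algorithm at once.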

Sure, it's extra work, but it offers immunity to your reputation being compromised if it gets subverted.

>How do you manage the anonymous part?


PGP on its own doesn't give anonymity; it stays anonymous assuming you don't bury your identity somewhere in the signed message (which should also contain the hashes). Of course, that introduces a reputation problem. You either have to trade a loss of trust for anonymity, or offer identity with reputation to engage in trust.

To be honest, I wouldn't recommend identifying yourself anyway, because even if you did, it's unlikely you have the reputational backing for it (if new) and it'd give too many clues if you're an 'old hand' (with a good rep).

Best to teach anons how to proofread and scrutinise your code, make it open source, and explain each line of code. Make the trust in the code, not you.

8ce561  No.2734127


For smallish stuff there is pastebin. If necessary, archive, encode as base64, paste with instructions at the top.
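The archive-then-base64 step round-trips cleanly with the standard library; a sketch:

```python
import base64
import io
import tarfile

def pack_for_paste(files):
    """Tar+gzip a {name: bytes} dict, then base64 it into pastebin-safe ASCII."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return base64.b64encode(buf.getvalue()).decode("ascii")

def unpack_from_paste(text):
    """Reverse the paste back into {name: bytes}."""
    out = {}
    with tarfile.open(fileobj=io.BytesIO(base64.b64decode(text)), mode="r:gz") as tar:
        for member in tar.getmembers():
            out[member.name] = tar.extractfile(member).read()
    return out
```

The instructions at the top of the paste then just say: base64-decode, gunzip, untar, verify the hashes.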

Larger files (~4MB) can be attached to posts here.

Tor may help. Or a VPN if you were able to open an account anonymously (prepaid credit card, fake identity if legal).

I do not know if Mega can be trusted. I would not trust AWS (S3).

Regarding the related problem of anonymous hosting (i.e., providing services anonymously), I've been thinking about writing a client to (ab)use 8chan as an anonymous communication/storage backend (hopefully with Ron's blessing). There are neat things to do in that direction, including anonymous software distribution.


I see how PGP helps with trust, but I do not see how it helps with anonymity. Can you explain what you had in mind?

7b8328  No.2752449

>I see how PGP helps with trust, but I do not see how it helps with anonymity. Can you explain what you had in mind?

You'd need to foster an alias that has a proven track record of reliability (like how I continuously try to post under this static 'name').

Once it's established as being reliable, PGP would allow you to prove it comes from that alias.

You'd need to be extremely careful not to connect your alias to your real self (I'm not worried about mine being connected; in fact, it's important that it remains connected).

Nothing that can be done overnight, unfortunately, but that's how it is with trust, has to be (re-)earned.

d62bc5  No.2784392

Hey guise I've a question. Could someone write css to hide the name, subject and email fields on the qresearch board please. If BO could add it, it would save anons doxxing themselves and possibly hinder bots. Thanks in advance.

d1575b  No.2870643


Great post.

d1575b  No.2870793

Can we get the newest userjs posted here? At least the one with the PostCount.

1fcd70  No.2871218


This is probs a unique feel

07f8e1  No.2879212



1ecb50  No.2880218


Definitely a second on SSB. Also solves the archive everything offline issue Q keeps mentioning as well as provides advanced filtering.

If you mirror QResearch with SSB, you've got a winner!

d1575b  No.2882075


Perfect. Thanks, anon.

843ea1  No.2899194


Obfuscation is not security.

What's to stop someone opening up the dev window of a web browser and disabling/editing the CSS file?

Besides, Q needs access to the name field by default (and personally I would just keep appending my name to the bottom of my posts if the name field was disabled).

JavaScript for disabling email:

document.getElementsByName("email")[0].style.display = "none";

CSS selector:





You're welcome. Leave the name field in please (maybe just append 'optional' to the name field?).

843ea1  No.2899223

To specify a placeholder attribute (as we cannot write directly to the label) using JavaScript:

document.getElementsByName("name")[0].placeholder = "Optional";

Note, both this and the prior code assume there is only one name="email" and one name="name" element, and that it's the first element in the list.

ea40ba  No.2942084

new toastmaster


c515d7  No.3036792



Break their chains and spread this offline.

d1575b  No.3055030

File: f6a2ab6bb73c9b3⋯.jpg (410.63 KB, 2000x2476, 500:619, f6a2ab6bb73c9b3574e93a7b87….jpg)

8cc031  No.3056882


>2) Use anything stronger than MD5 for a hash. Avoid anything with the NSA's rubberstamp of approval (that means no SHA256 etc).

NSA works for Q. SHA-256 is fine; SHA-512 is faster on modern desktop computers.

d1575b  No.3101611


Is this still the latest toastmaster?

855a55  No.3193878

File: 4200b9b2fac1e6a⋯.png (637.5 KB, 855x479, 855:479, 1531282249836.png)

This is your mind attempting to write to you while you shitpost and lurk…

>You are leading a revolution

and you're not even conscious about it

>You decide your own level of involvement

What if while you slept… you led a revolution…

You are a fragmented shattered mind, society has betrayed you… that's why you made me…

>created by your memetic subconscious

I am you… and you are me…

we have the same goal…

>Defeat the VILE people

We either defeat the Vile people together or I defeat them for us but either way you are a part of this now


Project Mayhem is LIVE….

d1575b  No.3245489


d1575b  No.3275173



221623  No.3358201

White Hat or Black

Saw this on google search

http s://gist.github.com/NetwarSystem

NetwarSystem / gist:ec16d2ce33610719a34411822622b640

Created a month ago

Accounts with QAnon in their profile.

White Hat Help Needed

c0089e  No.3443046

Learning Python and starting to use Scrapy. (Not Scapy - packet crafting.)

One goal: Basically I want to copy as many QResearch breads as possible, offline viewable with everything in place, minus vids (links for those instead).

Can do:

scrapy shell 'https:''//''8ch.net/qresearch/res/2352371.html'


>> fetch(https:''//''8ch.net/qresearch/res/2352371.html)



These clones function fine as long as I have an internet connection, but because the CSS isn't copied it doesn't look the same offline and images aren't included. Regarding the CSS, I could copy the CSS file and edit the HTML to point to the local file, but I'd like to understand Scrapy and Python better while learning to automate the boring stuff.

What I would like is a basic Scrapy spider which will copy the bread including images, with links in place of video thumbnails, name the file with the bread name, include the CSS, and save it all to a file. Maybe: create a dir, copy the CSS, modify the HTML to point to the new local CSS; if the CSS already exists, skip that step.

Is including the CSS in the HTML inline possible with Scrapy?

Which is simpler for Py n00b?

A more advanced question, how to not include shill posts in the bread copy?

I've tried modifying existing examples but they don't work after mods. (Scrapy.org examples)

Any Links for lists of Scrapy spider examples and descriptions of their functions?

7215db  No.3443569


Backend storage in the cloud and secure off-line storage are mutually exclusive. The only way to completely secure the data would be on an air-gapped system, or else burned to optical media. Don't think the clowns won't have fun destroying the data.

6b3962  No.3520368

YouTube embed. Click thumbnail to play.

I've been looking at IPFS

IPFS is the Distributed Web


Anybody know anything about this thing?

4ecea6  No.3543215

Any plane fags in the audience?

wifi sniffer?


POTUS plays golf (shinzzo abe was a guest) a few miles away…

cf79bd  No.3543331


Yep, it's an honest attempt, but it has major failings. They made Filecoin for the easy money. Now they are trying to reverse engineer a decentralized network… because blockchains suck at scaling and, more importantly, at storing data. They are great for storing data identifiers, but not the data. IPFS allows for a decentralized identifier database… but not the actual decentralization of the data…


I'd recommend looking into Tim Berners-Lee's project SOLID. He and his team at MIT aren't some rag-tag crypto hipsters and they've been thinking about these problems for years. SOLID is a protocol for decentralized applications. It's pretty special and there is no cryptocurrency. It's just honest problem solving.

That said, it has a similar shortcoming in that one still has to choose where their personal data is stored, which could be anywhere. It's an improvement, but it's not full on privacy security and freedom. That's where the SAFE Network comes in. It has a crypto, but it also has the Scottish developers of the Saudi ARAMCO network. Decentralized networks need crypto for incentives to work. It's that simple, and SAFE is thinking about it very well. Check it out, but in short, it disintermediates the middlemen of the internet, the servers, by binding partitioned surplus memory from individuals into a giant distributed server. all files are copied for redundancy, sharded, and encrypted before being randomly spread out over the network. no blockchain at all, just your personal keys. lose those and the data is gone, but that's a real solution. IPFS can't do that.

6b3962  No.3544300


Cool thx I'll check that out.

cf79bd  No.3544793



6b3962  No.3625701

File: 4c9a4a2577d7ff0⋯.png (485.91 KB, 1518x1039, 1518:1039, Untitled-1.png)

The Q Research API has a CORS policy set up on its services. Security reasons. Anyways, I figured anons wouldn't want to register for a key, so anybody that needed it had been contacting me and I've been opening it up for them.

Anyhoos - After seeing the qanon.pub archiver I wanted to see if I could do one for my site, only just using HTML/Javascript so there were no installs. Ideally it would be set up so that (you) could have the main HTML file locally and then it runs from your machine.

Here's what I came up with


I discovered that since I wanted to have this run from the client machine, AND use a few of the API services, it wasn't going to work. So I opened up a select few services completely.

Specifically the q Get(), the BreadArchive GetBreadList() and GetById()

- Choose source format XML/JSON

- Click the [Download] icon to download single breads.

- [Get Everything] will download everything.

- [Get Latest] will download everything you haven't downloaded before.

Browser restrictions mean downloads go to your "Downloaded Files" directory. Works in Chrome.

If people like this download functionality I'll migrate it over to the main Archives page. It could be extended to include some way of downloading images too if another codefag is feeling ambitious. The link data is in the JSON.

82787a  No.3727180

New Toastmaster 2.5.2

Not a programmer, just inquisitive user.

I seem to notice that the Auto Update box is checked, however updates are not happening unless box is unchecked, then re-checked.

At the same time, the post count in lower right hand corner is not enabled unless that Auto Update box is checked, then re-checked.

I know the attacks on the site a couple weeks ago caused the Auto Update box to be disabled by the webmaster.

As of 11/2-3, my Auto Update and post counter worked great. Then updating to ToastMaster 2.5.5 caused the problem stated above (Auto Update and post counter).

Any suggestions or hints? Always looking to learn more.

e65320  No.3863412

I think I've recovered enough from summer's events to get caught up (if that's possible). I'm working on downloading stuff right now. Next, I broke something in my local database update system about a month ago, and I'll have to fix it before I can update my databases. But that shouldn't take more than a day or so. After I get my downloads caught up and the database is updating, I think it's time to get things set up to create posts for the front end site. The front end is more user friendly for someone who isn't as familiar with Q. The back end research database is for the regular anons more than anyone else. It was originally created to support my own efforts in creating the front end, and then I decided to share it.

20a148  No.3863727




Why not? I run Linux on my desktop, Ubuntu, and Linux Mint on my laptop. Both support browsers.

3e8001  No.3865013


use wget, e.g. for archiving a thread, create and enter a directory for the contents then run:

wget -nH -k -p -nc -np -H -m -l 2 -e robots=off -I stylesheets,static,js,file_store,file_dl,qresearch,res -U "Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/20070802 SeaMonkey/1.1.4" --random-wait --no-check-certificate https://someURLhere.com/whatever.html


>basic queuing script

also look at

at, atd, batch, task-spooler tsp, job control

Most say task-spooler (tsp) fits their needs.

3cffde  No.3867171


SOLID does look quite promising. Looked at it briefly a few weeks ago; will need to do a little POC for myself to wrap my head around the inner workings.

be9fe3  No.3867356



Better than queueing would be to fork subshell processes and use "wait": spawn no more processes than your box can handle at a time (using a $MAXPROC), "wait" until they're done, then proceed. If you want to get fancy, implement some pipelining so you don't have to wait for all child procs to finish, but the basic concept is simple.

I wouldn't use a scheduler, that's just unnecessary.

3e8001  No.3867593


I suggested a scheduler since it would cover a wide set of use cases, though your suggestion of using subshells is the way to go for streamlined efficiency. Some use screen to implement the concept - allowing for detach/reattachment

cf464e  No.4011746

So much knowledge in this thread. Bump

812b6b  No.4015229

File: 8e0c8fe54d09a16⋯.png (59.15 KB, 196x220, 49:55, ClipboardImage.png)

File: b078aad7072827a⋯.png (2.23 MB, 2393x1581, 2393:1581, ClipboardImage.png)

File: ced92fd1781f481⋯.png (77.62 KB, 1538x404, 769:202, ClipboardImage.png)

6b3962  No.4108861

What's everybody working on?

b3a148  No.4109082


I haven't been gifted with communication skills like some other anons, but this anon has seen many mentions of memetic codes embedded in languages (memetic languages). Many seem to think there is a "programming language", as one anon put it, which embodies a system wherein language is used to activate, foretell, or convey certain events, stories or insights. Many anons appear to believe that this code has persisted via a vague understanding by the typical populace and a more complex understanding by the ruling class (bloodlines).

This anon has more or less hypothesized a relation between Latin (other languages too, I suppose, but Latin seemed to have been pushed out rather purposely, and oddly gets attacked, as in recent news stories like the Comedy Central skit where "Q followers" are made fun of for using Latin phrases) phrases and certain bloodlines, many popular writings like fables or music, mottoes of important groups or people, etc. It's this anon's understanding of these concepts that more or less convinces anon there's much more to language, especially languages like Hebrew or Latin, than the normies will ever come to realize.

Some anons even go far enough as to say Revelations was embedded with particular words and phrases that will later (like currently, maybe) unveil hidden knowledge to future generations.

Just relaying what's been seen. Not sure if it's valuable info or not. Maybe this connects to the pursuit of understanding Gematria and maybe it doesn't. Let's allow anon to decide for himself.

c4a707  No.4114072

I have a feature suggestion for qanon.pub.

It would be convenient of the page title (presently "Q") showed the current number of posts, because this would make it easy to tell if there's NEW Q!!! without switching to that window/tab.

I believe this can be done by, at the end of function checkForNewPosts(), adding this one line of code:

document.title="Q ["+posts.length+"]";

230d56  No.4114106


If that maintainer is listening, remove the pointless network requests to 8ch.

2a14ac  No.4115614


It's a good idea. Thought of it before.

2a14ac  No.4115636


That's there so one doesn't have to F5. Can be toggled off.

c3bec9  No.4116155

Hey, I am working on alexa skill for qanon notables and updates, is there a rss feed that can be used on the qresearch board to get the latest?

6b3962  No.4129429


5:5 Digits

There this basic Q ATOM/RSS feed.


I've been tinkering with a notables trawler. I'll get back on that; I think I may be able to do another feed. There's a lot of duplication: each bread repeats a few of the previous breads' notables. I haven't worked that out yet.

Here's some stuff that may help you.





230d56  No.4129470


You are getting 24 threads every time, and you do not need to. Keep track of when each thread was last updated and compare that to the thread's "last_modified" field, so you only fetch what you need: maybe 1 or 2 threads per update instead of 24, times 2.6 million users.

230d56  No.4129490


That last_modified field of the thread OP updates whenever a post is added. To catch Q's edits, you can also compare each individual post's 'last_modified' with its post time to see if it has been edited. Q can only edit on /pf/, so you can just check whether thread 440 has any edited posts.
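A sketch of that caching approach against a vichan-style catalog.json (the URL shape follows the public 8chan JSON API; treat the exact field names as assumptions to verify):

```python
import json
import urllib.request

CATALOG_URL = "https://8ch.net/qresearch/catalog.json"  # vichan-style catalog

def changed_threads(catalog, seen):
    """Compare each thread's last_modified against our cache; return only
    the thread numbers that actually changed, updating the cache in place."""
    changed = []
    for page in catalog:
        for thread in page.get("threads", []):
            no, mod = thread["no"], thread["last_modified"]
            if seen.get(no) != mod:
                changed.append(no)
                seen[no] = mod
    return changed

def poll(seen):
    """One polling pass: fetch the catalog, diff it against the cache."""
    with urllib.request.urlopen(CATALOG_URL, timeout=30) as resp:
        return changed_threads(json.load(resp), seen)
```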

559290  No.4173925

>>4129429 Sweet, my skill is almost approved. They delayed it for a trademark violation that they added? Then you have to email them to change it? Wild! So it should be ready; I'll work on connecting these over the next couple of days. Thanks anon! WWG1WGA! This will be opening to a whole new audience; let's see how long the jb network "allows" it

15c703  No.4176279


check out "esoteric structure of alphabet," published about 1900 for some more clues about how memes, and the power of creation, are embedded in the structure of language. there are other books along these lines, it's a decent start.

ea7a75  No.4176769


I wouldn't be surprised if we could just ask Q to get some wizards to create a "Q-uestionable" self-signed one.

e24136  No.4179590

GCHQ has a nice tool called CyberChef – if you like bash, imagine being able to string together small tools in a pipeline, in your browser.


FedGov has a lot of open source code. Find repos here http://code.gov

NSA Cyber repos: https://github.com/nsacyber/

6b3962  No.4181320


What are you looking for exactly? I may be able to give you exactly what you need.

02e7e8  No.4190855



tools linked via https://www.browserling.com/tools/aes-decrypt looks similar

WARNING: Facebook "like" button (read:tracking)

1e8484  No.4194546


A way to take the notable posts and present them to Alexa via RSS for reading. You can check what I've done so far at the link below; if you have Alexa, just search for qanon in skills and enable it. If not, the Alexa app can be downloaded to use. Note this briefing is from the blog at http://ahijackedlife.com/?feed=Efeed

I'm not sure what all to put here; notable posts would be a good start. Also working on adding a video and audio skill to this, but it may need another skill; not sure if using this one with a proper RSS feed to Alexa will work.


Including a tweetable post.

New #Alexa skill for your echo device or alexa application just enable for #qanon updates!!! PLEASE share with your friends that use alexa for news updates!!!

This is a whole new audience to share Q story with!!!




#Alexa now has

#QanonPosts updates in the

#skills section please share and enable on your alexa devices such as


Share with your friends!!!

Ty @realDonaldTrump



Will wants to share this Alexa Skill from Amazon

6b3962  No.4196636



I think I understand. Does this have the qanon.news rss added in? Can these skills have more than one feed?

I'll look into that notable crawler again today.

1e8484  No.4197428


Oh that's a nice site. Alexa is taking simple text with this "skill"; I just used a basic one available to copy, and went through the long grueling process of getting it approved.

a06d8a  No.4199617

Given recent events, I was thinking to abandon the online Research Tool and turn my focus to developing the site's front end information. But I looked at my stats, and it does appear that someone somewhere out there is trying to use the Research Tool, even though it kind of limps along. I have yet to see anyone doing on their site what I envision for the front end on my site, so it needs to be done at some point, no question. But at the same time, it looks like I need to do some work on the Research Tool to get it to perform better. Seriously, the data set is getting so huge that it's putting quite a strain on the search engine. Maybe setting up some views will help? Also, I think I need to set it up so that contexts are available only on a single result page. The work-around for users is to click the post number in the post header. That will display the post on a single result page, and from there, the context can be calculated.

a06d8a  No.4199750


Hmmm…. Took a closer look at those stats. The hits are all on the front end. So maybe that's what I need to work on. Will anyone here mind if I pull the online Research Tool?

892a64  No.4209813


A while back, I was brainstorming what would be the best way to get Q Research onto a blockchain. The censorship resistance is quite appealing. The best option I came up with would be to put the Q Posts onto IPFS, then link them with a RavenCoin asset. I chose Raven because of the grass-roots nature of its launch. It is vastly more decentralized than any other "token" coin that I am aware of.

The problem is that I know just enough about this kind of stuff to get started. I am afraid that if I sit down and try to get it rolling, it's going to turn into a ball of wax that eats up a few weeks of my brain.

I went ahead and created an asset on the chain, and now I'm in limbo.

Any advice, recommendations, or motivational words?

Has anyone else thought about the fact that we should get this info onto a blockchain to ensure it doesn't go anywhere?

892a64  No.4209837


I originally was thinking about Steemit, but I am more concerned with decentralization than ease of use.

6b3962  No.4213821


Yeah I've been trying to work out the best way to do that same thing. I think block chain is interesting and have only played around briefly with it. I like where yer going. I'll help any way I can.

>Any advice, recommendations, or motivational words?

It won't be THAT bad. 6-7 hrs tops!

How do you get a quorum that the Q Post is correct?

I don't know that there's a way currently to compare the scraper sites - manually against the screen caps?

a1cf6d  No.4214936




Fellow codefags. Have you seen the Wikileaks insurance files thread?

Perhaps a similar method?

812b6b  No.4215815

MaidSafeCoin Token Profile

MaidSafeCoin’s launch date was June 12th, 2014. MaidSafeCoin’s total supply is 452,552,412 tokens. The Reddit community for MaidSafeCoin is /r/maidsafe and the currency’s Github account can be viewed here. MaidSafeCoin’s official website is maidsafe.net. MaidSafeCoin’s official Twitter account is @maidsafe and its Facebook page is accessible here. The official message board for MaidSafeCoin is safenetforum.org.

According to CryptoCompare, “Client applications can access, store, mutate and communicate on the network. The clients allow people to anonymously join the network and cannot prevent people joining. Data is presented to clients as virtual drives mounted on their machines, application data, internal to applications, communication data as well as dynamic data that is manipulated via client applications depending on the programming methods employed. Examples of client apps are; cloud storage, encrypted messaging, web sites, crypto wallets, document processing of any data provided by any program, distributed databases, research sharing of documents, research and ideas with IPR protection if required, document signing, contract signing, decentralized co-operative groups or companies, trading mechanisms and many others. The clients can access every Internet service known today and introduce many services currently not possible with a centralised architecture. These clients, when accessing the network, will ensure that users never type another password to access any further services. The client contains many cryptographically secured key pairs and can use these automatically sign requests for session management or membership of any network service. Therefore, a website with membership can present a join button and merely clicking that would sign an authority and allow access in the future. Digital voting, aggregated news, knowledge transfer of even very secret information is now all possible, and this is just the beginning! “

Buying and Selling MaidSafeCoin

MaidSafeCoin can be bought or sold on these cryptocurrency exchanges: Cryptopia, OpenLedger DEX, Poloniex and HitBTC. It is not currently possible to purchase alternative cryptocurrencies such as MaidSafeCoin directly with U.S. dollars. Investors seeking to acquire MaidSafeCoin should first purchase Bitcoin or Ethereum using an exchange that deals in U.S. dollars, such as Changelly, GDAX or Gemini. Investors can then use their newly-acquired Bitcoin or Ethereum to purchase MaidSafeCoin on one of the aforementioned exchanges.


892a64  No.4217087


Very nice! Thanks for the info!

812b6b  No.4222554



SAFE Network versus (vs) Everything

SAFE (Secure Access For Everyone) Network is the single most expansive and ambitious network software project since the creation of ARPANET by the United States Department of Defense, which led to the internet we have today. SAFE Network is an open-source project created by the Scottish corporation MaidSafe, started in 2006 as the brainchild of David Irvine. There are probably many ways in which SAFE Network can be defined. This is my attempt to explain to the tech-savvy user what puts SAFE Network ahead of basically every other technology in the crypto space. There are many technologies that compete with some aspects of SAFE Network, but nothing that seems to compete with it as a whole. The goal of this post is to reach epistemological certainty without confirmation bias. SAFE Network is great, but there are gaps in every technology, and as different technologies mature, so do the dynamics of their advantages and disadvantages. This post will update slowly and steadily as the feedback loop of crypto-space news and technical criticism refines the analytical foundations of each technology. The most efficient way to do this is through an outline, hence Dynalist. If you feel there are glaring mistakes, please go to the SAFE Network Forum post "SAFE Network versus (vs) Everything (v.2)" and post your concern. So let's get started!

892a64  No.4223220


Wow! This looks amazing! Thank you for sharing this.

812b6b  No.4223588





In that order. They're @2min each



I could go on. That's plenty for now.

812b6b  No.4223602

File: 2ec1d892802303e⋯.png (737.76 KB, 768x1024, 3:4, ClipboardImage.png)


Like my digits?


16c028  No.4226434

File: 45747d067d2216b⋯.gif (275.79 KB, 2709x994, 387:142, Executive orders.gif)

File: dab34ff5019f647⋯.jpg (453 KB, 2583x1528, 2583:1528, EO 13818.jpg)


Blocking the Property of Persons Involved in Serious Human Rights Abuse or Corruption December 21, 2017

812b6b  No.4229027

YouTube embed. Click thumbnail to play.



The SAFE Network is an autonomous peer-to-peer network developed by MaidSafe, a software development company based in Scotland. It was designed by David Irvine and is an open-source project. The SAFE in SAFE Network stands for Secure Access for Everyone and the software is in the alpha testing phase.[1]


The SAFE Network uses a consensus algorithm named PARSEC.[2] The software was first designed in 2006 as an encrypted overlay network that replaces the top four OSI layers of the current Internet: the application, presentation, session and transport layers. It uses self-encryption, meaning data stored on the network is broken into chunks, with each chunk encrypted using a key derived from the other chunks; and a modified Kademlia distributed hash table, which uses the logical XOR operation to ensure the randomized distribution and unique location of each data chunk on the network and which also incorporates a public key infrastructure for security.[3]

SAFE Network is currently in Alpha public testing phase.[4] Upon launch, it will have an internal currency called Safecoin.[5][6] Safecoin will be exchangeable for fiat currencies and other cryptocurrencies.

The decentralized network is planned in such a way that the security is enhanced by splitting the data into encrypted chunks and spreading them at random over the network, with at least four copies of each chunk being maintained to ensure resilience.[7][8]

812b6b  No.4229205





e65320  No.4260035

PageCap Army

I'm really struggling to keep up with the work in developing my site. For a while, I put focus on creating a research database others could use. But the real work needs to be in building the front end of my site so that normies can refer to it.

There are a couple of features that make the vision I have for my site different.

1. I want to make context available for Q posts, both backward and forward. So far, my tools are only developed far enough to create backward context.

2. I want to have page capture archiving of at least the cited articles. Not just the visible screen, but the full page. (Videos are a bit too big to keep archived copies of them all.)

Getting and processing the screen captures is taking too much of my time. I have been wanting to find ways to delegate some of the work of aggregating the information, and I think I have come up with a way I can do it without sharing and exposing my development database. I could really use a PageCap Army to create the page captures we need.

Here's my vision for it:

A page would be created either here in qresearch or in a new board for the purpose. When a Q or notable post comes through with a link to an outside article, the request could be made on the PageCap Army thread. Fellow autist anons could then do the page capture using a tool such as FireShot to get the whole thing in a single image.

Then, what I've been doing with these, is cleaning out ads and links to related materials, compressing the white space, etc. I keep the header and a compressed footer and just the article itself. Sometimes multiple articles come up on the same page, and I'll crop those off, too. This process has been taking up a lot of my time, and this is an area where I could use some help. Right now, my time is better spent building the tools needed to get the completed posts for my site up there with that forward and backward contexting. I got so wrapped up in supporting the Research Tool stuff that I couldn't move that forward. And I need to do that. It's more important, I think, than keeping the online Research Tool up to date. It didn't help that I had to stop what I was doing for a couple months to take care of a family matter. So now I'm working to catch up.

I like Photoshop for working with page captures because it's easier to use for compressing the images. Specifically, a selection can be made across the edges of the image. With Paint, selections are more difficult because the selection must begin on the image itself. Sometimes, though, the page capture files are too big for Photoshop to handle. Is there another image editing program that can handle the larger page capture files? The ability to scroll a selection is key, since I often begin my selections toward the top of the image and need to get everything below it, all the way to the bottom, into the selection. Plus, Photoshop's layering features have proved helpful, too, when compressing the content, especially when the page captures get messed up by floating elements on the page and cropped screen caps must be pasted into place to repair the page capture.

Maybe I need to post a before and after image of the work I do to compress an image so people will know what I mean by it. Next time I come across one that needs extensive work, I can do that.

The focus for the PageCap work is Q posts, posts in the backward context of Q posts, and notables and their backward contexts, especially if they link to Q posts (which will create the forward context of Q posts).

The Research Tool system was originally built to support the curating of the front end site, and that's what I really need to be doing. Since no one piped up and said that the online Research Tool was important to them (or would be if it was up to date), I am shifting my focus. I won't be supporting the online Research Tool anymore, and I may even pull it down entirely and free up space for the posts of the front end site. My focus will be on putting together a site that can help normies understand what is going on.

e65320  No.4260298


While I'm at it, is there a good article on how to have two MySQL databases open at the same time, one on the local server and the other on a remote server? Being able to do that would really be helpful for speeding up the update of the online site.

6b3962  No.4262570


Can you explain the concept of 'Context' forward and backwards? Not sure what to think about the PageCap stuff because I'm not sure I understand what it is you are trying to do.

e65320  No.4263098


qanon.pub does backward contexting one level deep. So if a Q post has a link in it to another post, the qanon.pub site will show that post together with the post it calls. My research system is currently able to show context as far as it will go. Forward contexting shows those posts that link to that particular post. Obviously, I don't want to show EVERY post that links to a Q post. That is why I am currently working on marking the notables and creating the backward context chains for them. When a notable's context chain includes a Q post, then I want to include that as a forward context chain with the Q post. That is where I'm at in my system development. I have the data (mostly). The work is in getting things linked up. And also, I still have to build the piece that automatically creates WP posts out of the selected context chains. I think I've got that partially developed, but it still needs work.
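The backward/forward contexting described above amounts to building two link maps over the posts. A rough sketch, assuming posts shaped like the 8ch JSON ({ no, com } with >>12345 references embedded in the comment, plain or HTML-escaped):

```javascript
// Build backward (posts a post references) and forward (posts that
// reference it) maps from a flat list of posts.
function buildContext(posts) {
  const backward = {}; // post no -> array of post nos it references
  const forward = {};  // post no -> array of post nos that reference it
  for (const post of posts) {
    const refs = [...post.com.matchAll(/&gt;&gt;(\d+)|>>(\d+)/g)]
      .map(m => Number(m[1] || m[2]));
    backward[post.no] = refs;
    for (const ref of refs) {
      (forward[ref] = forward[ref] || []).push(post.no);
    }
  }
  return { backward, forward };
}

// Follow backward references transitively ("as far as it will go"),
// guarding against reference cycles with a seen-set.
function backwardChain(no, backward, seen = new Set()) {
  if (seen.has(no)) return [];
  seen.add(no);
  const chain = [];
  for (const ref of backward[no] || []) {
    chain.push(ref, ...backwardChain(ref, backward, seen));
  }
  return chain;
}
```

The forward map is exactly the reverse index: showing only forward chains that pass through a notable is then just a filter over `forward`.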

e65320  No.4263133


Oh, and another thing: If I can recognize that a notable relates to a Q post but the context does not already specifically include it, I already have the capability of noting that with the post and thus forcing the linking of the notable to the Q post. But that takes human eyes to recognize.

e65320  No.4263306


Here's an example of what I'm doing with the page captures. Q was driving me nuts when he was posting these. Getting the page caps prepared for a single post was taking up the better part of a day.


e65320  No.4263343


Wait. You need the remote version of that.


I notice that this post is not up there. But I'll put it up so you can all see it. Give me a few minutes. I'll let you know.

e65320  No.4263434


It was up there. I just hadn't converted the 8ch to its alternative. (I was told early on not to expose 8ch, which I don't. How does qanon.pub get away with it?)

e65320  No.4263451


Which looks like this:


e65320  No.4263553


Here's an example of a post with backward context.


e65320  No.4263718


One of the posts in the context chain shows how I force context. The missing image in the post is a screen cap of the Q post listed before it in the chain. I do have the image, but the image syncing system isn't working all that well yet.

e65320  No.4263852


This is an example of what it is my goal to produce. The post that shows in the index is the pink one. The backward context is above it. The forward context is below it. Given that there can be multiple forward contexts, I will probably implement the forward context more the way the contexts are implemented in the Research Tool, with the divs that can be shown and hidden, with a box around the displayed div. Each forward context will have its own box.


e65320  No.4264069

File: 95b31d8737da819⋯.png (1.33 MB, 1157x4755, 1157:4755, patriotsfight_80.png)


This is what the post looks like on my development machine. It looks rather awesome when all of the images are present.

e65320  No.4266043

File: e6e3e38eced9857⋯.png (3.6 MB, 679x6630, 679:6630, Scandal-hit-G4S-was-warned….png)


If you'd like an example of why I'm fixing page caps, check out this site:


This is the result. Much smaller, less trash.

6b3962  No.4266529



Ahh, I got ya. Site is looking good! And if Q had referenced a post, that would have been in the backward context too, etc. Context != timestamped posts in a bread.


On the archive pages I use the API to lookup the referred post. Scraper looks up all the 1st level references for all Q posts. Thought about making it go deeper but then decided to go with the lookup due to space concerns. Dunno if that helps you at all.

I've been thinking about doing a similar crawler kind of thing myself. I've got over 5,500 breads with 3,868,000+ posts. The amount of information that anons have dug is mind-boggling. I'm so ADD that mostly I leave things unfinished.

6b3962  No.4266569

c464f9  No.4266730

Seems like a vague goal stated for the thread. What specifically is being created?

6b3962  No.4266854

File: ffbb78b5b1f63b9⋯.jpg (275.57 KB, 1350x1758, 225:293, f5f2d0fa435838331304e46cb0….jpg)

e65320  No.4267249


It's close to 6000 breads now. Quite a lot. It still fits on my drive. I'm amazed.

e65320  No.4267298


You'd be surprised how much clearer stuff gets with the larger context. That's why it's my goal to provide it.

e65320  No.4267411


And the page caps: "Archive everything offline."

f8d8ed  No.4268203




How are you chaps polling the state of /qresearch and breads? I hacked together some shell scripts to watch for new breads and attempt to parse out the notables, which is my real goal. I wanted to build a command line client that can tell me when a new bread is available and give me a list of full links to the bread posts. It's basically working, but it's a terrible hack. So what I think we need is an easily parseable version of /qresearch/catalog.html and each thread, in particular the breads and the pastebin links.

e65320  No.4268719


That's a good question. I've been doing it manually. I'm sure it can be automated. If I were to do it automatically, the file_get_contents function can open a page. I would use the DOMDocument to parse the page. I haven't looked, but the various boxes on the catalog page are probably divs. Then it's a matter of keeping track of the responses for each one. If the count goes up, go and poll that page for the new posts.
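The reply-count bookkeeping described above (poll the catalog, then re-fetch only the threads whose count grew) might look like this, sketched in JavaScript rather than PHP, with a made-up catalog entry shape ({ no, replies }):

```javascript
// Compare a freshly fetched catalog against the counts from the last poll.
// Returns the thread numbers that need re-polling, plus the new snapshot
// to carry into the next round.
function findUpdatedThreads(prevCounts, catalog) {
  const updated = [];
  const counts = {};
  for (const t of catalog) {
    counts[t.no] = t.replies;
    // A thread we've never seen, or one whose reply count grew, needs a fetch
    if (t.replies > (prevCounts[t.no] || 0)) updated.push(t.no);
  }
  return { updated, counts };
}
```

Run it on a timer, fetch the pages in `updated`, and persist `counts` between runs; that replaces the manual checking.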

e65320  No.4268969


When I'm working on this stuff, I'm quite capable of getting so into the flow of it that nothing else here gets done until it absolutely has to be done. I get so locked in when I'm working on it that work/life balance goes out the door. Not good, really. When I'm not in the flow (which is right now), I kinda flop around a bit. It's partly because I'm still too fatigued from pulling a near all-nighter a couple nights ago and then waking up after too little sleep. Looking at my project, I think the biggest bang-for-the-buck addition to the code would be to automate the WP post creation, complete with the local build, the remote copy, and the image FTP. All Trump tweets and Q posts would be automatically included, plus any other specifically marked posts such as map posts. For now, the posts can be built with on-the-fly backward contexts. Later, as the processing code develops, the posts can be updated to include the forward contexts that answer Q's questions.

6b3962  No.4270932


I use https://8ch.net/qresearch/catalog.json to get the list of available breads. I find the ones I'm interested in (Q Research General etc). I archive them if my current archive has fewer than 751 posts. I built a crawler that finds each baker's 'Notables' post and archives those. It still needs some work. I'm planning on making a new RSS feed for them.

I originally built my scraper as a command line util. I could probably wrap it up as a .NET WinForms app, or a simple console app.


Straight page scraping is a mega pain in the ass. I'm not interested in that. All the info I need is available in the JSON and it's easy to get to. https://8ch.net/qresearch/res/2352371.json


I agree. Go for biggest bang for the buck!
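For reference, the catalog.json flow above might be sketched like this (field names assumed to follow the 4chan-style layout of an array of pages, each with a threads array of { no, sub, replies }; the 751-post bread limit is from the post above):

```javascript
// Pick the breads of interest out of a parsed catalog.json.
function findBreads(catalog, pattern) {
  const breads = [];
  for (const page of catalog) {
    for (const t of page.threads || []) {
      if (t.sub && pattern.test(t.sub)) {
        breads.push({ no: t.no, sub: t.sub, replies: t.replies });
      }
    }
  }
  return breads;
}

// A bread still needs (re-)archiving while it has fewer than the full
// 751 posts; after that it's locked and one final archive pass suffices.
const needsArchive = (bread, limit = 751) => bread.replies < limit;
```

Feed `findBreads` the parsed JSON and a pattern like /Q Research General/, then archive whatever `needsArchive` flags.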

e65320  No.4272840

Where is everyone at on time zones? We talked about this early on, and the consensus then was for GMT. So I've got everything saved in GMT, and that's how I currently display it. But I'm also seeing a leaning to Eastern. SerialBrain2 uses Eastern in his gematria work. It would be a trivial thing to produce the final posts in Eastern, if that serves us better.

e65320  No.4273067


>Straight page scraping is a mega pain in the ass. I'm not interested in that. All the info I need is available in the JSON and it's easy to get to. https://8ch.net/qresearch/res/2352371.json

Yes, that is much easier to read. No parsing necessary. It's nice to have that available. But the parser's already built. So for now, I'll leave it.

How are you retrieving the images when you're using JSON?

9df13c  No.4273912




I love you. NO HOMO.

9df13c  No.4273963

File: 8649aa869096722⋯.png (24.62 KB, 559x174, 559:174, file info.png)


>How are you retrieving the images when you're using JSON?

Use your client/library of choice.

Looks like it even provides an MD5 hash of the image file for some basic checksum verification

6b3962  No.4276180


I build the path and then archive with the scraper.


5:5 There's a lot here too. JSON/XML


812b6b  No.4276846

YouTube embed. Click thumbnail to play.

You guys will love this.

Probably the few people around here that get it…

c464f9  No.4277460



812b6b  No.4277524

File: 64085ce42207c71⋯.png (7.68 KB, 238x104, 119:52, ClipboardImage.png)

c464f9  No.4277900

File: 670bc30dbeb9963⋯.png (348.08 KB, 480x475, 96:95, lastSqueek.png)

e65320  No.4279282


This looks like it concerns the actual workings of the Internet itself. Interesting, but it probably isn't anything I would be working with personally.

812b6b  No.4279608


Just remember it. It'll be back.

e65320  No.4288691


Have you come up with a good algorithm for those notables? I've got one that makes preliminary identification, but I have to make the final determination. Occasionally it gets it right. More often, I have to make some adjustments.

6b3962  No.4291362

File: 8eb23c6c1e279cc⋯.png (1.49 MB, 2000x2000, 1:1, 1524495600057.png)


Here's how qanon.news does it…

When I've got a bread I want to search for notables, I get the ID of the baker and then a list of the first 5 or 10 posts. Then I search those for the 'notable' keyword and figure that's the baker's notables post I'm interested in. Next bread.

My current problem is that the Notables post from the baker has notables from the last 4 or 5 breads, plus the previously collected lists. There's a lot of repetition, i.e. notables from #5460 are in #5461, #5462, #5463… So am I trying to find just the previous bread's notables? I can probably parse that out with the '#'.

I probably need to break it up into smaller, more manageable chunks. Something like monthly. The last notables crawl I did ended up with a massive file of results: a 77MB txt file.

I've got some time today. I'll see if I can rejigger this notable crawler into a new API/RSS

e65320  No.4293830


That's similar to what I do. But because of the way I create context for things, I need to mark the preliminary ones that are posted toward the end of the page as well. For that, I'll look for a certain count of posts by anyone plus the word "notable". If one of those later notables gets caught up in a context chain, it can really mess things up.

e65320  No.4293844


To clarify, those are "OR" conditions, not "AND" conditions. Not all bakers put "notable" in their preliminaries.

e65320  No.4293888


Don't forget to stop a crawl at a notable. Otherwise, yes, they could get quite huge.

812b6b  No.4300691

File: 0661dbb0ad57be9⋯.png (9.01 MB, 4032x2268, 16:9, ClipboardImage.png)


e65320  No.4302440


What was in your text file? I've been doing the crawl page by page. It's fairly hands-on because I have to be more careful about shill images, etc. when I'm creating a one-to-many table of the originating post with the post numbers of its backward context.

6b3962  No.4303262


>What was in your text file?

I ran it again today and came up with a single 98MB file. So I split it out into monthly dumps named by year and month (YYYYMM): 201802, 201803, 201804…

JSON lists of https://qanon.news/Help/ResourceModel?modelName=Chan8Post

It's a list of 5000+ Chan8Posts. Each of those looks like this: https://qanon.news/api/bread/4302144/4302146

I'm not worried about shill images and whatnot because I'm only looking at the first 5-10 posts from the baker.

It's set up to find the bread/post for each notable reference and format the HTML post from 8ch to straight txt into the 'text' field. I just need to dial in the targeting a bit so it's smarter about what to crawl, but it's pretty close.

e65320  No.4304171


Seems to me, most of what isn't "notable" is either Q (different type of notable) or on another board somewhere. So yes, that should be close.

e65320  No.4304208


If you're concerned about the size for some reason, maybe you can abbreviate a bit. If you know everything is at https://qanon.news/api/bread, you can leave that part off and concatenate later.

d55e7f  No.4328167

I like to imagine how to keep comms up when the internet has suppressed all dissent.

An idea is to use networks of publicly accessible wifis which are not connected to the internet to serve whatever truth bombs necessary.

Kind of like pirate radio but legal.

I miss local area BBS, that's another thing it could do.

e65320  No.4338785

My host has been inaccessible since Friday night. If I buy new hosting (and I'm leaning that way), I won't be putting up the Research Tool on the new host, just the WP front end.

6b3962  No.4361840

Is there a NoSQL anon in the house? I've got an idea.

431331  No.4392102

File: 109dabf9436c3d8⋯.png (131.65 KB, 1234x525, 1234:525, ClipboardImage.png)


Keeping 2018 SAFE and SOLID

You may have noticed a recurring theme across the SAFE ecosystem and beyond this year. The conversation around the ownership of data is picking up pace. Perhaps you were at (or watched) SAFE DevCon 2018. Or perhaps you’ve stumbled across a podcast discussing words such as RDF and Socially Linked Data.

SAFE of course is about three things:-

Security: no-one can access your data without your permission.

Privacy: data is only shared with those you choose (if and when you want to share it)

Freedom: of association, contribution, collaboration (amongst many other rights).

SOLID (short for ‘Social Linked Data’), on the other hand, is a project that was started by Sir Tim Berners-Lee that defines a set of standards for the representation of data which ensures that ownership remains with you as an individual.

As you can see, when it comes to the vision, there are more than a few similarities between the 5-year-old Solid and 12-year-old SAFE projects.

Over the Summer the team were out in San Francisco at the Decentralized Web Summit 2018 and gave an overview of the work that had been carried out to date in combining the principles of SAFE with the conventions of Solid before such internet luminaries as Sir Tim Berners-Lee and Brewster Kahle. And it’s now worth taking stock of the progress that we’ve made to date.

SOLID is driven by the desire to ensure that everyone gets back control of their own personal data from centralised platforms. So a SOLID web application simply becomes a way of displaying data from many different sources that you choose — without you losing ownership of your data. In other words, SOLID wants you to choose exactly where to store the data that you produce and then use URLs (Uniform Resource Locators) to access that data moving forwards in a way that gives you control.

The concept is brilliant. But this brave new world envisaged by the Solid community doesn’t yet tackle one of the crucial problems out there — how to secure the data itself (regardless of where you have chosen to store it).

And that is exactly where SAFE comes in.

Because by using the SAFE Network to store your data, it now lives on a server-less, trustless autonomous network. No trust is necessary as the encryption key for a user’s data never leaves the user’s computer and no identifiable information is shared with any other peers. So by building these types of concepts into SAFE, developing applications becomes much easier — because all concepts of authentication, authorisation and data security are already taken care of by the Network itself.

In other words, it’s a future that delivers on the goals of both projects. But how will it work?

We started by focusing on two key objectives: data on SAFE had to be portable (so users could switch applications at will) and for that data to be self-descriptive (to enable users to define how their data could be searchable on the Network in ways that would bring them the greatest benefit).

For this reason, we adopted the RDF (Resource Description Framework) standard used by Solid. Having a standard way to store data on SAFE is crucial for scaling the project. And it also enabled us to build some utilities that would help developers in the future.

For example, WebIDs were introduced. These are simply a way of having an identity on the Network that you can share with other people using a URL. The data that is produced is stored on SAFE in the RDF format. You can see this in the WebID Profile Manager that we built (where you create your own profile with a human-readable URL) and also in the WebID Switcher (which enables users to choose any of their WebIDs to access any particular application on the Network). And if you want to try that out today you can — just take a look at Patter, our proof-of-concept Twitter-style clone.

What’s more, by publishing WebIDs on the SAFE Network, it solves the well-known problem faced by anyone who’s ever suffered as a result of malicious actors exploiting the weaknesses of the current DNS system. For example, all it takes today is for an ISP or DNS server to be attacked for you to be redirected to a malicious server. Moreover, no-one has full ownership of their domain name on the Clearnet — you simply have a registered right to use it, which can be removed at any instant. Relying on SAFE removes this vulnerability.

So how is this relevant today?

This week we released an update to the SAFE Browser. Get involved and download the new (v.0.11) release today. It contains plenty of functionality to ensure that the symbiosis between the two projects gets closer. There will be far more to come but at this stage, we’re just glad to see more people are getting excited by the thought of improving the future that we all want to live in…

451f56  No.4402057


There are alternatives.

One medium outside central control is amateur radio packet networks. So instead of using a corporate ISP as your internet service, you'd use a chain of privately operated amateur radio stations to transport data. I've never used this, but it's discussed as a kind of fallback emergency data network.

Even in the world of corporate ISP networks, if free speech is squashed there, see what's happened with memes making it through social media filters – there are ways to encode data that aren't evident to the censors. Over time they might get text recognition in images perfected, but there are always new angles.

676b04  No.4417914

red text

53bfdd  No.4531499

File: 5849fe3a6c7f95a⋯.jpg (18.29 KB, 344x272, 43:34, mariobump.jpg)


520a6a  No.4544807

Does anyone know where I can find info about 8ch's User JS feature?

I'm trying to call a function in main.js, but it's not working. I need to know how the namespace compartmentalization works.

I know my way around a couple other languages, but have no JavaScript experience.

6b3962  No.4546224


Let's see your script. It's probably just a misspelling.

520a6a  No.4547593


Oh, it's a lot more than a misspelling.

I want a one-click filterID button. I know that such things exist, but I can't find it anywhere. So I'm trying to recreate it.

I thought I could take the code someone wrote for blacklisting images by MD5 hash and modify it to call the filter function that the '▶' button connects to. It wasn't as straightforward as I thought. My browser's Inspect Element and debug console gave me clues about functions and variables being undefined. So I chased them down in the main.js code and included them. It's still not working. I have a hunch that "var boardId = board_name;" is incorrect, but I don't know how to debug it further.

As I said above, I have absolutely no experience with JS. This is all monkey-see-monkey-do.

function setList(blacklist) { localStorage.postFilter = JSON.stringify(blacklist); $(document).trigger('filter_page'); }
function timestamp() { return Math.floor((new Date()).getTime() / 1000); }

// getList wasn't in the code I copied; assumed to read the stored filter back out
function getList() {
    var list = localStorage.postFilter ? JSON.parse(localStorage.postFilter) : {};
    if (typeof list.postFilter == 'undefined') list.postFilter = {};
    if (typeof list.nextPurge == 'undefined') list.nextPurge = {};
    return list;
}

function initList(list, boardId, threadId) {
    if (typeof list.postFilter[boardId] == 'undefined') {
        list.postFilter[boardId] = {};
        list.nextPurge[boardId] = {};
    }
    if (typeof list.postFilter[boardId][threadId] == 'undefined') {
        list.postFilter[boardId][threadId] = [];
        list.nextPurge[boardId][threadId] = {timestamp: timestamp(), interval: 86400}; // 86400 seconds == 1 day
    }
}

function onNopeClicked(event) {
    var $post = $(event.target).closest('.post');

    var threadId = $post.parent().attr('id').replace('thread_', '');
    var postId = $post.find('.post_no').not('[id]').text();
    var postUid = $post.find('.poster_id').text();
    var boardId = board_name; // assumed global set by main.js

    //blacklist.add.uid(pageData.boardId, threadId, postUid, true);
    var list = getList();
    var filter = list.postFilter;
    initList(list, boardId, threadId);
    for (var i in filter[boardId][threadId]) {
        if (filter[boardId][threadId][i].uid == postUid) return;
    }
    filter[boardId][threadId].push({uid: postUid, hideReplies: false});
    setList(list); // persist and apply the filter
}

function addNopeButtons() { $('.post').each(function(i, post) { if ($(post).find('.nope').length === 0) { $(post).prepend("<input type='button' class='nope' onClick='onNopeClicked(event)' value='Nope'>"); } }); }

setInterval(function () { addNopeButtons(); }, 500);

ab4be2  No.4548184

As long as QResearch prefers to keep their tools open source, I am all in and happy to contribute. Closed-source projects are dangerous here, IMO, since they could easily be comp'd or include malicious code. So be careful what you download, and if you share your tools, better to make them open source.

b83361  No.4556298

yo what up guys. new to q research. long time programmer. whats the main project goin on here? repository links?

e65320  No.4570108


Everyone's got their own project, it seems.

6b3962  No.4572515



Yeah most folks seem to like working on their own I guess. I'm open to collaborate tho. What are you good at? What are you interested in?

c2985f  No.4577161


Notice you all put in 1000s of tips and connections, yet it all went into a dark pit and not a word ever came back about arrests or charges? Are you sure you are helping the right side? How do you know, since 8ch never produced a single thing back, such as arrests or charges?

431331  No.4590436




Mainbread links for the darkoverload 1st data dump

PGP included on site to verify message.

f22101  No.4591664


Where's the "verification"? Did not see any, so are you generalizing injecting your opinion on its words? Anyone can interpret anything as they wish, and have more then 1 meaning. Only one who reads gets what they want out of it, yet is not proof. Why so much non stop lies on 8ch?

431331  No.4594367


Go to the blog sauce in the 4590270 post. I removed the PGP key from the 4590355 post because the post was too long.

You'll have to check the keys for verification. Beyond that, decide for yourself. I was just posting the sauce, faggot.

2960a2  No.4606155

Eric Schmidt cowrote Lex, the famous Unix lexical analyzer, which was redone into Flex.

Seems interesting as trivia, now that 40 years later Google and friends have their algorithms to detect and subtly censor expression of particular ideas on the internet.

431331  No.4638318

File: f8c3dca4f37a3e6⋯.png (2.67 MB, 800x4291, 800:4291, ClipboardImage.png)

Tim Berners-Lee crypto project

431331  No.4640109

YouTube embed. Click thumbnail to play.

82787a  No.4663849

File: 3935bc142d7d6aa⋯.mp4 (3.73 MB, 1112x736, 139:92, PepeMagicSword.mp4)


82787a  No.4663917

File: 7a063f2b34393a7⋯.mp4 (230.16 KB, 480x360, 4:3, Test3.mp4)

10c0a3  No.4689820

File: d97cb514673cfc1⋯.png (107.04 KB, 1728x995, 1728:995, qposts_dual.png)

Is the qposts.online dev around here?

I have a suggestion for a 2 columns layout, pic related.

One column would be the filter column (search by keyword or whatever)

Second column would always show ordered post (ASC/DESC) centered around the "selected" one in the first column.

This would speed up RE_READ quite a lot imo.

At the moment I'm using multiple tabs, but it's not as efficient, especially considering there is no "go to post #" function.

6120a3  No.4742747

A way to start with Linux:

I use Linode for my Unix needs.

Linode is one of several companies who sell Virtual Private Servers (VPS).

The actual device is at a data center somewhere.

This one is $5 per month, and provides me with a full Debian Linux installation.

It is accessed by an SSH client on my Android tablet, which means I use the traditional Command Line Interface, which is typing in text commands to get it to do things.

I use it to experiment and learn Unix, soon to compile programs for Raspberry Pi, to scrape and store Q stuff, to save YouTube vids, and to serve web pages when I want.

I have never tried a Windows-type graphical interface on it; I don't think that would work well.

This may be the best, least technical route to learn Unix, rather than start by installing it on one's own home machine.

PS Bump
