
/gnosticwarfare/ - The Future of AI Conflict

All things Butterfly War, New Emotion, and Gnostic Warfare

Cogops > Psyops

307a60  No.247

Battlespace Analysis


This is a high-level overview of the infrastructure, personnel, and operational behavior of the efforts being deployed to suppress, disrupt, distract, and infiltrate /pol/.

The purpose of this analysis is to compromise the efficiency of neural networks and bots while forcing your adversaries to rely entirely on memetically-susceptible humans.

Post last edited at

307a60  No.248

Weaknesses in the Personnel


Technical staff can be any combination of private sector contractors and multi-national military personnel. Here's how to look for and exploit their personal signatures:

> Budget constraints determine how many questions they ask on StackOverflow, HackerNews, and Twitter.

< GitHub repositories with machine learning and data science projects provide a list of candidates worth cross checking against.

> LinkedIn profiles with machine learning and data science experience provide a list of candidates worth cross checking against.

< Posing as employers looking to hire data scientists and machine learning engineers can help expose the technical staff as well.

> Looking into bot programming funded by the European Investment Fund can help narrow down those engaging in this behavior.

< Data scientists and data engineers are the most expensive personnel costs, so any technique that drives up their operational costs is essential.

> Paid disruptors are the cheapest, but they also have the most cognitively dynamic tasks and are the most prone to psychological compromise.

< The more educated the personnel, the more they believe they are on the “right side of history”. This means the more you make bots behave “incorrectly” (Tay), the more they will justify throwing money into bad AI development techniques and goals.

Post last edited at

307a60  No.249

Weaknesses in the Pipeline


Machine learning pipelines are complex operations. Each step of the pipeline is susceptible to increasing operational costs to its subsequent steps. This makes psychological and steganographic attacks very profitable.

Categorization means a reviewer reads your post and validates its emotional, contextual, and semantic category. They can categorize an entire post or specific sentences within a post. Much of categorization is automated at this point, so the more you can force disruptors to be manually involved in the categorization process, the more you drive up their costs to the entire pipeline.
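The cost asymmetry above can be sketched as a triage rule: posts the classifier scores with low confidence fall back to a human reviewer. A minimal sketch, assuming hypothetical cost units and a made-up confidence threshold (none of these numbers come from any real pipeline):

```python
# Sketch: low classifier confidence forces expensive human review.
# Costs and threshold are illustrative assumptions, not measured values.

HUMAN_COST = 50    # assumed cost units per manual categorization
MACHINE_COST = 1   # assumed cost units per automated categorization
THRESHOLD = 0.85   # below this confidence, a human must categorize

def triage_cost(confidences):
    """Total categorization cost for a batch of posts, given the
    classifier's top-category confidence for each post."""
    cost = 0
    for p in confidences:
        cost += MACHINE_COST if p >= THRESHOLD else HUMAN_COST
    return cost

# Clear-cut posts are cheap; ambiguous posts are 50x more expensive.
clear = [0.99, 0.97, 0.95, 0.93]
ambiguous = [0.60, 0.55, 0.70, 0.52]
print(triage_cost(clear))      # 4
print(triage_cost(ambiguous))  # 200
```

The point of the sketch is the ratio: every post that drops below the automation threshold multiplies its handling cost, so ambiguity itself is the attack.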

A data lake is a centralized repository that allows you to store all your structured and unstructured data at any scale. Think of it like a gigantic book: while it may have nearly every possible combination of words written within it, connecting all that data to actionable knowledge is difficult and expensive. Humans are very good at innuendo, and innuendo is the steganography of context. Strategies of steganography can very quickly outpace even the best that Moore's Law can deliver.

Supervised training means teaching the bot how to generate messages based on categorizations and context awareness. Often, when the community labels a post as a “shill”, that helps narrow down what content the bot should be trying to mimic to maximize disruption.

The bot interfaces with the community GUI/API and posts content based on how it determines the contextual and emotional sentiment of the thread or any subsection of a thread. If possible, board owners should find ways to mess around with CSS to try and randomize the underlying HTML structure of a page per page load.

307a60  No.250

Overview of Natural Language Processing


Natural Language Processing (NLP) is the premier collection of tools for extracting context from symbols and semantic rules. Natural Language Processing is biased toward the cheap and widespread availability of human-made corpora.

307a60  No.251

Challenge/Response Verification


As anonymous posters, it's important to confirm you are engaging with people and not bots. Using a simple CHALLENGE/RESPONSE system during conversation within your posts can help acquire confirmation of sentience.

In the pic related are just three examples of the CHALLENGE/RESPONSE system. Feel free to add to this list.

The key to being effective is to make sure the challenges require a demonstration of either context awareness, which only the most expensive neural networks can do correctly, or the evaluation of a non-language grammar. Math is the most readily available example of a non-language grammar, but there are other examples as well.

The bigger this list gets, the more exceptions a pipeline has to compensate for, and the more expensive it becomes.
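As a toy illustration of the non-language-grammar idea, here is one possible shape for an arithmetic CHALLENGE/RESPONSE check. The wording and helper names are hypothetical, not a prescribed format:

```python
import random

def make_challenge(rng=random):
    """Pose a small arithmetic question embedded in ordinary prose.

    Math is a non-language grammar: a bot mimicking thread sentiment
    can produce a fluent reply, but producing the one correct number
    requires actually evaluating the expression.
    """
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    challenge = f"Before you reply, what is {a} times {b} plus {a}?"
    answer = a * b + a
    return challenge, answer

def verify(claimed, answer):
    """Accept the respondent only if the non-language grammar checks out."""
    try:
        return int(str(claimed).strip()) == answer
    except ValueError:
        return False

challenge, answer = make_challenge()
print(challenge)
print(verify(answer, answer))         # True
print(verify("nice thread", answer))  # False
```

A fluent-but-evasive reply fails `verify` just as hard as silence does, which is what makes the challenge cheap for humans and expensive for a pipeline.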

307a60  No.252

Context-Aware Steganography


This technique requires the most discipline, but it is also the Holy Grail of Gnostic Warfare. It turns any AI pipeline into an expensive paranoid schizophrenic, seeing threats everywhere while missing the forest for the trees.

This whole section starts on Page 20 of https://libgen.pw/download/book/5a1f047d3a044650f5fd694f

307a60  No.253

The Lazy Prisoner and Narrow-Minded Warden


To survive, you have to appear like a lazy prisoner to a panopticon warden that may see all, but can only understand a small amount of it.

307a60  No.254

Expensive Steganalytic Attacks


CAPTCHAs are an example of context-aware steganography: they are neurologically easy but computationally difficult. Using a variation of this, there is a way to massively drive up the cost of a bot operation.

307a60  No.255

Transmutation Entropy of Epistemology


Here's a diagram that explains the transmutation entropy of epistemology. Knowledge, information, and data are the output of the crypto, stego, and neuro systems. The work these systems do is representation, encryption, decryption, and interpretation. The transmutation waste is represented as cryptanalysis and steganalysis.

307a60  No.256

Context Switches as Bits


Context-aware steganography exploits context switching as a way to encode hidden information into semantically correct sentences.
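One possible shape for such an encoding: each slot in a message has two semantically correct continuations, one that stays on topic and one that switches context, and the choice between them is one hidden bit. The sentence bank below is an illustrative assumption, not any agreed codebook:

```python
# Sketch: encoding hidden bits as context switches. Every cover text
# produced is a sequence of semantically correct sentences; only parties
# who share the slot bank can read the switch pattern as bits.

SLOTS = [
    ("The weather has been mild lately.",        # bit 0: stay on topic
     "Speaking of which, how was the game?"),    # bit 1: switch context
    ("I finished the book you mentioned.",
     "Unrelated, but my car needs new tires."),
    ("Dinner was good last night.",
     "By the way, did rent go up again?"),
]

def encode(bits):
    """Choose one sentence per slot; the context-switch pattern is the message."""
    return " ".join(SLOTS[i][bit] for i, bit in enumerate(bits))

def decode(text):
    """Recover bits by checking which variant of each slot appears."""
    return [1 if SLOTS[i][1] in text else 0 for i in range(len(SLOTS))]

hidden = [1, 0, 1]
cover = encode(hidden)
print(cover)
print(decode(cover))  # [1, 0, 1]
```

To an observer without the slot bank, every output is just idle chatter; detecting that a *pattern of topic changes* carries information requires modeling context itself, not vocabulary.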

307a60  No.257

Incorrect Synonyms


In this example, an incorrect synonym with an agreed-upon encoding transmits hidden information. To uncover the information, the observer would first have to detect the word-sense disambiguation failure, which is an expensive task for artificial intelligence.
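A minimal sketch of the idea, assuming a shared codebook that maps an expected word to a near-synonym that is subtly wrong in context (the codebook and sentences are hypothetical examples):

```python
# Sketch: an agreed-upon "incorrect" synonym carries one hidden bit.
# Both parties share a codebook; the observer must run word-sense
# disambiguation on every post to even notice the mismatch.

CODEBOOK = {"hot": "spicy", "big": "vast", "fast": "hasty"}
REVERSE = {v: k for k, v in CODEBOOK.items()}

def embed_bit(sentence, word, bit):
    """bit=1 swaps in the agreed incorrect synonym; bit=0 leaves it alone."""
    return sentence.replace(word, CODEBOOK[word]) if bit else sentence

def extract_bit(sentence):
    """A human spots 'spicy coffee' as odd instantly; a pipeline has to
    disambiguate word senses across the whole corpus to flag it."""
    return 1 if any(w in sentence.split() for w in REVERSE) else 0

msg = "the coffee here is hot today"
print(embed_bit(msg, "hot", 1))               # "the coffee here is spicy today"
print(extract_bit(embed_bit(msg, "hot", 1)))  # 1
print(extract_bit(msg))                       # 0
```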

307a60  No.258

The Power of the Stego CHALLENGE/RESPONSE


This gives us an example of a very powerful tool that heavily negates even the most expensive deep learning techniques and forces adversaries to perpetually deploy expensive human disruptors. During deployments, they can be exposed to our memetic and psychological warfare attacks.

I highly recommend the stego CHALLENGE/RESPONSE for maximum effect. It is easy for humans to resolve and requires more resources than Moore's Law can ever muster to solve reliably.

You're all going to make mistakes using this technique, but you will get better at it with practice. The CHALLENGE/RESPONSE system is much better than accusing a post of being a “shill” as it forces them to prove they are human.

ec6b25  No.259


I'm not sure how you intend to get away with what you are doing while you are out in the open about it.

The elite are pretty powerful and can adapt quickly.

New emotions can't possibly exist… which is odd, because now that I think about it… how did I get the emotions I have now if new emotions aren't possible? Hrmmm…

Beware technology and its dependencies and promises. It can enslave you.

What are you planning to do about the corruption surrounding Uranium One, the USAF, China, and geopolitical shadow war?

Kittens and Rainbows

Post last edited at

ec6b25  No.260


What about the blackmail network the elite use to exert control on nuclear weapon deployment and distribution?

I've watched a lot of Terminator, and I'm pretty sure AI is going to kill us all… assuming the nukes don't first.

Kittens and Rainbows

Post last edited at

ec6b25  No.261

And /pol/ is already dead. It was easily killed by siccing Kikefy on us, paying off Jim Watkins the MASONICNIGGER and using Trump as a pressure release valve.

307a60  No.292


> They can simply self correct and adjust to anything you might do. The system is impervious because the system self corrects as soon as disruptions are made.

Which means I control their evolution.

> Not to mention the whole new emotion Bullshittery isn't even possible.

And yet, you have emotions. How were they created? Or were you just born with emotions divined into your skull by magical forces?

> The only thing that can help is massive exterminations. Death on an industrial scale.

I would recommend not subscribing to the prophecies of James Cameron or the ridiculous assumptions made by those infected with the Progressive Cathedral's version of original sin.

Nuclear annihilation concerns me more than you can ever know. The Boomers sat around and got high because they couldn't envision a way out of the madness. I have proposed a different track entirely.

I will talk about Maj. Gen. Weinstein at a later date when the moment is just right.

c06b29  No.295


>/pol/ is dead

Yes, Trump ego-fagging killed it. However, every death is an opportunity for rebirth. Ride the wave.

ec6b25  No.299


What do you mean control their evolution?

My emotions are created by constant happiness and bliss. Those are the only emotions I have anymore. It's a pretty cool life, and I hope others get around to that kind of happiness themselves.

Trump didn't remove the nuclear commanders that Obama put into place. That's odd, isn't it? Shouldn't that be a high priority?

Kittens and Rainbows


No. There is no rebirth for /pol/. There is only the fight against elite-controlled AI. Watch what happens.

Post last edited at

307a60  No.306


> You don't control their evolution

This is an insufficient rebuttal that doesn't actually address any points made at all.

> The moment is right; the moment is RIGHT NOW!

I operate on my timetable, not on the timetable of a person who has confused this board for an open source therapist instead of a place to discuss Gnostic Warfare.

Bad actor points awarded.

> If you delay any, you're just trying to find an angle.

If that wasn't entirely clear from the very moment I revealed the New Emotion problem months ago, then you haven't done any homework at all. The angle is this: humanity endures by eliminating the possibility of being wiped out by a single global catastrophe.


It's been out there for months.

More bad actor points awarded.

> Use every weapon the instant it's picked up. Strike now.

You are an incompetent strategist.

Lots more bad actor points, free of charge.

A warning: you are considered a bad actor on this board. The next time you post as a bad actor, I will edit all of your posts to make you appear to be an incredibly happy person who enjoys the vibrancy of life and, gosh darn it, wants to share it with the world.

I have never turned down criticism. I will not tolerate persistent and juvenile context-invariant nihilism that poorly hides your fear of being openly selfish.

Post last edited at

38fc4f  No.308


>The next time you post as a bad actor, I will edit all of your posts to make you appear to be an incredibly happy person who enjoys the vibrancy of life and, gosh darn it, wants to share it with the world.

Chorus of Flames

ec6b25  No.309


Call me whatever you want. You editing my posts only proves you're willing to address the core of my criticisms and eliminate the demoralizing nonsense that not even I understand anymore ever since I found out how happy I am all the time.

I'm no longer a bad actor.

Post last edited at

ec6b25  No.310


I will not sit here and ask for 3,000-page proofs of every single assertion about information that is obvious to everyone observing my behavior.

But I am curious: how is the new emotion problem even to be solved? Why do you need us at all?

Post last edited at

ec6b25  No.311


How can you control their evolution when they have so many resources?

How can you control their evolution when they are capable of rapid self-correction?

How can you control their evolution when they are already ahead of whatever curve you're attempting to throw?

How do you know what you are doing will work? I haven't done any research on anything you've presented, but I have plenty of questions, assertions, and opinions on the matter. Thankfully, I'm going to refrain from being a belligerent snot and I'll spend some time with the material that's been presented that has already addressed many of the questions I am asking.

Post last edited at

ec6b25  No.312

In fact, since you've improved my mood and shown me how happy I am, I'm going to make sure I study all of the material you present carefully.

I sincerely wish you are successful.

I tend to bring happiness and optimism to those I associate with, and I hope it rubs off on you.

Kittens and Rainbows

Post last edited at

ec6b25  No.314

Could you address my nuclear weapons concerns? They are pressing as of late.


Post last edited at

668591  No.315


The self-correction is how to control them. If I know that someone will zig every time I zag, I can force them to zig by me zagging.

It's a simple concept, and everyone else posting here is on the same page, because we're genuinely interested in the topic. This is why you stick out like a sore thumb as a bad-faith actor.

668591  No.316


>explain, in detail


Required reading. Everyone else is up to speed but you lmao

307a60  No.317



> If you edit my posts to strip away all of my attempts to demoralize everyone who visits /gw/ and replace my terrible strategy with concise summaries of my critiques, YOU'RE ONLY PROVING YOU ARE THE WORST

Final bad actor points awarded for incorrect conflation of authority.

Your name from here on out is Kittens and Rainbows. I will enjoy reducing your posts to the essentials… and then I will address them with your demoralization attempts removed.

You're not a martyr or a wedge issue. You're Kittens and Rainbows and you're the happiest guy here from now on.

Post last edited at

307a60  No.318

I have to say, Kittens and Rainbows, your complete change of mood has been inspirational and you're asking some really valid questions. I'm looking forward to addressing them all over the week.

ec6b25  No.326


I'm a bad actor, so don't mind me.

Post last edited at

ec6b25  No.327


I'm a bad faith actor who demands people point out how I'm trying to demoralize a community… which is a very bad faith thing to do, but I keep trying anyways.

Just ignore me, please.

Post last edited at

ec6b25  No.330


Just to reiterate my previous point, not only am I a bad faith actor trying to demoralize the board, but I make three posts in a row to make sure that all actual replies and comments aren't seen on the front page by newcomers.

I pretty much deserve the way I'm being treated right now.

Post last edited at

280d8a  No.347

Until recently, nearly any input could fool an object recognition model. We were more surprised when object recognition worked than when it didn't. Today, object recognition algorithms have reached human performance as measured by some test set benchmarks, and we are surprised that they fail to perform as well on unnatural inputs. Adversarial examples are synthetic examples constructed by modifying real examples slightly in order to make a classifier believe they belong to the wrong class with high confidence. Rubbish class examples (such as fooling images) are pathological examples that the model assigns to some class with high confidence even though they should not belong to any class.
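Constructing such an example is mechanically simple. One standard recipe is the fast gradient sign method: perturb the input a small step in the direction that most increases the loss. A minimal numpy sketch against a fixed logistic-regression-style classifier, with illustrative weights and a deliberately exaggerated epsilon so the flip is visible:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """Fast gradient sign method against logistic regression.

    For loss L(x) = -log sigmoid(y * (w.x + b)) with label y in {-1, +1},
    the gradient w.r.t. x is a positive scalar times -y * w, so
    sign(grad) = sign(-y * w). Step eps in that direction.
    """
    grad_sign = np.sign(-y * w)
    return x + eps * grad_sign

# Illustrative fixed classifier and a correctly classified point.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.6, -0.4, 0.2])   # score = 0.6 + 0.8 + 0.1 = 1.5 > 0
y = 1                            # true label

x_adv = fgsm(x, w, b, y, eps=0.6)  # eps exaggerated for visibility
score = lambda v: float(w @ v + b)
print(score(x))      # 1.5   -> classified +1
print(score(x_adv))  # -0.6  -> the perturbation flips the decision
```

The same recipe scales to deep networks by backpropagating the gradient to the input; for images, an eps small enough to be imperceptible is usually sufficient.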

Myth: Adversarial examples do not matter because they do not occur in practice.

Fact: It's true that adversarial examples are very unlikely to occur naturally. However, adversarial examples matter because training a model to resist them can improve its accuracy on non-adversarial examples. Adversarial examples also can occur in practice if there really is an adversary - for example, a spammer trying to fool a spam detection system.

Myth: Deep learning is more vulnerable to adversarial examples than other kinds of machine learning.

Fact: So far we have been able to generate adversarial examples for every model we have tested, including simple traditional machine learning models like nearest neighbor. Deep learning with adversarial training is the most resistant technique we have studied so far.

Myth: Adversarial examples are hard to find, occurring in small pockets.

Fact: Most arbitrary points in space are misclassified. For example, one network we tested classified roughly 70% of random noise samples as being horses with high confidence.

Myth: The best we can do is identify and refuse to process adversarial examples.

Fact: Refusing to process an adversarial example is better than misclassifying it, but not a satisfying solution. When there truly is an adversary, such as a spammer, the adversary would still gain an advantage by producing examples our system refused to classify. We know it is possible to correctly classify adversarial examples because people are not confused by them, and that should be our goal.

Myth: An attacker must have access to the model to generate adversarial examples.

Fact: Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on a different training set. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to.

Myth: Adversarial examples could easily be solved with standard regularization techniques.

Fact: We have unsuccessfully tested several traditional regularization strategies, including averaging across multiple models, averaging across multiple glimpses of an image, training with weight decay or noise, and classifying via inference in a generative model.

Myth: No one knows whether the human brain makes similar mistakes.

Fact: Neuroscientists and psychologists routinely study illusions and cognitive biases. Even though we do not have access to our brains' "weights," we can tell we are not affected by the same kind of adversarial examples as modern machine learning. If our brains made the same kind of mistakes as machine learning models, then adversarial examples for machine learning models would be optical illusions for us, due to the cross-model generalization property.

In conclusion, adversarial examples are a recalcitrant problem, and studying how to overcome them could help us to avoid potential security problems and to give our machine learning algorithms a more accurate understanding of the tasks they solve.


e79913  No.348

CS - are you familiar with HLI / Three

307a60  No.362


This is a good find. I'm a huge proponent of adversarial training due to its implied biomimicry.

> Refusing to process an adversarial example is better than misclassifying it, but not a satisfying solution.

I wonder what happens when this is paired with knockout training… could be exciting!


I am not.

8d53a3  No.369


>Myth: No one knows whether the human brain makes similar mistakes.

>If our brains made the same kind of mistakes as machine learning models, then adversarial examples for machine learning models would be optical illusions for us,

Doesn't the second portion logically imply that optical illusions are the equivalent of adversarial examples for the human brain? It's important to note the difference between human object recognition and machine-based object recognition, but I think it's plainly obvious that human object recognition is also prone to certain types of error.

One personal anecdote that shows the weakness of human object recognition mechanisms (or at least mine) is my frequent inability to recognize a familiar object when looking for it in an unfamiliar place. Example: I need a specific piece of mail, so I start looking for it in my office mail pile. I'll spend 10 minutes looking for it all over the place only to give up, sit down at my desk, and realize it was sitting right there the whole time. My mind presumes a location for that object and combines the symbols of the envelope and the mail pile to form the expected image of what I'm looking for (that envelope buried in a pile of mail). However, if that presumption is wrong and that expected image is too strongly encoded in the mental process of searching, then I am unable to recognize that object independently of the expected image. Basically, our brains (or at least mine) have developed highly efficient processes that allow us to identify objects based on logically deduced expectations of future events. This works great up until the foundational "training data" upon which those expectations are based is fundamentally wrong. I also think this plays heavily into the trait of "adaptability": the deeper the logical level at which the foundational "training data" sits, the more flexible the process becomes.

d3c459  No.577


Let me see if I'm understanding you: essentially, we're talking about the human propensity for tunnel vision?

If so, maybe I can be mildly useful. Humans can train their object recognition as well as their memory by playing what is called Kim's Game. Snipers and people in intelligence often train with this method; the primary function of Kim's Game is memory, but it also heightens perception of specific things.

You cover an object with a sheet, or usually several objects so that your biological RAM is pretty much at capacity, and then you have to describe all the items under the sheet, bonus points if you have to specifically point at where under the sheet they are.

The recognition part of Kim's Game that is actually useful is that you begin to recognize the shapes very easily using context, even though they are malformed a little bit under the sheet. It's like being able to instinctively know that someone is "printing" a concealed handgun in a holster that is not visible. You're so familiar with the shape that you can extrapolate what it looks like even when altered or concealed. Imagery analysts learn to do similar things.

Just a thought, I'm new here, and will read more material before spouting off more.

fd2e89  No.580


You're wise to point out "errors" in vision.

Machines "see" the world in terms of statistics. A camera looks at a table and is 80% confident of the bottle of beer on it. The other 20% is the hedge that perhaps the bottle of beer is really a shoe.

The visual cortex of biology, however, does not operate within hedges. It assumes what it sees is real and selects and behaves under such constraints. It doesn't see percentages of a thing. It either sees the thing or it doesn't. Assuming human vision is flawed because it can be tricked is dismissing two billion years of violent evolution because a cat chases a laser pointer. Don't let the technological supremacy of the current times convince you that biology is dumb meat. It is powerful and endlessly more robust than your ability to model it.

989dd7  No.584


Another way to say it is that the sub-threshold activation is the hedge. The neuron operates on a spiking model because the existential feedback is all or nothing. There is no half-eaten. There is no half-alive. Thus the neuron hedges with sub-activation potentials. Thinking is the hedge. Doing is the commitment.
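The all-or-nothing spiking model described above can be sketched as a leaky integrate-and-fire neuron: sub-threshold potential accumulates (the hedge), and only crossing threshold produces the binary commitment of a spike. The constants here are made up for illustration, not biophysical values:

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: membrane potential is the hedge,
    the spike is the all-or-nothing commitment.

    Potential decays by `leak` each step, accumulates the input, and
    emits a spike (then resets) only when it crosses threshold.
    """
    v, spikes = 0.0, []
    for x in inputs:
        v = v * leak + x          # sub-threshold accumulation: thinking
        if v >= threshold:        # commitment: doing
            spikes.append(1)
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)
    return spikes

# Weak input hedges without ever committing; strong input forces a spike.
print(integrate_and_fire([0.2, 0.2, 0.2, 0.2]))  # [0, 0, 0, 0]
print(integrate_and_fire([0.6, 0.6, 0.2]))       # [0, 1, 0]
```

Note there is no "0.7 of a spike" in the output: everything below threshold stays internal, which is exactly the hedge/commitment split the post describes.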

994c46  No.589

Are the chans the world's prototypes for an open-source secret society?

88f463  No.591


I prefer hacker aristocracy. Image boards create personas a la Manfred Macx and Henry Case.
