
/tech/ - Technology


File: 56845a737a89611⋯.jpg (16.19 KB, 195x260, 3:4, Fowler's Universal Calcula….jpg)

File: 55051482d8f20dc⋯.jpg (44.66 KB, 500x394, 250:197, a3071ddf553147e2b5706cb58d….jpg)

File: 9f52f973ee2a664⋯.jpeg (7.05 KB, 272x185, 272:185, images (1).jpeg)


What would computing look like if we redesigned everything from the hardware up, with no attempt at backwards compatibility? Reinvent computing in 2018.


>no attempt at backwards compatibility.

A dead project.


Mill CPU


There wouldn't be an OS and the hardware itself would handle things like scheduling, memory management, garbage collection, I/O, virtual memory, files, and access control. There are also HLLCAs, a non von Neumann architecture that actually executes source code of programming languages. The CPU has components for parsing statements and expressions instead of decoding instructions. There are books on this from the early 80s that described what was supposed to be the future before x86 and RISCs took over everything.

In an alternate universe where these designs became popular a long time ago, we wouldn't even be able to imagine what their 2010s computer would be like. The most recent thing I can find is TIARA from MIT, which is a descendant of the Lisp machine, but that looks like it was abandoned (or the NSA wanted to keep the technology out of the public's hands).






That's the dumbest shit I've read on 8chan in my entire life. The hardware does all that shit? Do you have even a modicum of knowledge of the complexity of these subsystems you want to implement in unmodifiable silicon


I'd turn the 0 and 1's into 0 and 8's, just because it would make AI more plausible.

0 8 88 8888 etc


I don't know what it would look like, but for sure the hardware would give the user absolute control, such as the ability to halt execution, inspect & modify registers & memory, set breakpoints and step through... all done at the hardware level, and at the most privileged level (everything else would be less privileged, including any firmware).



You can already do that



If you enjoyed Intel CPU bugs that don't implement an OS, you will thoroughly enjoy your CPU bugs that implement a full OS.



Not if you're using current-generation CPUs. There'll absolutely always be black boxes with higher privileges than you.


File: 5764734394f1062⋯.mp4 (656.34 KB, 320x240, 4:3, squidward future.mp4)









Use trinary just to see what happens



Everything is reconfigurable configware/flowware running on FPGAs.



I think there was a Soviet computer that did that.




The Soviets played around with trinary for a while; it could have had some interesting benefits over binary, but the hardware issues were tougher, so they copied the West instead.



Is trinary not what quantum computing is?




Quantum computing has three states. Trinary computing has three states. How are they different?



Instead of 1s and 0s you have -1, 0 and 1. Immediate benefits appear, such as not needing to sign numbers with tricks like we use now; it also seems that code density should improve somewhat. You could also implement more flags and features in a given trit-width architecture, for instance.
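For the curious, here's a rough Python sketch of balanced ternary (the -1/0/+1 scheme the post describes); the helper names are my own, but it shows why no separate sign trick is needed: negating a number is just flipping every trit.

```python
# Hypothetical sketch of balanced ternary (digits -1, 0, +1).
# Negative numbers need no sign bit or two's-complement trick.

def to_balanced_ternary(n):
    """Return the balanced-ternary digits of n, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3          # remainder in {0, 1, 2}
        if r == 2:         # digit 2 becomes -1 with a carry into the next trit
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits[::-1]

def from_balanced_ternary(digits):
    value = 0
    for d in digits:
        value = value * 3 + d
    return value

print(to_balanced_ternary(8))    # [1, 0, -1]  (9 - 1 = 8)
# Negation is just flipping every trit, no sign handling needed:
print(from_balanced_ternary([-d for d in to_balanced_ternary(8)]))   # -8
```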


What if current day quantum computers are just tricky forms of trinary and it's all snake oil? I wouldn't put it past (((them))) after knowing what we know now.



Doing some Wikipediaing, the number of states goes up the more qubits you add. So a 1 qubit quantum computer has three states, but 2 qubit would have 4 and 3 qubit 8 states.


I kind of had some branching fractal ideas that would replace the von Neumann architecture back in my edgy "im cool mom, fug off" phase, but I kinda forgot about them and lost the notes.

Would like to see some new type of architecture tho. Von Neumann is too rigid in my opinion.



> So a 1 qubit quantum computer has three states, but 2 qubit would have 4 and 3 qubit 8 states.




I'm not claiming to know anything about quantum computers, but I don't think a superposition is a third state. It just means the state of the bit is uncertain, with some probability associated to it. When you have 2 bits, they can have 2^2=4 states. (00 01 10 11) When you have 2 qubits there are still these 4 states, but the difference is that you can only manipulate the probability that the qubits are in one of those states. I think.
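That matches the usual textbook picture. A toy sketch (not a real quantum simulator; the equal-superposition example and variable names are my own) of a 2-qubit register as 2^2 = 4 complex amplitudes:

```python
import math

# Toy sketch: a 2-qubit register has 2^2 = 4 basis states
# |00>, |01>, |10>, |11>, each with an amplitude; measurement
# probabilities are |amplitude|^2 and must sum to 1.

amps = [1 / 2] * 4                      # equal superposition: (1/2)^2 * 4 = 1
probs = [abs(a) ** 2 for a in amps]

print(probs)                            # [0.25, 0.25, 0.25, 0.25]
print(math.isclose(sum(probs), 1.0))    # True: the state is normalized
```

So superposition is not a "third state"; it is a probability weighting over the same four classical states.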



Yeah, and it didn't do anything spectacular compared to binary. It just stores data in a different fashion, and all that does is use fewer digits per decimal place or text character. That is all.



Ternary, not "trinary", you faggot = 3 possible states: -1 = OFF, 0 = MidRange (e.g. 5v), 1 = HighRange (e.g. 10v)

Quantum = 2 Possible states: 0 = OFF, 1 = ON

Quantum computers, however, have every bit both 0 and 1 at the same time, undetermined until such time as it is observed, ala Heisenberg's uncertainty principle



wow I butchered that.

Every qubit in a quantum computer is both 1 and 0 at the same time. This allows you to perform certain functions, like prime factorization, extremely fast as you can compare outputs a magnitude of 2 quicker per qubit



the bigger question is whether or not the quantum computer will come with ME or PSP



Why not both?





Replace von Neumann with the Harvard architecture, or make purely functional CPUs like Urbit but in hardware and less kooky


how about no?



I binged on the lecture videos about the Mill CPU recently, they gave me an immense tech-boner, but it's still very much based on being backwards compatible.


>>850315 Curious how long it would take for an original x86 type design.


Analog CPUs ftw.



That's because the world of business sits upon old but working software that long ago lost its parent company, let alone its source code, and cannot be ported to any other platform, so they stick with hopelessly obsolete OSes and hardware just to keep using it. And it's not your cookie-cutter office-suite type of software either; it's all purpose-built, so you can't just replace it with something else.



Intel does implement a full OS, and apart from a few bugs it’s been running smoothly in tens of millions of devices for literally decades.



Zilog processors have been running in tens of billions of devices for a few more decades than Intel's, and never had any problems. You can just admit that Intel does a substandard job at CPU design.



This was new to me. I'm familiar with VLIW as used in the Itanic architecture. How does Mill differ? Is it VLIW done right?

I note one advantage of VLIW is that it eliminates speculative execution and hence is immune to SPECTRE vulnerabilities.





And no. Ternary means that instead of 0 and 1 there is 0, 1 and 2, thus enlarging the informational capacity and the speed of processing said information.

Basically, with the same amount of transistors and a little bit of electrical-engineering trickery you could speed up your processor. Thing is, this shit needs to be built from the ground up, which is why nobody fucked with ternary but the Soviets in the 70s (and they only made ternary computers transistor by transistor, not this silicon die shit)



It's basically VLIW with variable-length instructions. ~33 instructions/cycle/core for a 32-bit design as of 2013.



Fractal computation? What does that consist of?



It's not faster. A binary computer evaluates data bit by bit, a ternary computer evaluates data trit by trit. You can scale the register size however you want to process more data units at a time, but the principle stays the same. Using wider words is only a speedup in the sense that a computer only capable of short words needs to carry out multiple operations to evaluate a long word. Using wider words doesn't allow you to compute multiple short words at a time; you need vector processing for that, and again it scales the same way - it doesn't have to do with the type of data storage. Nobody makes ternary computers because nobody can get transistors to work well in linear mode under such extremely tight constraints and voltage tolerances - it's hard enough to make them work in binary.



The Soviets claimed that they were getting better state transition speeds with the ternary transistors. Americans couldn't replicate it, then spent billions of dollars making fast binary transistors. One of the reasons that it wouldn't be faster is because ternary transistors are still in the 70s/80s.


Just passing by to say that quantum computing will never happen, because the theory behind it is wrong and therefore so are its predictions.

That must be why all the hype so far has been from simulations and nobody actually managed to make it.



Soviets were claiming a lot of horseshit. Anyway to get a transistor to work with ternary data it needs linear mode, but at current scales quantum effects dominate the physical process of transistor operation, making it difficult to distinguish between two opposite states, let alone tell some third state in the middle.



Also, in linear mode transistors' efficiency plummets and they start dissipating a shit ton of heat while drawing a shit ton of power. Hence transistor-based power supplies all work in PWM mode, not linear mode. For all the drawbacks of power only going either zero or full blast, it beats using 10x as many transistors and batteries/electricity, and putting huge radiators on them to boot.



Enjoy the safety of a dual security system.* *At the cost of 1970s performance. Innovation that excites, and is more diverse.



Nigga what , they are real. Just look up dwave!



He's suggesting dwave is just using a simulator.


I want to redesign the entire world and be paid in food, clothes and a room to live in.

Everything else is BULLSHIT.



You dumb faggot

Trit has more information stored in it than a bit

It takes 2 trits to store information of around 3 bits

Dumb cuck
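Setting the tone aside, the information-density claim checks out numerically; a quick sketch:

```python
import math

# One trit carries log2(3) ~ 1.585 bits of information, so two trits
# hold a bit more than three bits, as the post says.
bits_per_trit = math.log2(3)
print(round(bits_per_trit, 3))        # 1.585
print(round(2 * bits_per_trit, 2))    # 3.17
print(3 ** 2, 2 ** 3)                 # 9 8  (2 trits: 9 states vs 8 for 3 bits)
```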



Also, linear mode is not needed

Only negative and positive voltages

t. EE student with uni project based on ternary computers



>muh politics dictates mathematical truths

This is why, even though you are somewhat intelligent, you will never look further than "I know the best" attitude. Your potential bright future will be denied by your bloated ego.

If you looked up maths behind ternary computers then you would see what I am talking about

go sesrch for texas masters paper on ternary computers



>what is D-wave

>what is IBM quantum processor

Shiggy the diggy



Marketing brands



Dwave is not a quantum computer.

IBM Q is a research product, it's not something that you can just buy and run turnkey.



And just how do you drive a transistor with negative and positive voltage, to get it to output negative or positive voltage?



And decimal has more information than both, it takes just over 2 trits to store one decimal place. Your point is fucking retarded and is not relevant.



Draw a diagram of the operation, blanket explanation will not suffice.


File: b93ed5cab46c740⋯.png (13.41 KB, 600x488, 75:61, glXad.png)


Dumb nigger


Moving the goalpost

Also, yea Dwave is quantum



I won't teach you basics of electrical engineering, but here is a pic of ternary not gate for you, dumbo


You do not know how decimals in binary computers work

Or what the informational capacity of a trit is




Also, here is truth table for ternary not gate because you are an idiot

Input: -0.5v 0v 0.5v

Output:0.5v 0v -0.5v
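If you encode the three voltage levels as trits -1/0/+1 (my own mapping of the -0.5 V / 0 V / +0.5 V levels above), that truth table is just negation:

```python
# Sketch: the ternary NOT from the truth table above, with voltage
# levels -0.5 V / 0 V / +0.5 V mapped to trits -1 / 0 / +1 (my mapping).

def ternary_not(trit):
    assert trit in (-1, 0, 1)
    return -trit

for t in (-1, 0, 1):
    print(t, "->", ternary_not(t))    # -1 -> 1, 0 -> 0, 1 -> -1
```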




Good. Now show me how you pack this gate into millions of transistors.



Here is further explanation for your dumb ass

M2 is pchannel mosfet

M1 is nchannel mosfet

M2 connects output to 0.5v when gate is lower than source (ie -0.5 < 0.5)

M1 connects output to -0.5v when gate is higher than source (ie 0.5 > -0.5)

If 0 is in the input then you have direct line between 0.5 and -0.5

if the input is left hanging at high impedance, then the resistors make a voltage divider with 0 on the output



NOT gate is trivial, post XOR and NAND.



>into millions of transistors


>logic gate into transistors

You have no idea of what you are talking about, do you?

Do you know what planar technology even is?

Do I need to explain to you how silicon dies work? Wtf? Are you this pompous and dumb at the same time?



There is a whole book on maths and electrical circuits of logic gates of ternary computers

You do not want to know about it, you want to win this discussion

You are too lazy to learn for yourself, I am too lazy to copy paste shit for you



>hurr durr educate yourself

Opinion discarded.



Yes Yes Yes No



Nice non-arguments, with non-knowledge.


File: 3732fb1294bff5b⋯.png (10.17 KB, 344x600, 43:75, and.png)


Actually nevermind, 3 seconds on google revealed that you yourself pulled that shit from google. Here's an AND gate. Evidently it takes twice as many transistors to build a ternary gate as a binary gate. It is my assumption that it takes twice as many elements to make a ternary latch too. So it halves element density: a die of any given size will have twice as much bit-wise shit in it as trit-wise. Meanwhile, using ternary does not halve the element count, it only cuts it by a factor of about ⅓. Therefore you're at a net loss of computational power per unit of die area just for using ternary instead of binary.



Yea I pulled it off jewgle because you are too dumb to do it yourself and why would I waste any more time on you than that?

Also, why make a picture when you can just download it?

Next thing, your assumption

You are fucking wrong

Other than that pretty good, except not realizing that trits have bigger info capacity, therefore bigger speed for equal amount of transistors



They use the same transistors, therefore they switch at the same rate, therefore the gates operate at the same speeds. You need log3(X)/log2(X) = 0.63 as many gates to compute a number of length X, but they're double the physical size, so you're at a 26% net loss of performance when space constrained, or a 26% increase of die size otherwise. Come up with a transistor that can actually handle three states, then we'll talk.
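The arithmetic in that post can be checked directly:

```python
import math

# Digits needed to represent X in base b is log_b(X), so the
# trit/bit digit ratio is log3(X)/log2(X) = ln 2 / ln 3 ~ 0.63.
# If each ternary gate is twice the area, the area ratio is
# 2 * 0.63 = 1.26, i.e. the ~26% net loss the post claims.
ratio = math.log(2) / math.log(3)       # log3(X) / log2(X), for any X
print(round(ratio, 2))                  # 0.63
print(round(2 * ratio, 2))              # 1.26
```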



>but they are double the physical size

Wrong assumption again

Go away and come back when you learn more about the topic

you never heard of how ternary logic would work with negative voltages a few hours ago, stop acting like a know-it-all



Two transistors are double the size of one.



What two transistors are you talking about?

Just based off one AND gate you got to that conclusion?

You do realize they don't lay out discrete logic gates on processors, and do optimization to utilize more space?

You know NOTHING of what you are talking about. Nothing.



You have two current directions. Transistors don't work in two directions, so you need a transistor for each direction at each junction. Therefore two transistors are needed where binary system only needs one. It doesn't even have to do with gates, just basic physics.



I'd probably go with a -1 to +1 ternary for the sake of memory (0,1,2 strikes me as more error prone) and place a greater emphasis on distributed computation. We might be capable of optical computing at this point, as well.



dumbest /tech/ post today. congrats. Garbage collection in the hardware? Really? I think /r/programming would be more comfortable for you.

Thinking about it a bit, I really only want one thing: fully documented hardware and standards. Far more strict interface and communications standards between any components that are meant to be interchangeable. An open and documented boot process that is completely under the end-user's control. Black-box hardware can be interfaced into the system but it is quarantined to a restricted bus and separate memory pool with bottom-bitch status DMA onto the main memory bus. Any black box hardware that can't adhere to the open standards gets the functional, but slow lane to account for their probable inclusion of street shitter firmware coding complete with NSA backdoors. Back on the boot process - a real BIOS with reasonable, if unoptimized, system functions in firmware - somewhat like the Amiga. Also none of this new bullshit where critical parts of the peripheral hardware's firmware is absent until this new convoluted bullshit boot sequence loads it up from *somewhere* (most definitely an ease-of-use feature for NSA).

I guess what I want is control of my system back.


Something that addresses all points made in https://blog.invisiblethings.org/papers/2015/state_harmful.pdf.



>>muh politics dictates mathematical truths

Given that what I said was that they were getting better speed out of the machine due to its fundamentally different hardware, I think that you are the only one whose politics dictate their mathematical truth. While the things the Soviets said should be taken with a grain of salt, dismissing them out of hand only reveals your own political biases.

> If you looked up maths behind ternary computers then you would see what I am talking about

> go sesrch for texas masters paper on ternary computers

After a quick Google search, I'm going to have to assume that you made "Texas Masters" up. Props to you on sending me on a wild goose chase for a person with an unfortunate name that fills search results with tons of unrelated shit.

The only things that I can find on ternary computers are a bunch of computer scientists (applied mathematicians, not electrical engineers) saying that it would be awesome to have ternary computers because they have better information density.


File: e15066438600907⋯.jpg (823.12 KB, 1080x1920, 9:16, 2018-01-14 13.11.45.jpg)


>>I made my shit up

>even though I glued up the paper to read for my project

Go kys faggot

Your weak Jewgle skills are your own problem, not mine



take notes from >>850441 on how to not be a huge retarded faggot



I mean that's cool and all, but you're basically trading compromised firmware on the chip for compromised firmware binaries in your "trusted device". I don't see the security benefit at all.



The point is to have the root of trust be an owner controlled key versus a hardware vendor's or Microsoft, whether the owner supplies it by toggling it in from a piece of paper on every boot or some other difficult to compromise system. If we're designing a computer from scratch, we don't need to rely on USB firmware.



What does the key have to do with the firmware still being compromised? You still have to download the firmware for your wifi chip from somewhere, and nothing forces them to make it open source.


2018? Nothing.

What we are stuck on is the von Neumann architecture; the problems of decades ago are still the problems today in many ways.

We should redesign when memristors are real.



Since you would control the root of trust, you would also control what firmware and software etc. the system trusts to run, all the way up through bootstrap. Keep in mind this is an ideal, from scratch computer. If you don't trust the firmware, don't infect your system with it. Probably isn't a way to accomplish this on today's pozzed x86 designs.



>What would computing look like if we redesigned everything from the hardware up? with no attempt at backwards compatibility?

Nobody would use your products and the wintel machine would carry on as usual.



HP cucked out of memristors, the whole thing is total vaporware.



That's not an answer to his question, you cocksucking faggot



> Doesn't give a university name

> Doesn't give a human name

> Expects someone to find anything with the vague-ass answer he gave


> How can a ternary ALU compete with the new generation of binary ALUs? The key is in the size of computing numbers. A four bit number can compute a number the size of 16. The four trit ternary ALU can compute a number the size of 81. This number is approximately five times as large. Therefore, the ternary ALU can be as slow in circuit speed as five times slower than the binary ALU and compute the same size of number faster than the binary ALU. This means that if the state-of-the-art for addition is 7 ns, the ternary ALU must only add faster than 35 ns in order to compete with the binary ALU. The design of the ternary ALU constructed utilizing multi-valued logic and, compares, if not surpasses, the computation time of a binary ALU. There are additional advantages in implementing ternary logic instead of binary logic.

-- Page 95 (101 of the PDF)

Your arguments so far for ternary being crap are:

1) Soviets (ad hominem fallacy)

2) Maths (for which you provided a source that directly contradicts you)

As for the Soviets, did they have success because:

> Someone's family would go to a gulag if they didn't claim success?

> Their state-of-the-art was so shit that it wasn't difficult to improve upon, and the machine came at a fortuitous time in history?

> They really had success?

From ( http://mason.gmu.edu/~drine/History-of-Ternary-Computers.htm ) it appears that the answer is the second. I had remembered reading somewhere that the Soviets found the ternary to be faster: apparently that was only true because any new machine would have been faster than its contemporaries due to how shit their machines were.
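The quoted ALU sizing argument is at least arithmetically consistent; a quick check:

```python
# Checking the quoted ALU claim: 4 bits span 2^4 = 16 values, 4 trits
# span 3^4 = 81, roughly five times as many, so a ternary adder has
# about a 5 * 7 ns = 35 ns budget to match a 7 ns binary adder.
binary_range = 2 ** 4
ternary_range = 3 ** 4
print(binary_range, ternary_range)      # 16 81
print(ternary_range / binary_range)     # 5.0625
print(5 * 7)                            # 35
```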



What's spectacular about binary then?



But it's the factually correct answer.



tbh we should have ternary storage devices if nothing else. Converting binary to ternary would reduce the number of digits the device has to store by about a third, which is significant enough to be appealing to at least some consumers.



is that what multilayered ssds are? the bits on hdds overlap too iirc



No. A multi-level cell is a cell that can hold more than one bit. A ternary cell would store trits, not bits. Multi-level SSDs increase storage by increasing the number of bits each cell can store. Ternary would increase storage by decreasing the number of digits that need to be stored in the first place. E.g., 11011011 base 2 is 22010 base 3: 8 bits becomes 5 trits.
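The conversion in that example can be verified in a few lines (the helper name is mine):

```python
# Verifying the example above: 11011011 (base 2) is 219 decimal,
# which is 22010 in base 3 -- 8 bits stored in 5 trits.

def to_base3(n):
    digits = ""
    while n:
        digits = str(n % 3) + digits
        n //= 3
    return digits or "0"

n = 0b11011011
print(n)              # 219
print(to_base3(n))    # 22010
```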


File: 74dae1f889da4a2⋯.jpg (16.25 KB, 480x360, 4:3, scuttle.jpg)

Question. It's generally agreed that quantum computers won't be useful for normal day-to-day computations, but will probably be necessary for advanced number crunching, crypto, etc. Sort of like how we need a graphics card to render 3D. Does this mean one day our computers will contain QPUs?



>Your arguments so far for ternary being crap

You are a fucking autist

In whole ITT I fucking defended ternary computing as it being faster and more efficient than binary



Except the complexity of ternary means each cell is so much larger physically that you can't fit as many on the silicon.



>Does this mean one day our computers will contain QPUs

You do realize that for it to work you have to cool it down to 0 Kelvin, right?



>In whole ITT I fucking defended ternary computing as it being faster and more efficient than binary


I said:

< The Soviets claimed that they were getting better state transition speeds with the ternary transistors. Americans couldn't replicate it, then spent billions of dollars making fast binary transistors. One of the reasons that it wouldn't be faster is because ternary transistors are still in the 70s/80s.

You replied:

>muh politics dictates mathematical truths

> This is why, even though you are somewhat intelligent, you will never look further than "I know the best" attitude. Your potential bright future will be denied by your bloated ego.

As far as I can tell, you were accusing me of being a /leftypol/ shill for commies, and claiming that any Soviet successes were lies because the Soviet Union was the Evil Empire of subhumans that your Fuhrer failed to destroy. I assumed that I was talking to a /pol/lack that had been exiled from his glorious dictatorship for saying something the BO didn't like. If, however, what you intended was that ternary uses the same technology as binary and that there would be no speed lag, well, you could have just fucking said so. Your demeanor came across as if you agreed with this guy: >>851466



0 kelvin is impossible you dumbo



And I thought you said that Soviets "claimed" (aka faked their results for politics)

Sorry for the miscommunication



Quantum computers are more reliable at colder temperatures; you don't need to reach 0 K (which is impossible anyway).

There could be any number of breakthroughs that make quantum computing more reliable at higher temps.


>People are still talking about quantum nonsense here




you can't have hardware without software


File: 30869d76a79ebf8⋯.pdf (14.86 MB, Computer-Architecture-A-Mi….pdf)

<Ultimate meme parallelism chip

>OICP (RSSB has some interesting characteristics)


>Belt machine

>Network on a chip




>An equilibrium scenario is not a possible future state of our universe.

Why? Won't at some point entropy be at maximum, or at least arbitrarily close to it?


File: 49cf9d75bcfef4a⋯.jpg (112.24 KB, 1024x683, 1024:683, setun.jpg)




It was called the Setun and had two models. In the period of Khrushchev's Thaw, the USSR saw considerable liberalization of arts and sciences, and this included experimentation in cybernetics. For a while, the government, which under Stalin denounced cybernetics as bourgeois nonsense and thus made it effectively forbidden, now showed an interest in its potential use for increasing productivity, as the planned economy was a mammoth task whose solution obviously was far from being optimal. Meanwhile, among other oddities such as a mechanical computer moved by hydraulics, the ternary computer was conceived, and thanks to the technology of the time, a ternary computer was much, much cheaper and easier to build, as it used far fewer components and consequently had fewer hardware malfunctions. It was so reliable and cost-effective that the factory assigned to build it tried to sabotage production, preferring to sell big old binary machines because the bigger cost and price were better for their bottom line. So sadly it only sold about 50 Setuns.

There were also, I think, at least 4 pitches to create what would essentially be a first internet, connecting factories, universities, government buildings etc. in order to optimize production and organization, but as you might guess, none were approved, partially for being overly ambitious, partially due to... political carelessness. And that was even before Khrushchev was ousted. Before that happened, the government's brief flirt with cybernetics ended rather abruptly, in some measure because it worked too well. Early government-sanctioned experiments with optimal planning were carried out, and the results were very encouraging. The most likely conjecture goes, the bureaucrats that ruled the country realized that such efficient automation of management meant they would lose power. So cybernetics lost the interest of the State, but at least it wasn't verboten as it was under Stalin. Later on, Setun's creator and his team made a new iteration called the Setun-70 (though the project started in '67), but by then the wave of liberalization was gone, and this new project had its budget and location fucked around with by its university, and the very original Setun was destroyed with a fucking axe and thrown in the dump. The Setun-70 project survived and reached completion, but it was never mass-manufactured. It didn't help that the transistor was becoming the norm, putting binary computers almost on par with the Setun. It still had its advantages, but not enough to escape the enmity of the establishment.

And even in that short timespan when the authorities wanted to hear what cyberneticians had to show, they still had to wrangle bureaucrats and managers every step of the way, not to mention deal with politicking among their own peers. This reached its apotheosis in 1969, when the government decided to flat-out abandon all native computer development, and instead rely on Western technology, at the time, the IBM 360. This not only guaranteed that they would necessarily always lag behind America's computer science, but wasted a million possibilities that had cropped up in the brief blooming of Soviet cybernetics, including the ternary computer and automated management. Just imagine it: you're an academic who spent your entire life trying to research computers on the downlow, and when finally you were given a chance, you showed that you and your peers might be able to fix the mess that was the planned economy... only to be chased back into the darkness, as the same bureaucrats that gave you a chance now threw your entire life's work down the drain. It's maddening.

Lastly, an odd note. To his dying day, Setun's creator maintained that a ternary system was superior not just because of computer architecture, but from an epistemological angle as well; it was how the human mind operated (yes, no, maybe), whereas binary's everything-nothing principle "excluded the middle". He would refer to Aristotelian syllogistics as the philosophical base, and to Lewis Carroll's (yes, he of Alice in Wonderland fame) symbolic logic. I confess this is way out of my depth, however. Here's his explanation: "Aristotle's logic is ternary and that ternary is a necessary, but not a sufficient, condition for adequate logic". Yeah, I have no idea either. Regardless, the Setun models remain the only electronic ternary computers ever built, notwithstanding some garage-made novelty.



Base 3 computing. It should be focused on efficiency and security.


File: c3d4b6ec0e903e7⋯.png (130.88 KB, 1256x735, 1256:735, fukt.png)


RISC without the risk


Analog computers. They could theoretically run way faster and in real time.


File: df99d65634fe4ca⋯.gif (1.52 MB, 480x328, 60:41, bq-tell-me-more.gif)



Theoretically, everything analog is better, but it's also extremely hard to tame and work with at higher levels.



There are no "0s" and "1s" really, just two states which are distinct from one another. What one state or the other means in a specific context is relative to that context, and labelling them as "0" and "1" is just an arbitrary abstraction done by human beings.





According to Wikipedia, an analog computer or analogue computer is a form of computer that uses the continuously changeable aspects of physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved. In contrast, digital computers represent varying quantities symbolically, as their numerical values change. As an analog computer does not use discrete values, but rather continuous values, processes cannot be reliably repeated with exact equivalence, as they can with Turing machines. Unlike digital signal processing, analog computers do not suffer from the quantization noise, but are limited by analog noise.


File: dfec8b934362579⋯.png (174.89 KB, 553x502, 553:502, untitled.PNG)



Zilog didn't have as many bugs because of die size, and the bugs were retroactively labeled as features afterwards anyway


We would half assemble a Babbage machine, then give up because one group hates the logo, and the other group is too busy masturbating to it.



It wouldn't even get halfway. We couldn't even collect all the pieces of the machine, let alone assemble them in the proper order.



holy fuck pajeet stop posting

I'm a CE student and reading your post is painful

I can upload some books for you, if you want



our computers are analog

it's just that we use digital to save us from the small variations in voltages that can happen for almost anything



Sometimes I think I should have gone to Texas Tech. Too late now.


That's fucking bullshit. Memristors have so much potential.


Stack machines as far as the eye can see. Everything would be running a Forth-based OS, and all graphics would be blitted to the screen.



>ternary ALU

How would ternary be better? Binary is the most elegant because it's the smallest base that's useful for a positional number system (unary doesn't count, since 1^n = 1 for every n). Any larger base is just as arbitrary as base ten or anything else (bases like 4, 16, 256 etc. are somewhat of an exception, as they are shorthands for binary, with the number of binary bits per symbol itself a power of 2). In particular, an odd-numbered base like three (all powers of which are odd as well) seems particularly ill-fit for the purpose. Also, there is no equivalent of the convenient coincidence of 2^10 being very close to 10^3 (a power of 3 comes within 5% of a power of 10 only at 3^21 = 10460353203). Finally, all physical data storage and transmission implementations and encodings become more complex and less reliable (and/or require more overhead to maintain reliability) with more than two distinct symbols.
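For what it's worth, those two numeric claims check out; a throwaway verification (Python) of the 2^10 vs 10^3 coincidence and of the first power of 3 that lands within 5% of a power of 10:

```python
import math

# 2**10 = 1024 sits within 2.4% of 10**3.
print(2 ** 10 / 10 ** 3)          # 1.024

# Scan powers of 3 for the first one within 5% of some power of 10.
for k in range(1, 30):
    p = 3 ** k
    nearest = 10 ** round(math.log10(p))
    if abs(p / nearest - 1) < 0.05:
        print(k, p, nearest)      # first hit is k = 21
        break
```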



is this a joke or reference?



Wouldn't that require the universe to have a fixed size? If the universe spans outwards in all directions then entropy can never be at "maximum" because energy can always keep spreading outwards.



>universe spans outwards

"outwards" meaning where exactly? what is the carrier the universe is implemented on? how is it encoded?



Outwards from the original point it expanded from.




Please point me to that specific direction.

>from the original point it expanded from

How would you define the location of such a point?



Quality post anon.


File: 50c4d698d892ad9⋯.jpg (796.33 KB, 1000x667, 1000:667, K'NEX Computer Front Close….jpg)




There doesn't have to be a single point. In fact, we don't know if there is such a point or if the universe is infinite in all directions. You can imagine the expansion of space as tiny bits of new space constantly being created between all the other bits of space. Thus the distance between everything is increasing (and at an accelerating rate, because the more space there is between two things, the more space gets created there!), although nothing has to move. This appears as a force drawing everything apart. On the other hand, electromagnetism and even gravitation are much stronger than this pseudo-force, so you and everything around you aren't torn apart in a slow explosion. They keep everything together. Only on a much, much larger scale (think galaxy clusters) does the expansion of space become stronger than those forces.

Another (probably inaccurate) way to imagine the expansion of space is as a slowing down of the speed of light. Nothing moves, and yet things appear more and more distant from one another because force, including light, takes longer to travel the same distance.


I have no tech knowledge at all, but an idea occurs: what if, instead of programming code normally, you instead program in 3 dimensions? Think 3D cube instead of flat paper square.

May need a computer to design this kind of coding though, with that many calculations...






Is different source files stacked on top of one another 3D enough for you?



Galaxy superclusters are predicted to be torn apart eventually, while each regular cluster is to converge into a supergalaxy. And do you know what? The beings who will, many billions of years from now, live in the supergalaxy resulting from the merger of the Milky Way, the Andromeda Galaxy, and all the smaller ones from our local cluster will think that that galaxy is all of the universe, for they will not be able to detect the presence of any of the others in any way. I mean, they will be able to consider that something else might exist, but they will never be able to prove it. The informational bridges to the universe's past will be burnt forever from any local standpoint. Will those supergalaxies, which are hopelessly separated from each other at that point (with the expansion of space surpassing the speed of light before any other supergalaxy could even be spotted, let alone reached), still be fit to be considered part of the SAME universe? Or will this mark the split of the universe as it was into separate universes of their own, even though they are all still parts of the same space which once united them?



>with the expansion of space surpassing the speed of light


>before any other supergalaxy could be even spotted, let alone reached

Of course they can be reached (given enough time).




Yes, as unbelievable as it may sound at first, space far enough from a given point expands faster than the speed of light. Space itself is not prevented from doing that in any way. And once the superclusters are torn apart enough and the local clusters have all collapsed into supergalaxies, that cutoff radius, beyond which the universe ahead of you is carried away too fast even for light to keep up, will be smaller than the distance to any other supergalaxy at any given point.

>Of course they can be reached (given enough time).

No (see above).



You do realize there's a difference between the universe and the observable universe? The edge of the observable universe from our point of view is where space expansion becomes fast enough that light from beyond there can never reach us anymore. Average matter density at supercluster level is too small for gravity to hold it together against the expansion of space, and thus superclusters will be torn apart, with each local cluster eventually becoming a "bubble" of its own, the other "bubbles" moving forever out of each other's observable universes. Thus each local cluster, collapsed into a supergalaxy, will become an observable universe of its own, with no possibility of matter, energy or information ever crossing from one to another again.



the following retarded concepts would no longer exist:

>console/terminal/terminal emulator or whatever they're calling it this decade

>parsed text for source code

>special snowflake characters such as metacharacters

>dns or anything to do with it, such as hostnames


>having 20 different programming languages which ostensibly have different uses (muh right tool for the right job) but are really just slight variations of the same thing

>shell or any type of "scripting"

>hierarchical file systems

>mac addresses

>cache/branch prediction

>millions of lines of the TCB written in assembler/C



>the mind informs reality

I don't have a positive comment so I'll just refrain from insulting you




Anything that is reachable now will continue to be reachable for all eternity. It is not intuitive, but true: consider that the way from the start to the goal can be measured as a percentage, and that the space behind you expands too. The percentage of the way you have traversed will continue to increase, and what is more, you will accelerate in terms of %/t the closer you get, because more of the expansion will be behind you, so you are guaranteed to reach 100% and therefore the goal at some point.



You look at it from a flawed perspective. Space always expands "away" from you; it makes the distances you have to cover larger, never smaller. At some point, things far enough away are carried away from you by the expanding space faster than you could possibly travel, faster even than light. As said above, that is the reason why the observable universe is a sphere-shaped portion of the universe smaller than the whole of it (how much smaller we will most likely never know, unless we discover some strange laws of physics which we currently know nothing about).

If you are still skeptical, try the commonly invoked model of a balloon with dots on it which is being inflated. As you move across its surface, the distances to all dots around you grow constantly larger. If you want your distance to a certain dot to remain constant, you have to be moving towards it at a certain speed to make up for the growing distance due to the inflating balloon. If we assume that your maximum possible velocity is, say, 1 inch per second, then there will be some dots far enough away from you which become more and more distant anyway, because they move away from you faster than 1 inch per second as the balloon is inflated.



The relative percentage of the way already covered does not matter; what matters is the absolute distance still to cover. The farther an object is from you, the faster the expanding space carries it away from you. If the absolute distance still to cover keeps growing faster than you could ever move (and the speed of light is such a barrier - practically attainable speeds for massive objects are of course still lower than that), then you necessarily cannot ever reach that object (unless, of course, the expansion of space were to plateau at some point and then reverse, becoming a contraction which reduces absolute distances between objects over time).



>The percentage of the way you have traversed will continue to increase

Only if the speed the target moves away from you never exceeds your maximum attainable speed. If it does, the percentage of the total effective distance possible to cover will approach a certain limit.
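This back-and-forth is essentially the classic "ant on a rubber rope" problem, and which side is right depends on how the expansion grows over time. A toy numerical sketch (plain Python, all constants arbitrary): with a linearly growing scale factor the traveler covers unbounded comoving distance, so any fixed target is eventually reached; with exponential (accelerating) expansion the covered comoving distance converges to v/H, a hard horizon:

```python
import math

def comoving_distance_covered(scale, v=1.0, t_max=1000.0, dt=0.1):
    # A traveler moving at proper speed v covers comoving distance
    # at rate v / a(t); integrate that with a crude Riemann sum.
    x, t = 0.0, 0.0
    while t < t_max:
        x += v / scale(t) * dt
        t += dt
    return x

H = 0.1
linear = comoving_distance_covered(lambda t: 1.0 + H * t)
expo = comoving_distance_covered(lambda t: math.exp(H * t))
print(linear)   # grows without bound (like log t) as t_max increases
print(expo)     # converges to v/H = 10: a horizon
```

With a(t) = e^(Ht), the integral of v/a(t) is finite (v/H), so targets beyond that comoving distance are never reached, which is the accelerating-universe case argued above; with linear expansion the integral diverges, so any target is eventually reached, which is the case the "reachable for all eternity" post implicitly assumes.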


1. Only supports Lisp, Haskell, one Go-like language to replace Java, one Pythonic language that is better than Python

2. A better Jupyter Notebook with languages from point 1, for all the data science and common programming needs.

3. A better reStructuredText, CommonMark or Org-mode that compiles to HTML and does not look like Reddit faggotry.

4. One standard SQL infrastructure with better, simpler English, extra features, and DBs that can be decentralized

5. Shell language based on Python or Go. Bash is a hell to use, and is not flexible enough to do tougher jobs.

6. IPFS and Context Dependent Chunking for everything P2P, only keep BitTorrent for private trackers and JoyStream.

7. An alternative to TOR and I2P that is easy to host, such that anyone can host an exit node without worry.

8. Ethereum alternative that does not have a broken Pay2Code system, combined with points 4 and 5.

9. RISC-based no-Intel computers with x86 emulation for compatibility, GPLv3 hardware declared by government.

10. Printers, scanners, microphones, cameras, and "gaming peripherals" that actually work on Linux.

11. Linux, but with a more modular design than systemd, and lighter libraries like BSD's.

12. Matrix to replace TeamSpeak, Mumble, Discord and other Instant Messaging and VoIP applications.

13. A better looking GNUSocial (Mastodon without cancer) that replaces Twitter and Instagram.

14. Brave browser that can handle more tabs, with more diverse add-ons and fewer bugs than current versions.


File: 9bff7ba81ebec00⋯.jpg (14.57 KB, 272x300, 68:75, tech-computer.jpg)


File: 23e4783b8369428⋯.jpg (72.8 KB, 492x500, 123:125, kr.jpg)



Add RISC and UNIX to that list.

    Look, those guys at berkeley decided to optimise their
chip for C and Unix programs. It says so right in their
paper. They looked at how C programs tended to behave, and
(later) how Unix behaved, and made a chip that worked that
way. So what if it's hard to make downward lexical funargs
when you have register windows? It's a special-purpose
chip, remember?

Only then companies like Sun push their snazzy RISC
machines. To make their machines more attractive they
proudly point out "and of course it uses the great
general-purpose RISC. Why it's so general purpose that it
runs Unix and C just great!"

This, I suppose, is a variation on the usual "the way
it's done in unix is by definition the general case"



You ever used EXEC.EXE?

there's a reason the UNIX haters died out



My ideal CPU would be based on super-intelligent bacteria that use lasers to relay information to each other, neural-net style, but with transistor-like functionality as well.

And the medium they do this in is home-brewed beer. The alcohol keeps other bugs out and they use the calories for energy. This needs to be replaced about once a day(12 fl oz) on average but will decrease if your computational needs increase.

Beer Bottle Computing.



Attach sensors to everyones dick and use the cummies for computational purposes



>>what is D-wave

A scam


File: c3c82461c8d7cd3⋯.gif (2.53 MB, 384x288, 4:3, mandlebrot.gif)

File: fd6e0e0af785d3d⋯.jpg (97.42 KB, 937x515, 937:515, neuromorphic computing.jpg)


This. Would this theoretically be classified as a "3D computing" structure? How can we make operations that are relative to a bit's previous state? Considering that neurons in the brain become active, inactive, and removed (1, 0 and -1), the brain "learns" from the death of a neuron, which impacts future neurons' properties. Neural networks already achieve this virtually, but often at random, with painfully slow improvements due to uncorrelated weights. I've always been intrigued by Mandelbrot and Julia sets because they "turn a 2D plane into 3D", with time being the 3rd dimension. The number 3 might somehow be linked to evolution as a floating point where 2 is a mere constant (for the sake of oversimplification: 1 / 1 = 1 [non-recurring], 1 / 2 = 0.5 [non-recurring], but 1 / 3 = 0.333... [recurring]; further calculations would just be the addition {or subtraction} of recurring numbers to non-recurring numbers). Correct me if I'm mistaken, but I think the future lies within infinite recurring realms of computation. How to conceive and build this relative computation in practice with a successful outcome is ineffable to me, but interesting nevertheless.
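For reference, the "time as a third dimension" reading of those sets comes straight from the escape-time iteration itself; a minimal sketch of the standard algorithm (nothing here beyond the textbook definition):

```python
def escape_time(c, max_iter=100):
    # Iterate z -> z*z + c; the iteration count until |z| exceeds 2
    # (or max_iter if it never does) is the "time" value that colors
    # the familiar 2D pictures of the Mandelbrot set.
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

print(escape_time(0j))          # 100: the origin stays bounded (in the set)
print(escape_time(0.5 + 0.5j))  # escapes after a handful of iterations
```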


File: 81bd95b8fac40ef⋯.png (547.41 KB, 4060x2430, 406:243, trits.png)


a trit is about 1.58 bits (log2 3 ≈ 1.585).

im learning the steps to build a ternary computer,

and trying to convert the measurements.

if each symbol on a 56k modem carried a trit instead of a bit, that would be equivalent to roughly 89 kbit/s.
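Checking that conversion (assuming each line symbol could carry a trit instead of a bit, which a real modem's line coding wouldn't just allow): a trit holds log2(3) bits, so 56,000 symbols per second works out to about 89 kbit/s of information:

```python
import math

bits_per_trit = math.log2(3)
print(round(bits_per_trit, 3))        # 1.585

# 56,000 binary symbols/s re-used as ternary symbols/s:
equivalent_bps = 56000 * bits_per_trit
print(round(equivalent_bps))          # about 88,758 bit/s, i.e. ~89k
```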



That is very fucking detailed.


This made my day.


Go on.





No machine will ever simulate the human brain.

100 billion Neurons x 200 trillion synapses.

Let's say neurons are Cores and synapses are threads.

(100B) x (200T) = 2×10^25, about 20 septillion calculations per second.

and that's only basic calculation.
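As a sanity check on that multiplication, taking the post's own numbers at face value (whatever one thinks of the cores-and-threads analogy):

```python
neurons = 100 * 10**9        # 100 billion neurons
synapses = 200 * 10**12      # 200 trillion synapses
product = neurons * synapses
print(f"{product:e}")        # 2.000000e+25, i.e. 20 septillion
```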


Yes, you can store more information in a trit than in a bit, and compression is easier on trits than on bits.

computers would be faster, hold more storage and use less electricity.
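There is a real result buried in this claim: "radix economy". If the cost of representing N is (number of digits) × (symbol values per digit), the continuous approximation b·ln(N)/ln(b) is minimized at b = e ≈ 2.718, making 3 the most economical integer base. A quick check (Python):

```python
import math

def radix_economy(base, n=10**6):
    # digits needed (continuous approximation) times symbols per digit
    return base * math.log(n) / math.log(base)

for b in (2, 3, 4, 10):
    print(b, round(radix_economy(b), 2))
# base 3 comes out lowest among the integer bases
```

Whether that theoretical edge survives real circuit design is exactly the part being disputed further down the thread.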



the brain is doing that many calculations at only 20 watts.



Main problem: everything from data structures to compression would need to be rebuilt from the ground up (and much of it may prove less effective than its binary counterpart)



actually no, ternary is backwards compatible with binary. you are pulling facts out your ass about it being less effective. there hasn't been any real study or improvement since the early ternary computers of the 1950s.

binary only has 16 possible two-input logic gates, while ternary has 19,683.
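The gate counts can be made exact: the number of distinct k-input gates in base b is b^(b^k), since each of the b^k input combinations can map to any of b outputs:

```python
def gate_count(base, inputs=2):
    # Each of base**inputs input combinations maps to one of `base` outputs.
    return base ** (base ** inputs)

print(gate_count(2))   # 16 two-input binary gates (AND, OR, XOR, ...)
print(gate_count(3))   # 19683 two-input ternary gates
```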



This is because it is more difficult to design circuitry that deals with three signalling levels. Two-level signalling is the simplest, most cost-effective form of electronic signalling for humans to build.



It'd look remarkably like what it does today, except all of the standards would be slightly different.

Computing conventions that we have today (CPUs, RAM, storage, PCIe bus, text-based interfaces, GUIs, "Web", etc.) have been retained because they're pretty much the best compromise between capability and cost of implementation that suits the most use cases.
