/tech/ - Technology

File: 53eac902e05dd53⋯.png (2.82 KB, 200x200, 1:1, questionmark.png)


Bring all your hardware, software and other troubles here.



> my 500G library on SSD is scanned and updated in a few seconds.

Bull fucking pussy it is, this shit takes a minimum of 5 minutes spinning its wheels.




At that point install cmus.



Again, you probably have not properly configured the database. If you're using SQLite then of course it's going to be slow.

With a real database, after the first import of the collection, an update of the collection takes less than 10 seconds to run.




No, I mean the entire software repository hosted on an onion address. I want to install apt packages by downloading them from an onion address, because this hides the contents of the transmission and authenticates the server without relying on https certs.



> hides the contents of transmission

It still is using http/https, you're just obscuring the routes.

An SSL connection is based on four keys: a local private one, a local public one (which you provide to the other server), that server's public one (which it provides to you) and that server's private key.

This means the traffic of what you are downloading is already obscured. The only thing TOR would buy you is that folks may not be able to figure out which repository you are downloading from.

The same contents are coming over the same kind of connection between you and the first node. It's not like it's extra encrypted in any way.

So if someone is watching the traffic, it's literally going to be the same traffic with all the same protections; just instead of seeing the destination as repo.distro.com they see some tor node's IP with the same traffic.



>The only thing TOR would buy you is

The problem is that the server will know who you are. This means that they can send you a "special" version of a package, different from the one they send to everyone else.



>It still is using http/https, you're just obscuring the routes.

Not really. Connections to hidden services are end-to-end encrypted with perfect forward secrecy so at that moment plain HTTP doesn't really matter because the very purpose of SSL is built into the protocol.

A .onion repository is desirable because an adversary cannot intercept packages the way it could over an unencrypted HTTP connection, deliver an out-of-date package to the desired target, and potentially prevent him from installing a security update that fixes a remote vulnerability.

Another advantage I can think of is that an adversary cannot mount a passive surveillance attack on users and observe the package transmissions in transit in order to figure out the setup of specific users. For example, a user who downloads pentesting tools is far more interesting to an adversary than a user who downloads the numix circle theme.


I have a C program that's compiled with -std=c99, but I also want to use some POSIX functions (in particular random and srandom). On macOS I just include stdlib.h and that's it, but on Linux the compiler decides that -std=c99 means C 99 and nothing else. Is there a way to "link" against POSIX like with other non-standard libraries?



I don't quite understand the question, but maybe you're trying to use GNU extensions to C99. In which case use -std=gnu99.




Oh, maybe I should get another file manager then. I hate how thunar doesn't remember the different views I have for different folders; I'm constantly changing from tiles to list view. I've accidentally dragged and dropped files into folders before, and there's no ctrl+z.



I think I may see what's going on, you're talking about this warning?

[tim@slimsilver tmp]$ gcc test.c -std=c99 -Wall -o roll

test.c: In function ‘main’:

test.c:10:5: warning: implicit declaration of function ‘srandom’; did you mean ‘srand’? [-Wimplicit-function-declaration]

srandom(time(NULL));

test.c:12:14: warning: implicit declaration of function ‘random’; did you mean ‘rand’? [-Wimplicit-function-declaration]

result = random() % 7 + 1;



[x tmp]$ ./roll

You rolled a 2

[x tmp]$ ./roll

You rolled a 6

[x tmp]$ gcc test.c -std=gnu99 -Wall -o roll

[x tmp]$ ./roll

You rolled a 1

[x tmp]$ ./roll

You rolled a 6

$ cat test.c

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>

int main(int argc, char* argv[])
{
    int result;

    srandom(time(NULL));

    result = random() % 7 + 1;
    printf("You rolled a %d\n", result);
    return 0;
}

So you see, for me this works with c99 or gnu99, but with c99 I get warned for using srandom and random instead of srand and rand.

So either use -std=gnu99 to get rid of the warning, or change srandom -> srand and random -> rand. They're interchangeable for this purpose (random() is the POSIX interface and generally uses a better generator, but for a dice roll either works).



nautilus has control+z support. It also has "Trash" support (for undoing deletes, because it actually is just a move to your trash folder)

nautilus is the file manager that ships with gnome.



What? You need to research how the internet works. It's the same damn encryption either way.

If you're using https or sftp nobody can intercept what packages you are downloading, tor or not. Tor is actually LESS secure in this manner, because at some point in the route you are NOT using https and are using http, so an evil person has more of a chance of reading what packages you're downloading with tor than without it.

SSL is end-to-end encrypted period. TOR uses SSL. HTTPS uses SSL.



The only thing TOR is doing is obscuring the route. There is no "TOR web server" it's the same web server serving the same thing over the same protocols.


You guys have really been fooled into thinking TOR is some sort of magic extra-encrypted completely anonymous thing, haven't you guys?

The US Naval Research Laboratory wrote TOR. There's a reason tor browser is just based off firefox and you can connect to non-onion urls on it the same as onion urls. All it does is obscure the route when you use a .onion address. It's a fancy distributed proxy, that's it.


Obscuring the route isn't even really for your safety, it's for the safety of the endpoint. It makes it difficult to discover where 1234567blah.onion ends by obscuring the request through several hops. That means the drug dealing silk road 9.0s can keep from being taken down by the FBI.

Somebody can still see that you are connecting there if they have all your traffic. But without controlling a large number of nodes they can't find the endpoint server.

It also by that same nature helps prevent any random packet from revealing the destination. They just see you connecting to your first node and getting data from there versus the hidden end service.


Is there a general consensus on whether static or dynamic linking is "better"? I know most distros use dynamic linking, but are there any good arguments for the use of static binaries instead?

Maybe I should start a thread but whatever



Static binaries are more "portable" because the libraries are compiled into the executable. They are almost always a really bad idea though, because they are fricking huge, and you never get any updates or bugfixes to the underlying library without recompiling.

So if you are trying to distribute a standalone app, like some tool you wrote that you want to run on an SGE/UGE cluster or another farm, static makes sense.

Otherwise dynamic makes a ton more sense all around.



With static linking, the libraries are compiled into the executable. Like you include the entirety of libc and libm and atk, gtk, pango. Then you will have a 300 meg executable, but it will run on a system that doesn't have those libs installed.



So in summation, all distros should use dynamic for all things. There is no advantage whatsoever for static except as distribution of an application onto a foreign system where you can't guarantee that the dynamic libs you need will be available or at the correct version.

Dynamic libs are much faster to use, get updates when you update packages, use much less space overall, etc.



Dolphin is objectively the best file manager ever made. Try it.



I really want to be happy but my autism won't let go of systemd.




So you like systemd or you don't like it? It's the standard now, like it or not. Applications ship a systemd unit as their means of integrating with init. If your distro is still writing init scripts for every application because they aren't provided anymore (if they were even compatible in the first place), well.... sorry.

But hey, it's still okay. You can be an autismfag and still appreciate systemd. I am. Being able to go from power button to login shell in 3.1 seconds is possible with UEFI and systemd. Not with SYS-V init scripts. How can you like to be less efficient?
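For reference, "applications ship a systemd unit" means something like this. A minimal sketch for a hypothetical daemon (every name and path here is made up):

```
# /etc/systemd/system/someapp.service (hypothetical example)
[Unit]
Description=Some application daemon
After=network.target

[Service]
ExecStart=/usr/bin/someapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Drop it in, run systemctl daemon-reload, and systemd handles dependency ordering, parallel startup and logging for you.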



i installed arch + openbox and immediately switched back to debian lol


i hear go and rust are the successors to C/C++ but what about D? i have never programmed in my life but D looks like it's been here longer and does the same things



Your options are Gentoo, Void, Artix, Devuan and a bunch of other smaller distros.

Artix is arch without NsaD so you might want to look into that.


File: 316a737f4ffff21⋯.png (77.8 KB, 240x265, 48:53, 316a737f4ffff2194a48dafa42….png)


>lel just submit to shitstormDisease

>like it or not, goy, accept it, it's the standard now!

>writing your own initscripts is HARD!

>what is gentoo

>what is alpine

>what is devuan



> I installed arch and switched back

No you didn't. Don't lie.



Writing your own init scripts isn't hard, but it's specific to every distro in order to properly manage the services.

SYS-V init scripts have no dependency tree and don't really support parallel startup.

I rather liked the old arch (back when /etc/rc.conf existed) init. You'd define a list of services, and prefix the name with '@' to run in background. This was a simple two-level approach, so you could have

SERVICES=(syslog @someapp @otherapp network @netapp @netotherapp @net3app nfs @nfsapp)

etc, so it would start syslog, then in parallel start "someapp" and "otherapp", then wait for 'network' to start before doing the next in parallel, then wait for nfs to start, etc.

The question is: should you have to manage your own out-of-band init scripts for literally EVERY service (meaning you don't get the upstream testing and updates, or the immediacy of installing a new app), give up parallel startups and advanced dependency trees, and give up standard distro-agnostic everything (including stuff like cgroups, performance, etc.), all because.... you don't like a small handful of features of systemd?

Basically, you choose the less-optimized option which has no standards and isn't natively supported and tested by all apps because you heard some other people complain about systemd and you think you should care?

That's some dumb nigger shit. But hey, be a dumb nigger. If that's what you need to be wasting your time doing, nigging out and nigging around in order to be different and incompatible, then do it. You're only limiting yourself.



i literally did.

why did i switch?

1. an update fucked up my font config

2. an update fucked compton's transparency

3. have to use an external package manager to use the AUR

4. main repository is missing a fuckton of programs meaning the AUR is basically required

5. community is full of vim users


By the way, I have written my own custom distro including all the init scripts and literally built every single package. Some very important USA infrastructure still uses my custom distro. It was beautiful and fit me perfectly.

If I redid it today, I would either reimplement systemd such that I could use their unit files and provide systemd-compatible wrappers (so subprocesses and applications can actually use them without patching the entire world), or I would just use systemd base if I didn't have the time or drive to do so.

I would not go back to the old days of init scripts, even though they are familiar and followed UNIX philosophy better. That would be stupid, a waste of time, and a step backwards.



p.s. arch is just a meme



I'd like some FUCKING SOURCES on that



1. Didn't happen

2. Didn't happen

3. There's plenty of scripts out there that provide a unified interface for all things, including aur

4. No it's not. Maybe you didn't have "community" repo added? There are 10,144 packages in the "primary" repos right now. AUR is user-managed, anyone can put anything there.

5. I see, you don't use vim. I didn't realize I was talking to a literal retard this whole time.



1. did happen lol prove wrong

2. did happen lol prove wrong protip: youcant

3. example pls k thx uwu

4. 10k packages lol that's fucking horrible


>he doesnt use emacs

I didn't realize I was talking to a literal retard this whole time.



I'm not that anon but here is my distro progression, as far as I remember:

Ubuntu > Xubuntu > Kubuntu > Ubuntu > Lubuntu > Linux Mint > Antergos > Debian > Ubuntu MATE > Funtoo > Devuan > Void > Gentoo.

I consider myself fairly experienced, especially because around 2 years ago I had some mental breakthroughs and shit that I really think made me a smarter person and made me properly understand many parts of my knowledge. I think after 6 years of distro hopping I'm stopping on Gentoo.

And I'm never going back to Arch. That one I quit after pacman broke the 9999th package. If I am to install a binary distro on a computer I'll install Void or Devuan, or if its use case allows, a *BSD.



There's video of an early version of it that I presented at the first Archcon. You might still be able to find it; it was many years ago.

I started based off archlinux at the time, but within a year of that it was changed to be completely specific. I inherited a fair amount of PKGBUILDs from there, but again, after a time I had to take over everything. It's a stable OS, so using the rolling release repos wasn't going to work for our very specific things.

I designed it with speedy recovery in mind. There was a flash drive with which you pre-built hard drives. When these were first plugged in, a questionnaire would pop up asking a few basic questions, which would determine how the system would then automatically configure / finish installing itself, automatically join the appropriate cluster, and export the appropriate services.

It was designed back before VMs and containers were big in order to have 99.9%+ uptime (as required, we were actually allowed by law no more than 15 minutes of outage per year).

I could keep going on and on but I take it you don't really care and just wanna bitch at me, so here's your (you).



I've been around a while. I remember when "Fedora" first came out and installing that. I've used gentoo and debian, slackware, redhat, centos, all the things.

I've been using arch as my primary for well over a decade now. My system has never been broken by any package. It doesn't work like that; there are all kinds of checks that prevent it. I do believe that you fucked up your system messing around without knowing what you were doing, and an upgrade broke some special unstable link you had engineered at the time. That is possible.

The arch philosophy is that YOU have to understand how your system works. It doesn't hold your hand. It is designed well and as simple as possible. I don't know how long you've been off the horse but arch doesn't even ship with a graphical installer anymore (including curses). You have to know how to format and mount and type pacman -S base --root=/mnt/arch and all that good stuff.

If you know what's going on and how things work it's the best. If you don't want to know that and you'd rather be crippled in order to maintain ignorance, Archlinux isn't for you.


The benefit of not trying to hold your hand is that it DOESN'T break with things like package updates and shit. There's no super complicated layer of network managers and all kinds of dumb shit to support some non-standard file somewhere to configure something or another.

You just configure your network like a sane person. Etc.


File: 101466f9f4132e3⋯.png (357.6 KB, 796x597, 4:3, 101466f9f4132e392d53750c2b….png)


>durr hurr what is openrc

gentoo and arch both use it, btw, and it supports parallel + advanced dependency stuff, without being a creeping blob of interlocking dependencies, binary logs and giant security holes.

OpenRC supports cgroups and is fairly distro-agnostic (same init script works for both alpine and gentoo)

>muh upstream testing and updates

come on nigger, writing a new openrc init "script" for whatever daemon takes 2 minutes and you won't ever need to update it unless the parameters to the daemon change (which almost never happens)
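To illustrate the "2 minutes" claim, here's roughly what a minimal OpenRC script for a hypothetical daemon looks like (names and paths invented; openrc-run supplies the start/stop/status logic):

```
#!/sbin/openrc-run
# /etc/init.d/someapp -- hypothetical OpenRC service script
command="/usr/bin/someapp"
command_args="--config /etc/someapp.conf"
command_background="yes"
pidfile="/run/someapp.pid"

depend() {
    need net
    use logger
}
```

That's the whole file: declare the command, declare the dependencies, done.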

>less-optimized version

yeah, because systemd's 900000 lines of code is "optimized" amirite, and because LE BINARY LOGS make a huge difference in speed



I mean look, use your inferior shit if you want. That's perfectly fine. I mean hell, windows 98 still works AMIRITE?

You're not going to convince me that for my usage there's any viable alternative right now. And if you're all about alternatives because "OMG SYSTEMD SUX LOLZ" meme pushed by microsoft - go ahead. There are reasons 95% of the Linux world switched to it practically overnight. If you don't get those, and you don't want to get those, there's nothing of value in further conversation on this topic.

I'm done with this circle jerk though. Call me a fag and say "I win", hell tell your mom to give you 2 scoops of meatloaf tonight to celebrate, I don't care. I know the difference between my "1st place" and your "participation" trophy.


I wrote the disadvantages of systemd starting here: >>941058 it's not like I don't get them. But I'm able to accept that those couple disadvantages don't make it worth it to abandon the plethora of advantages.

So on my personal system and the thousands of professional systems I manage, I will use the best and most unified / standard thing out there right now: systemd.

Hopefully something viable does come along to replace this with backward-compatible wrapper scripts, but that day has not come.


File: b90f9d2182a88fe⋯.png (285.08 KB, 498x451, 498:451, b90f9d2182a88fe61519e1d01b….png)


>windows 98 still works AMIRITE?

Windows 98 is unironically better software than wangblobs 10.

>meme pushed by microsoft

Why would macroshit criticize pajeeteringware when his attitude and programming style is so similar to their own?

>95% of the linux world switched to it

because it's cancer and once you get one small component of it installed, it will become more and more "convenient" for the distro maintainers to bring the whole cancer in. and then GNOMEshit started depending on parts of it, and then failfucks started using pissaudio by default, etc...

systemd is only good for ubuntu and linux mint users who don't give two fucks about understanding their system or doing advanced configuration. For them, it doesn't matter, because it's still better than wangblows, at least on the outside.


pulseaudio is utter trash, I agree. There is no reason for it at all except to be a burden. The conspiracy of gnome folks to require it was a travesty.



>Anon gives an apposite response

>Pssh... I'll be the bigger man and walk away... because I'm number #1; and you live with your MOM!

I know you only have your daily crywank to look forward to, but please take your self-aggrandizement elsewhere.



I'm in this support thread to give support to those who need help.

It's not the place for long arguments about why Arch Linux and everything it does by default is the best ever, or why someone is too brain damaged to understand it.

I was maybe more into arguing such things 15 years or so ago, but it's not my prerogative anymore. My comments and knowledge on matters stand on their own, but when it delves into more opinion-based, fine. But that's not fruitful for me to argue.

Also, you're really upset at me using colorful language? Welcome to image boards! I hope you enjoy your stay.



>Also, you're really upset at me using colorful language? Welcome to image boards! I hope you enjoy your stay.

No, it was the way you repudiated what he said by bringing up Microsoft/Windows 98, saying "there are reasons" (you didn't expound on these "reasons") -- same with the following:

>If you don't get those, and you don't want to get those, there's nothing of value in further conversation on this topic

The burden of proof is on you, fogey. I don't care how doddering/senescent you are; that isn't my prerogative :^).

Your other post >>941058 made some cogent points; sure. But most of it boils down to boot times and standardization -- neither of which have all that much gravitas (okay, standardization does -- but it's hardly an elegant "standard").



don't change ram timing manually. you want to enable an xmp profile in the bios instead. you may have to fuck with this switch if you don't have an option for it: https://www.asus.com/us/support/FAQ/1015262/ i've never owned an asus board so idk


potplayer plays everything, even realvideo4 shit that everything else spits out


i want to cum in lains mouth



your earlier post died, see >>941837 if you didn't get the (you)



Do you still release security updates for it, or are all these systems vulnerable?


Why would my internet connection only work through a VPN?


>come to see if i got an answer

>i never posted the question

How long can I get away with using an old kernel on Artix? Specifically I'm using 4.9.77-1-lts from the Arch package archive.



You're in China?



>Once you get on archlinux you never switch away.

I did switch away. Having a system easily adaptable through the simple use of USE flags is something I would really miss.



You can't use a VPN if your internet connection doesn't work. It's probably fucked up DNS. Are you able to visit websites in your browser without the VPN? Try pinging a website and see if it gives you an IP. If that's the problem you can set your browser to use https://www.opennic.org/ for now.



>If that's the problem you can set your browser to use https://www.opennic.org/ for now.

I meant to say router, there should be an option for it in its control panel. There's a setting for DNS in your pc's network settings as well.



>going to (((stallman)))'s website

go to instead



Oh huh, I thought 8chan didn't let you use direct IP. Stallman's site is just the first one I could think of that's bare bones and pro-muh freedoms enough to probably allow it. Should have realized works




Gonna try both. Scared to install another file manager though; how does it work? Are any of the old file manager files left behind? Is reverting back to the original as easy as reinstalling it?



Yes, that's it. My problem is, I don't want everything that gnu99 adds, I only want those functions. I know that if you want to use, for example, POSIX threads with C99 you have to link against libpthread (-lpthread), so I thought there might be a library like libprandom or something like that. Are srandom and srand always the same, or is it just that GNU implements them to be the same, but they don't have to be?


File: a559bd51bbc8c06⋯.jpg (80.81 KB, 640x480, 4:3, 14.jpg)

I know my C pretty well, and now I started to know Python. I know how to write procedurally.

But I feel like I'm lacking the whole OOP way of thinking. What book could fill me in with the concepts the quickest?


File: e1ab4c0f6f512b4⋯.png (36.6 KB, 365x186, 365:186, Screenshot_20180713_011647.png)

What is this shit? It keeps popping up occasionally. I am new to Kubuntu and I don't want to just click "yes" when I don't know what this does. It usually pops up after I have used GPG.



You don't have to remove thunar, it's not like it will waste much space. All you have to do is run this in a terminal:

>sudo apt install dolphin

Installs dolphin


Go to utilities, change fm to dolphin. If it's not in the menu then navigate to its desktop file under /usr/share/applications.


Clean Code book



Huh, it was my DNS. Thanks for the help, m8


File: 261ee8e27f6861c⋯.png (89.34 KB, 961x524, 961:524, Screenshot from 2018-07-13….png)



I settled on decibel-audio-player. I had to install a dozen fucking gstreamer0.10 packages before the fucker would work, but it doesn't require a god damn database just to browse and select music and doesn't shit the bed every time I open it up because the shitty database has to rebuild.



Ah ok. I just don't want extra packages lying around, and I'm scared that uninstalling thunar will remove package dependencies that other packages require.



I'm still learning. I feel like a babby using synaptic, since a lot of pros use the terminal for removing and installing packages.



>remove package dependencies that other packages require.

The vast majority of package managers will not do this and will warn you before doing so. In the case of APT, it will tell you it is removing everything which depends on thunar, which would result in an incredibly long list of stuff. In this case just abort the apt program.


I need to transfer a file containing my passwords (unencrypted) from one machine to another. Obviously any solution involving the internet is out of question. What other choices do I have?

> Use a USB stick

This has the danger of being unable to completely wipe the stick, and I would also need a stick that has nothing important on it that I cannot wipe

> Transfer over WLAN

This would circumvent the need for any external media, but it would also go over the air. Is there a chance that the file could be intercepted? I don't think there is anyone out there trying to grab my passwords specifically, but if someone is just collecting anything that's being transferred they would end up getting my passwords by sheer luck.



Use a male to male audio cable. Use minimodem to transfer the data and to receive it on the other end. It's perfectly viable; just make sure not to set the speed to over ~2000 baud (depending on the length and quality of your audio cable) or you will get corruption.



Alright, I'm messing around in synaptic, and if I were to uninstall thunar it would remove 3 other packages (it doesn't list any more). It seems Thunar's essential to XFCE, as it would remove xfce-places-plugin, which from my understanding is the backbone of the xfce desktop. Alright, I'm gonna install my current distro in a VM and mess around with it without fear of breaking anything. Thanks.




I think I'm a complete idiot: the file is exported unencrypted, but I can just encrypt it myself using PGP. Basically I have the file on machine A and I want it on machine B. B has a public and private GPG key, so all I need to do is encrypt the file on A with B's public key, send it over to B, and decrypt it there with B's private key. Is this correct?



Sure, that's probably a much better way to do it than my autistic method with the audio cable.
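Something like this, with a throwaway key standing in for B's (the key user ID and filenames are invented; on your machines you'd encrypt to B's existing key and skip the keygen):

```shell
# demo keyring in a temp dir so we don't touch the real one
export GNUPGHOME="$(mktemp -d)"

# stand-in for machine B's keypair (yours already exists)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key demo@example.invalid default default never

# machine A: encrypt to B's public key -> passwords.txt.gpg
echo 'hunter2' > passwords.txt
gpg --batch --yes --encrypt --recipient demo@example.invalid passwords.txt

# machine B: decrypt with the private key
gpg --batch --pinentry-mode loopback --passphrase '' \
    --decrypt passwords.txt.gpg > decrypted.txt
```

Only passwords.txt.gpg ever travels between the machines.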



Not to mention it's less gay than using a male-to-male cable :^)


File: f200ee7f42ea4bb⋯.jpg (54.38 KB, 620x350, 62:35, Alpine_Linux_logo_web.jpg)

Is picrelated good for a daily driver?

Looking to switch from my bloated noobuntu that got filled with useless and broken installs from my past mistakes while learning linux.



very gud, try it

its only disadvantage is that it doesn't have as many packages available (so be sure to enable the community repo), so check on their online package list to see if everything you want is available

also, the installer is a little fiddly if you're not used to it.



Does this translate to having to compile from source?

I'm still relatively new to linux, so I'll also have to ask what musl vs glibc is all about and what consequences the choice of using the former has.



>compile from source

You don't have to do that, although you can if you want to.

>musl vs glibc

Musl is an alternative C library. It is basically smaller and uses less memory than glibc, and is compatible with most things. I haven't yet encountered a problem with it, and I have used Alpine daily for over a year.


I want to convert some wavs to opus, they're in multiple directories and have spaces in their names. What's the best way to do it? I want for example "test sound.wav" to become "test sound.opus".

I would normally do something like

for i in $(find . -type f -name *.wav); do ffmpeg -i "$i" -b:a 64k "${i%wav}opus"; done

but it doesn't work with spaces in the name. I could do `find -exec` but I don't know how to make .opus replace .wav automatically.


Dumb video editing question that I can't find an answer for readily:

Is .MTS the same as .TS? Are these file extensions simply synonyms?



for f in *.wav; do ffmpeg -i "$f" -c:a libopus -b:a 64k "$(basename "$f" .wav).opus"; done
The last part is quoted so filenames with spaces survive, but I'm on my (((phone))) at the moment, so test it first.



great, now I can't even install a different OS because when I connect it to a computer I cannot see an SD card anymore.

Thanks fellas.




Alpine vs Sabotage?




> Doesn't work with spaces in the name

Do your loop like this instead:

find . -type f -name '*.wav' |
while IFS= read -r fname
do
    printf "Blah %s\n" "${fname}"
done


This will loop properly without IFS splitting out your spaces.

I left the command as printf so you can see that spaces in filenames are handled properly, but you can throw your ffmpeg in there.



can also use find -print0 and pipe to xargs -0, but that's less flexible for commands that need per-file edits like this extension swap.
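Putting it together for the original wav -> opus question, a find -exec variant where a tiny inline sh does the .wav -> .opus rename via ${f%.wav}.opus (assumes ffmpeg was built with libopus; harmless no-op if no .wav files are found):

```shell
# recursive, space-safe: find hands the filenames straight to sh as
# positional args, so they are never word-split
find . -type f -name '*.wav' -exec sh -c '
    for f; do
        ffmpeg -i "$f" -c:a libopus -b:a 64k "${f%.wav}.opus"
    done
' sh {} +
```

The ${f%.wav} expansion strips the .wav suffix, then .opus is appended, so "test sound.wav" becomes "test sound.opus".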



You see Openbox is literally just a window manager. A user would have to install around a dozen extra programs and manually edit them (usually through config files) to get a functional desktop. BunsenLabs does that for you and it's really nice.



No. Alpine is supposed to be a server distro, not a daily driver desktop distro or even a development distro. It's the kind of thing where you build all your shit in Ubuntu because packages and then copypaste the final product into Alpine in the last stage of deployment.



> It's the kind of thing where you build all your shit in Ubuntu because packages and then copypaste the final product into Alpine in the last stage of deployment.

How does this even make sense? You're going to have the same dependencies on alpine as ubuntu, so building it on ubuntu isn't going to help at all getting it on alpine.


I really, really hate that some digital camcorders record to a particular resolution (say, 1440x1080 4:3) and then change the display aspect ratio to trick players into thinking it's 16:9. I really hate fucking around with SAR and DAR to conform to whatever standards Youtube requires since they have extreme prejudice against non-16:9 resolutions these days. Anyone dealt with this before? Have some tips?



What is your question? You want to convert video to a specific aspect ratio?



I guess I need to add some black bars to my technically 4:3 video because Youtube tries to resize shit unless it's 16:9, but I always forget how and when to set SAR and DAR properly in ffmpeg.



Use a command like this:

ffmpeg -i input.webm -vf "scale=iw*sar:ih,setsar=1,scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" output.webm

The first scale squares the pixels (applies the SAR, which handles the camcorder's anamorphic trick), then the second scales to fit inside 1280x720 and the pad letterboxes the difference (so no weird stretching occurs). If you want a different resolution, replace the 1280 and 720 with another 16:9 pair.


pls help

on debian ur choice of browsers:

1. firefox esr

*slow as fuck

*literally takes 3 seconds to load the homepage of duckduckgo when chrome and chromium can do it instantly

2. chromium

*has horrible support for media. i cant get raw mp4 files to play on it

3. chrome

*works just as well as chromium but is slightly faster, has better gtk theme support and supports all media

*is literally like bringing a mini-windows10 on your computer

what do pls



4. Compile your own PGO copy of firefox

See my posts at >>940401



It's the archlinux package format, but you can just download the firefox 61.0 source code and follow the PKGBUILD's prepare, build, and package sections inline. Just remove the references to $pkgdir (because you'd be ./mach install 'ing directly to your system). Or convert it to the debian format. Or there is probably an existing PGO firefox debian package you can build with their tools to get your .deb

This makes it insanely fast and optimizes it for your specific machine: the build runs profiling while rendering a bunch of vendor-provided web pages, then uses that profile to recompile and super-optimize the browser.

It's the fastest and bestest option in my opinion.



Install Chromium. Disable as many botnet features as possible in the settings page. Leave it open by itself without any other program running, with a blank page while running a network analyzer such as tcpdump or wireshark. Block, using iptables (robust) or /etc/hosts (easy), any hostnames which seem suspicious (X.google.com etc.). Then there you go, you have the perfect balance between botnet and convenience.
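A sketch of the /etc/hosts half of that advice (the "easy" option). The hostnames here are illustrative guesses, not a vetted blocklist, and the sketch writes to a scratch file; on a real system you would append the same lines to /etc/hosts itself as root:

```shell
# Null-route suspicious hostnames you spotted with tcpdump/wireshark.
# Writing to a demo file here; use /etc/hosts (as root) for real.
HOSTSFILE=/tmp/hosts.demo
: > "$HOSTSFILE"
for h in update.googleapis.com safebrowsing.googleapis.com; do
    echo "0.0.0.0 $h" >> "$HOSTSFILE"
done
cat "$HOSTSFILE"
```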



This is several times faster than the "stock" firefox package. Very noticeable. There isn't a page that takes > 1 second to render; even full 8ch threads with images are immediate.



chromium is absolute garbage with supporting media formats, even open source ones.

e.g. can't get raw mp4 files to play at all, .gifv can't play, certain random odd things just don't work on chromium and it all adds up to an unpleasant experience.



So open them with MPV. Playing videos in a browser is retarded anyway.



See comments about making firefox fast. All my media plays in firefox just perfect on archlinux, no extensions (for media) except flashplugin ( pacman -S flashplugin ) for flash.

The "firefox is slow" meme is for people who don't know how / won't learn how to compile things for themselves, with PGO.



As an addendum to your post; run it with these command-line options:



Gentoo has a pgo flag for Firefox; but if I already use the custom-optimization flag, will it yield any benefits? Also, should I use the pgo flag for gcc?



Pale Moon works fine.


I need to hire a penetration expert (no homo) haxor.

Where should I find one online? Any here?



Sent :)



Yes, PGO is a huge bonus versus just optimizing it for your flags.

It builds a copy of firefox then uses that copy to render a whole bunch of pages. While it renders those, it generates profiling information about everything: which functions are called how many times, how likely each conditional is to be true or false.

If a conditional is optimized for the branch it's most likely to take, that saves a handful of instructions. Your browser evaluates thousands of these every second.

That's just one optimization, it can determine where inlining makes sense versus not inlining, optimizing hotpaths, etc. etc.

It's huge. You should also profile your gcc; you'll get about 30-140% faster compiles with a profiled gcc optimized for your machine than with one that is not, depending on the workload.


Learning about basic makefiles.

I want the basic folder structure:


src/ [all .c files]

include/ [all .h files]

What should my makefile look like so that it replaces the following command?

[code]gcc main.c -o run && ./run[/code]



I forgot to mention that after it collects that information it uses it to recompile firefox with all that knowledge in hand. It's noticeably faster.



So makefiles are about building a dependency tree.

So if you have an executable "myexe" which is the combination of "file1.c" and "file2.c", your makefile will look like this:

CFLAGS=-O0 -ggdb3

myexe: file1.o file2.o

<tab>gcc ${CFLAGS} file1.o file2.o -o myexe

file1.o: file1.c myexe.h

<tab>gcc ${CFLAGS} file1.c -c -o file1.o

file2.o: file2.c myexe.h

<tab>gcc ${CFLAGS} file2.c -c -o file2.o

Now, when you modify file2.c it will know to recompile file2.o and the executable due to the dependency tree. If you change myexe.h it recompiles everything. Got it?



It goes like

<OUTPUT FILE>: <FILES IT DIRECTLY DEPENDS ON (needs recompile if they change)>

<tab>command to generate file

<tab> You can have multiple lines 1 tab indent


It makes sense to define variables too; I define variables for CFLAGS, LDFLAGS, DESTDIR (which can be overridden), and PREFIX (which can be overridden)

Also I put my dependencies in a list like

MYEXE_DEPS=file1.o \
    file2.o


And reference it like

myexe: ${MYEXE_DEPS}

<tab>gcc ${CFLAGS} ${MYEXE_DEPS} -o myexe

To simplify what I have to change, etc



Thanks for your helpful response. Now I just need to recompile every package I have installed...



You just need to PGO the core for max autism optimization. I do gcc, firefox, xorg-server (and related), mesa, glib2, gtk3, python3, python2, databases I use, bash, gnome-terminal (I apply a special patch to this so I can highlight with the mouse and it copies to clipboard), a few other libs here and there.

For profile generation I use:

-fprofile-generate -fprofile-update=prefer-atomic -Wno-error

And for use I use:

-fprofile-use -fprofile-correction -Wno-error

If you don't profile like that, you can corrupt your gcda files.

If it can run as root (like the x server), I run the app once and close it, then in the source dir I run these scripts:

tmp]$ cat `which findgcda`

#!/bin/bash
if [ $# -eq 0 ]; then
    DIR="."
else
    DIR="$1"
fi
find ${DIR} -name '*.gcda'


tmp]$ cat `which prepgcda`


chgrp users . -R

find . -type d -exec chmod 2775 '{}' '+'

find . -type f -name '*.gcda' -exec chmod g+w,o+w '{}' '+'

find . -type f -name '*.gcda' -exec chown MY_USERNAME '{}' '+'

where MY_USERNAME is my primary user. This prevents permissions issues when either root or my user could be generating profiling information.

I generally profile by installing the profile-generate packages and letting them run for 2 days, making sure to stop and restart the apps (and reboot the computer) a couple times in between, since profiling information is written on program termination. Then I recompile with profile-use.

Except for firefox and gcc, which have internal profiling implementations (firefox profiles against xvfb, and gcc's profiledbootstrap profiles gcc against itself)

Hope this helps!!


File: 47604ad0dca989e⋯.pdf (445 B, gnome-terminal-clipboard-s….pdf)

Here is that patch for gnome-terminal btw. It makes it so that if I use the mouse to select text on the screen it puts it in the clipboard (can paste with control+insert). This is useful since it's often faster (especially with double-click to select entire words and triple-click for entire lines) than scrolling in screen, setting a mark, and dragging to the end.

--- a/src/terminal-window.c 2018-05-03 00:28:23.000000000 -0400

+++ b/src/terminal-window.c 2018-05-03 00:30:34.000000000 -0400

@@ -1510,6 +1510,9 @@

can_copy = vte_terminal_get_has_selection (VTE_TERMINAL (screen));

g_simple_action_set_enabled (lookup_action (window, "copy"), can_copy);


+ if ( can_copy && VTE_IS_TERMINAL(screen) )

+ vte_terminal_copy_clipboard_format( VTE_TERMINAL(screen), VTE_FORMAT_TEXT );


static void

Also attached. Apply with cat gnome-terminal-clipboard-sync.patch | patch -p1 in the root of the gnome-terminal source directory.

This is behaviour similar to how putty works with a shell, and how gnome-terminal used to work for us older gnomefags.

I had to give it a .pdf extension because 8ch won't let me upload .patch. It's not actually a pdf; just rename the file



>manually applying patches


Just throw the patch file in /etc/portage/patches/x11-terms/gnome-terminal/ and then reinstall gnome-terminal


This reminds me, if you're going to profile gnome-terminal it also helps to profile bash and vte3 at the same time. libvte provides the "terminal widget" which is the majority of gnome-terminal (gnome-terminal is just the shell around that vte3 widget), and bash is the shell running inside the terminal.

Also I forgot to say: profile-build binutils and coreutils too, since those are used by almost everything. Pick the biggest targets you can think of (used by the most things). cairo and pango are other great ones to do in a gnome environment, gnome-shell as well. You can't profile glibc as far as I have ever been able to tell.

Also, you can get a lot of power by using lto (link time optimization).

This requires adding -flto=jobserver -fuse-linker-plugin to both CFLAGS and LDFLAGS.

This will enable LTO (-flto) and schedule it through the jobserver (if available, like in a parallel build in a Makefile). -fuse-linker-plugin changes the output of the object files to something the linker can then work with and optimize to produce the final product.

LTO works in most things, but there are a few projects out there that won't compile with it. I used to compile with my default flags (generic -march=native -O3 and spiffy LDFLAGS; I've posted the ones I use earlier in the thread) and have a sourced function "set_cflags lto" to add the LTO flags. Now I default to LTO cflags, and on the very few projects which fail to compile I add "set_cflags nolto" which removes them. What I'm saying is LTO these days works for most things, and can be a dramatic performance increase (nowhere near what PGO can give, but often more so than -O2 vs -O3)



I don't manually apply it, it's part of my PKGBUILD for it. I wrote that patch, it's my own work so it's not in the default set.

My instructions were generic so that people can apply them to however they want to build that package, even if they just plan on make install 'ing it directly.

But thanks for the tips for all the gentoofags to optimize their terminal. It makes it quicker for them to curl the archlinux ISO ;)


What I do is create a tar of all the gcda files when I'm done profiling, to store them for later. That way if my distro puts out a pkgrel bump (like changing a path or something), I can reuse the same profiling information and not have to regenerate it all. It even works across minor version changes: if the code no longer matches, the stale profiling information just won't be used; otherwise it will.

Here's my "create a tarball" function:


# vim: set ts=4 sw=4 st=4 expandtab :

move_to_bak() {
    LOCATION="$1"
    if [[ -e "${LOCATION}" ]]; then
        mv -f "${LOCATION}" "${LOCATION}.bak"
    fi
}

echoerr() {
    echo "$@" >&2
}

printferr() {
    printf "$@" >&2
}

if [[ ! -e "../../PKGBUILD" ]]; then # TODO: Check "src" in path
    echoerr "Should be run in package's source directory"
    exit 1;
fi

for fname in "gcda.tar" "../gcda.tar" "../../gcda.tar"; do
    move_to_bak "${fname}"
done

tar -cf 'gcda.tar' $(find . -name '*.gcda')
RET=$?

if [ $RET -ne 0 ]; then
    printferr "Failed to create gcda.tar in current directory. Exit code: [%d]\n" "${RET}"
    exit 2;
fi

ln gcda.tar "$(realpath ../gcda.tar)" || echoerr "Failed to copy gcda.tar to `realpath ../gcda.tar`"
ln gcda.tar "$(realpath ../../gcda.tar)" || echoerr "Failed to copy gcda.tar to `realpath ../../gcda.tar`"
echo "Created gcda.tar"

So I run that within the src/package folder and it pops gcda.tar into the root PKGBUILD directory. If there's already a gcda.tar there, it renames it to gcda.tar.bak

I then have a function I export through /etc/makepkg.conf

set_cflags_do_profile() {
    if [ -f "${startdir}/gcda.tar" ]; then
        tar -xf "${startdir}/gcda.tar"
        set_cflags_profile_use $1
    else
        set_cflags_profile_generate $1
    fi
}



And in my PKGBUILD at the top of the build() function I add:

set_cflags_do_profile

( or set_cflags_do_profile nolto if I need to kill LTO compilation, see earlier comment ).

This will check for the existence of that gcda.tar and, if it's there, extract it as part of the build steps, ensuring my profiling data is present so that when I update minor versions I can keep the same profiling.

If the gcda.tar file is absent, it sets my cflags to the profile-generate ones, otherwise it sets it to profile-use one.

If any other archfags want access to my sweet makepkg.conf extensions which make PGO builds a cinch (as well as one-step command-line fetching of PKGBUILDs and related files from the primary repos or the AUR, or a one-step fetch+build+install), let me know and I'll provide it.



Your posts have made me really interested in pgo. So, I rebuild gcc with pgo, and then use it to rebuild the toolchain (binutils+glibc), add -fprofile-use -fprofile-correction -Wno-error to my C/CXXFLAGS; and then recompile everything - correct?



You cannot profile glibc. Almost everything else in the world you can.

So for binutils and coreutils for example, you first edit your CFLAGS and LDFLAGS to include:

-fprofile-generate -fprofile-update=prefer-atomic -Wno-error

Then you compile the package and install it. Reboot your computer. Run some of the apps that use it, and then I recommend running the "prepgcda" script (depends on findgcda) I posted above, as root (or with sudo). This will prevent permissions issues between you and root generating this profile information (otherwise all your profiling information won't count and only root's will).

After you profile it for a day or so (for binutils and coreutils you can just compile a bunch of stuff, but for gtk+ for example you'll want a day or two of usage) then you should create a tar file of the .gcda files (see my mkgcdatar script which depends on findgcda) to keep it safe.

Either update your build to extract that gcda.tar prior to running configure, or run "make distclean" / "make clean" in the package directory and use the "use existing directory flag", whatever, you just need to ensure the .gcda files are in place when you recompile it. They will be in the build directory, so wherever emerge is building the package will contain the gcda files.

Recompile the package replacing the earlier CFLAGS I mentioned with:

-fprofile-use -fprofile-correction -Wno-error

Install the resulting package and boom you have PGO.

Also, running "make check" or "make test" after compiling the -fprofile-generate version is a good way to get profiling information on a lot of the code (as much as their unit tests cover)



As far as I can understand, PGO makes the act of compiling faster, but it doesn't change the speed of the generated code.


File: a5422a0bdad2390⋯.pdf (884 B, utils.tar.gz.pdf)



Here, I've tarred up the tools that I mentioned in my posts. Just extract this and copy them to /usr/bin. You'll need to edit "prepgcda" and replace MY_USERNAME with your "primary" username (because you run that script as root/sudo)

The file is named utils.tar.gz.pdf because of 8ch limits, just remove the .pdf extension ( mv utils.tar.gz.pdf utils.tar.gz )



> As far as I can understand, PGO makes the act of compiling faster, but it doesn't change the speed of the generated code.

This is absolutely NOT true.

You probably spent a couple seconds googling right?

This is only true for the GCC package itself. Compiling gcc with profiledbootstrap makes gcc compile code quicker, but it does not generate any different code.

Every other package gets faster at runtime, because the compiler generates better code for it using the profiling data.



Like, if that didn't make sense, profiling makes the application RUN BETTER. It doesn't change what the application is doing.

So compiling gcc with profiledbootstrap means gcc will be much more efficient (30-150% in my testing), but "gcc test.c" will compile test.c the same way whether you profiled gcc, compiled gcc with -O0, or compiled gcc with -O3. Get it?



Yeah, I meant for GCC. See the post I was replying to.


Using the profiling instructions will actually make things take a lot longer to compile, especially when you consider you have to compile everything twice (once for the profile-generating version and once to use that profile to create the optimized version)




Thanks, I really appreciate the help. Take care.



Oh I get it, I didn't interpret him the same way you did. I thought he meant recompiling all the stuff he plans to profile via the instructions I have given. But he could have meant that profiling would change the way programs work. It doesn't; it just makes them much more efficient at doing that same work.



np, hope it all makes sense. Once you do it like three times it'll all click and become second nature.



I recommend starting profiling for "binutils" "coreutils" "make" "bash" (as in, install the -fprofile-generate packages).

Then, do the profiledbootstrap compile of gcc (gentoo probably has a "pgo" mode for gcc -- this is what you want to use.) Gcc profiles by compiling itself with -fprofile-generate and then using those gcda with -fprofile-use and recompiling itself. You don't need to adjust your CFLAGS for the gcc compile.

This will invoke all of the binutils, coreutils, bash, and make functions a bajillion times. By the time the gcc profiledbootstrap compile is complete, you will have generated enough profiling information to go back and do the -fprofile-use recompiles of the toolchain packages I mentioned (binutils et al.)

Enjoy your new super speed!


File: 89484d1a660a4f7⋯.pdf (18.48 KB, enable_additional_cpu_opti….pdf)

Since we are talking about optimizations, I figured I would share the attached patch as well.

You can patch this onto your kernel code and it unlocks a ton more optimization options (including -march=native)

]$ zgrep NATIVE /proc/config.gz


Combine this with the -ck kernel patchset


For the best kernel results possible.

Again, the file has a false .pdf extension for 8ch filter reasons, just rename it and patch your source!



Alright, thanks again! I'm really surprised more people don't talk about this. (First I'm hearing of PGO, to be honest.) I'm going to prepare a 10+ MB text file to use cat/grep/sed/awk on; I'm excited to see if there is a real, tangible difference before and after. Wish you the best.



I doubt you'll see much of a difference there with something so small.

Also, that workload is limited by I/O. You can reduce that by putting the file in /dev/shm (which lives directly in memory), but those programs are very straightforward, so there aren't many conditionals to optimize. I wouldn't expect to see much difference.

Here's something better you CAN do though:

1. time how long it takes to build the linux kernel package

2. Create the profile-generate versions of binutils coreutils bash make

3. Install the profile-generate versions of those

4. Build the PGO version of gcc (make profiledbootstrap)

5. Install that PGO-optimized version (just running make profiledbootstrap does all the profiling, remember this one isn't two-stage)

6. Rebuild binutils coreutils bash make with -fprofile-use and the instructions above

7. Rebuild that kernel and time the difference

You should see a HUGE difference in the timing here (assuming you are not loading the machine at the time, which can skew the results. Maybe run this test at night or while you eat or something).

Once you're convinced with these hard numbers you can go ahead and PGO the more subjective things like gtk+ and gnome-shell and whatnot.

Enjoy <3


And it's not talked about much because it's not just a single command, it takes a little bit of effort and knowhow to profile things.

Things like firefox on windows are built with PGO; that's why windows firefox was seen as "faster" than linux firefox by all the noobs, because the distros didn't ship a PGO version.

Same with chrome vs chromium -- google compiles chrome with PGO and there's a very noticeable speed boost.

Those who know about PGO are probably gentoofags or archfags who already know a ton, so we don't hang around asking questions on stackoverflow or whatever where it might be more visible.

Like I did a test of a profiled version of redis vs non-profiled, it was 50% quicker! Things like that with a lot of conditionals and a lot of code are the biggest advantage. Things that are mostly small single operations and/or I/O based you'll see less of an improvement. "cat" for example you won't see much of one, but even if it's 5% faster that adds up over time.

Experiment. If it's worth compiling, it's worth PGOing. Since you're on gentoo I think you compile everything already, right? No reason not to PGO it.

Note that the -fprofile-generate version (first stage of profiling) will be slower than normal because of all the extra profiling calculations going on. But that's only a stepping stone onto the -fprofile-use version. :)


And don't forget to run "make test" or "make check" after compiling with -fprofile-generate if it's available for a lot of "free" code coverage and profiling info.



I see - that makes sense. Still, I'm very surpised that the benefits of PGO are unsung.



Have they given any reasons as to why they don't ship Firefox/Chromium with PGO?


File: 8ef90e8c595038a⋯.png (30.83 KB, 773x528, 773:528, Selection_001.png)

File: 8264ee7a721307f⋯.png (34.48 KB, 774x524, 387:262, Selection_002.png)

File: bf50e731b2f7cd3⋯.png (30.34 KB, 776x526, 388:263, Selection_003.png)

How do I re-mount my windows partition? I think I unmounted it while installing linux mint to my external HDD with GParted. Here are some screenshots from GParted. As far as I'm aware there are two partitions, and both of them are broken/unmounted, or something. First pic is my WD 640GB drive (I think this one had the windows I was using on it), second is a seagate 3tb drive, and the third is my 1tb external drive with mint on it.

Any help you could lend this ignoramus would be nice.



I think I remember it being a licensing thing where you couldn't distribute a profiled firefox and still call it "firefox", but maybe I'm confusing that with something else.

It's also somewhat system-specific; that is, you are profiling it for your own machine. And profiling goes best when you also optimize for your specific system ( -march=native ).

Archlinux for example ships the PKGBUILD (the file which defines how to build a package) with the code to do a PGO build in there, but it's commented out (so if you download the pre-built one you get non-PGO, but if you uncomment those lines you'll get a PGO build).



I don't think you mean "mount" -- that would be to assign the drive to a folder on your computer. In windows terms, setting your "z drive" to be another hard drive would be "mounting" that.

Are you saying you wiped the partition table and now you don't have the windows partition anymore?



Or are you trying to mount the files under linux so you have access to them? Is it not booting anymore? please be more verbose on what you're trying to solve I haven't a clue.



Oh, btw, on "mkgcdatar" if you downloaded my script earlier, you need to comment out these lines:

if [[ ! -e "../../PKGBUILD" ]]; then # TODO: Check "src" in path
    echoerr "Should be run in package's source directory"
    exit 1;
fi


Since you're not using archlinux (prefix them with # ); otherwise it will abort saying you're not in the right directory (it is looking for the PKGBUILD).

It will create the gcda.tar in the current directory, the parent directory, and the parent's parent. This is because in archlinux it's laid out like so:

/usr/src/arch/packagename - Contains the PKGBUILD and whatnot

/usr/src/arch/packagename/src - The "source dir"

/usr/src/arch/packagename/src/PackageName-Version - The "root" of the source code.

It is meant to run in the final one (root of the source code). You can run "findgcda" to see where the .gcda files are going (some packages create a separate build dir in src/ instead of the package source).

If you want to change this, modify these lines:

for fname in "gcda.tar" "../gcda.tar" "../../gcda.tar"; do
    move_to_bak "${fname}"
done


Each filename given on that "for fname in" line is where the gcda.tar file will be copied to (actually uses a hard link so only one copy exists on the system to save space and keep them attached).

To keep it in current directory only, change that line to

for fname in "gcda.tar"; do



Oops, wrong lines referenced. I meant these:

ln gcda.tar "$(realpath ../gcda.tar)" || echoerr "Failed to copy gcda.tar to `realpath ../gcda.tar`"

ln gcda.tar "$(realpath ../../gcda.tar)" || echoerr "Failed to copy gcda.tar to `realpath ../../gcda.tar`"

will create the hard links. By default it creates ./gcda.tar (from the package source root) and then hard links ../gcda.tar and ../../gcda.tar. Comment those lines out if you don't want the ../gcda.tar or ../../gcda.tar copies. I don't remember how the gentoo build system is set up, but maybe it is similar to the arch system (described in my previous comment), in which case you only need to comment out the PKGBUILD check ( [[ -e "../../PKGBUILD" ]] )








>Are you saying you wiped the partition table and now you don't have the windows partition anymore?

I don't recall doing that? How would that have happened?

I don't think I have much of a clue either; this is my first time installing anything besides windows. I don't really know what I did. When I was installing mint I don't recall editing any drive besides the external HDD I was installing linux to. When I try to boot windows, it gives me an error which I didn't bother to write down, and tells me to insert the windows installation disk to repair it. (I'm uploading some files to mega at the moment so I'll write down the specific message in a few minutes.) Although I've torrented several windows 7 ISOs, none of them seem to help, all giving errors which I also failed to record. I did mount these in mint using furiusISO so that may be the problem; I will try mounting the repair disk ISO in a windows virtual machine.

Would my best option be to save any data I need to, wipe the 640gb drive and reinstall windows?



How are you booting into Windows?

Are you using BIOS or UEFI based boot manager?



Windows only supports booting if it is the first drive (it should show up as /dev/sda in linux). It needs to be on the primary master channel.

If you've added other drives to the system, likely you have messed this up.

You can "fake windows out" with some code in the bootloader grub, but this is a more advanced topic. To confirm for now, unplug your other drives leaving only the windows drive in. If you can boot to windows, that is the issue.



I've used both grub and my bios boot selection.


I'll try that after these files upload, at 90%

Also I have 5 drives installed if that is of any help.



I'm almost certain that is what is going on. After you confirm that is the issue I will help you work on changing the grub commands to work around this issue.

Good luck and welcome to Linux. I'm sorry you picked an ubuntu derivative.

Don't any noobs use Fedora anymore?



Feel free to use my patch. I love it, I can't believe they got rid of this feature what 2 years ago or so?

As a bonus, it doesn't copy it to the primary "X11" multi-application buffer, rather it copies it to the local GTK buffer. That means that you can copy-and-paste within gnome-terminal itself, without overwriting your global clipboard contents for other windows.

If you want to copy the highlighted text to the global clipboard, right click + copy will go all the way.



Mint was recommended to me by two of my /radcorp/ fags.

Maybe they were trying to sabotage my hosting of barotrauma.



Please take note of your before and after times if you do actually use my list (of building a profiled binutils/gcc toolchain). I'd be interested to see how much benefit you personally get



It is a "distro for noobs" but it's way too much so IMO. It's like getting a bike with training wheels, but you can't ever remove the training wheels.

I assume you are interested in learning about your system: working in the terminal, and understanding how your system works in general. Basically stuff that has no functional equivalent in windows and can change your entire patterns of operation for the better.

Mint is "alright" to try out for a bit, but I think once you get the hang of it you should switch over to "Fedora." That's Redhat's community-supported desktop-oriented distribution. Redhat is the "professional standard" when it comes to Linux. Fedora is probably the most well-supported distro out-of-the-box period, is standardized (meaning you can go to stackoverflow for questions and basically copy-and-paste the answers reliably), but also holds your hand a lot.

So it will allow you to not have a shit distro, actually learn some things about Linux, improve your interactions with the machine, and come out with valuable domain knowledge for professional work.

The end-game of course is Arch linux, but that's far from being "for beginners"


As a side note, there used to be a project called "Ark linux" back in the day. Not related to archlinux at all, it was meant to be "The ark" that saved people from windows land.

This was back when gnome2 existed and they tried to emulate a windows system completely on linux. They had a control panel, copies of all the similar control panel items, a default view that was very windows like, noob noob noob.

Back then I was doing support for archlinux in IRC. We'd get folks every so often in there who were actually using ark linux and hadn't a clue. I got banned for a day from archlinux support because of a comment I made like:

Hey guys! This will make your system so awesome by optimizing your filesystem cache:

dd if=/dev/zero of=/dev/hda

Try it out! (This was back before the ATA/IDE layer was implemented as a subset of the SCSI layer, so hard drives were named hda (hard drive a) versus the modern naming sda (scsi device a, or storage device a).) I thought it was justified because anyone who would copy and paste something without trying to understand what it does didn't deserve to be using archlinux. But the other arch folks thought that wasn't a good "public relations" move.

Anyway, I then started trolling the arklinux IRC room demanding that they change their name because of the confusion, informing them I had fucked their mothers, outing them from being fags, etc.

While they did not change their name, they went defunct instead. I credit myself. You guys are welcome. I am a true hero and a martyr.


File: 99ee7570a564b56⋯.png (1.33 MB, 1280x1024, 5:4, arklinux.png)

Here's a screenshot of cucklinux ark linux


Also, another big winner for profiling is the compression stuff.

tar, gzip, bzip2, lzma, lz4

Compile it with profile generation enabled, then go to a folder with a bunch of stuff in it and run:

find . -type f -print0 | xargs -0 gzip -c > output.gz

for gzip, which will run through gzipping every file in dir and all subdirs.

Repeat with each of the compression tools.

It may also be useful to create a big file or copy one (like the glibc package, uncompressed but still tar'd) and run something like

for i in `seq 10`; do
    gzip -c glibc-whatever.tar > /dev/null
done


This will gzip the file "glibc-whatever.tar" 10 times. Do similar things with the other compression apps while profile-generate is set.



Mint is fine, don't worry.


>Mint is "alright" to try out for a bit, but I think once you get the hang of it you should switch over to "Fedora."

I disagree, especially if you don't plan on administering any RHEL-based servers (this includes CentOS). I'm not even sure Fedora uses the same package manager as Red Hat; I think they switched away from yum to something else. Anyway, that point is kind of moot nowadays since systemd is taking over everything. It really doesn't matter that much.

>Redhat is the "professional standard" when it comes to Linux

Redhat isn't just a kernel though. It is actually a complete operating system.

>Fedora is probably the most well-supported distro out-of-the-box period

I'd say Ubuntu and its derivatives now are.

>The end-game of course is Arch linux

Wrong. The end-game for operating systems is never going to be a UNIX-like one; it just sucks too much. But if you are just looking for the best GNU/Linux distribution that exists today, I would say it's Arch.



I suppose all that sounds nice.

Anyways, I tried booting from the main drive with all other drives unplugged, to no avail.

I was given two options: boot from windows 7 (recovered) or windows 7 (recovered). Normally I would choose the second option and it would boot windows, but now it gives me the following error codes after telling me to put in a repair disk (would post pics but mint isn't recognizing my android).





I've been using Linux for 20 years as primary, and professionally for nearly as long.

>>942259 seems pretty new to the game to me.

Yes, of course Fedora and RHEL both use rpm. They now use a different frontend (dnf) instead.

Neither of those has anything at all to do with systemd. Systemd is your init scripts and general system-state and service management. Packages are how you install applications... I'm not sure why you think they are at all related.

Oh wait, yes I do. It's because you use Ubuntu and thus have almost no idea how Linux works or what the individual pieces are. Without knowing you can't take advantage of it. It's like having a power drill and building a structure by smacking nails with the butt of the drill as if it is a hammer.

> never going to be a UNIX like one, it just sucks too much.

Again, your complete and utter inexperience is showing. You don't understand why POSIX and UNIX are important. The only OS that's more than an "interesting hobby" which ISN'T UNIX-derived or POSIX is Microsoft Windows. You're saying Windows is better? Absolutely clueless. Again, that's because you aren't getting 1% out of your system, because you've never taken off the training wheels (impossible to do in Ubuntu, but you can switch to a real distro).

I don't want to argue about this since you seem completely unaware of most anything about software and operating systems, but I do hope one day you decide to stop thinking that you know everything and embrace what Linux can do for you.



So it booted into windows and you got that error message, or you got that error from grub? If grub isn't using a UUID and is pointing directly to a unit (like /dev/sdb), this will have changed. I can't imagine you're not using a UUID, which is why I find this strange.

If you have an android then take a picture of the screen with your phone so I can see what's going on. Also, select the line that says "windows" in grub and press "e"; it should switch to a window with a bunch of commands. Take a picture of this and attach it as well please.

Actually, if you removed the other drives and only left the windows drive, you should NOT have seen grub pop up. Are you 100% sure you left the correct drive in there? Disabling them in the BIOS is not good enough. If that is the windows drive and you still have grub on there, it means you erased the partition table and probably other things on the windows drive, which would be bad. Let's hope that is not the case.



Also, Fedora / RHEL are the most standardized and supported distros around because they're what the vast majority of professional Linux deployments run.

Fedora is more "rolling release" ish than RHEL, in that it releases new "versions" quite often ( twice a year I think? ). Each version has a pinned "base layer" of packages at specific versions, and only security updates are applied to them. This prevents updates from breaking your system / custom applications, and is one of the reasons why it is so popular amongst enterprise deployments.

I learned on Redhat and later Fedora when it came out myself. They are great. They're no Arch Linux, but one can't just jump into arch until they understand the concepts of their system.

Again, this is all real-world experience, including myself and many others I have personally introduced to Linux in my life. This is not some FUD based off memes and Canonical's advertising. You know Ubuntu comes bundled with spyware, right? That's how the parent company makes a lot of its money.

Newbie: I would recommend switching to Fedora, or something redhat-derived, sooner rather than later. Maybe now, since it seems you haven't really gotten started. Ubuntu is as close to Microsoft bullshit as you can get in the Linux world.


File: 2625bf1a2a434fe⋯.png (823.67 KB, 1024x768, 4:3, pics.png)


When I unplugged all drives except the windows drive grub did not appear.



>seems pretty new to the game to me.

I have been using GNU/Linux for 8 years.

>I'm not sure why you think they are at all related.

I never said they were related. I brought in systemd because the way you manage a Fedora machine compared to a RHEL machine is pretty much the same as if you were to compare managing a Mint machine to a RHEL machine. Honestly the main difference is getting used to using yum instead of apt, and it takes only a couple of minutes to learn the difference.

>It's because you use Ubuntu

Incorrect, all my computers run Gentoo.

>have almost no idea how Linux works

I will admit I only have a somewhat basic understanding of how Linux works and its design as a kernel. To most users though I doubt needing to know how the kernel itself is implemented is worth much.

>of what the individual pieces are

Again, I usually just tick off what I know I'll need and am familiar with when I'm configuring the kernel.

>You're saying windows is better?

No. This logic that if UNIX sucks, then Windows must be better, is flawed. While I rank UNIX higher than Windows on the operating-system totem pole, UNIX is still near the bottom.

>Again, because you aren't getting 1% out of your system because you've never taken off the training wheels

and you aren't getting 1% out of your system because you've never set up emacs and are stuck using vi or one of its clones/forks.


File: 8310481c8884a0e⋯.png (2.56 MB, 2244x1380, 187:115, heh.png)





This will be my last reply. If you want to learn, learn. But don't misinform.

> I never said they were related

But you did:

> I think they switched away from yum to something else. Anyway, that point is kind of moot nowadays since systemd is taking over everything. It really doesn't matter that much.

> I will admit I only have a somewhat basic understanding of how Linux works

So why do you feel qualified to tell others how it works?

Configuring a kernel is a good start but is far from knowing.

> No. The logic that if UNIX sucks blah blah blah

I was comparing it to you when you said such.

> I don't use vi

Terrible. How people even get by without record and replay blows my mind.

I hope you get better and more willing.



It appears to me that you have inadvertently nuked your windows system. I don't believe you did this in the installer, because the partition table still seems to be there, but maybe it overwrote it with something else. Anyway, that's a dead route, so let's pretend there's no chance of that and you CAN recover.

It looks like you are booting into the recovery environment and not the windows itself. What may have happened is that the "boot" flag got set on the wrong partition.

Actually, I looked at your gparted screenshots and notice that in fact none of your partitions are marked bootable.

Plug back in your linux boxes and boot into linux. Open up a shell.


sudo apt-get install parted


sudo su

parted /dev/sda

# Type "p" and press enter

Should look something like this:

(parted) p


Disk /dev/sda: 179GB

Sector size (logical/physical): 512B/512B

Partition Table: msdos

Disk Flags:

Number Start End Size Type File system Flags

1 1049kB 135MB 134MB primary ext2 boot

2 135MB 8725MB 8590MB primary linux-swap(v1)

3 8725MB 179GB 170GB primary reiserfs

Confirm that either none of the partitions have "boot" set on them, or that partition 1 (the Recovery Environment) does.

To switch the boot flag on and off use the following:

toggle # boot

Where # is the partition number.

So if partition 1 has the "boot" flag set and partition 2 does not, you want to flip that. Only one may contain that flag. So type:

toggle 2 boot

Then type 'p' again and press enter

You should see "boot" at the far right on partition 2

Now type "q" and hit enter to quit.

Please post the contents of the "p" output of parted before you make any changes and after you have made the change I have requested.
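Side note: once you know the partition number, the same flip can be done non-interactively, which is handy for posting output. The device and partition number below are placeholders; confirm with the interactive "p" first.

```shell
parted -s /dev/sda print          # before
parted -s /dev/sda set 2 boot on  # set the boot flag on partition 2
parted -s /dev/sda print          # after; partition 2 should now show "boot"
```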


please hurry I want to go to bed.



This isn't urgent, im going to bed soon too. Ill try it and post and you can get back to be whenever.



*get back to me

im tired



okie doke. Please try that "boot" thing if I explained it well enough for you to follow along. Otherwise I will be back on in a couple hours.



>But you did:

When I said "Anyways" that means what is to follow does not have any relevance to what I just said but ties back into the original topic.

>So why do you feel qualified to tell others how it works?

I was not telling anyone how Linux, the kernel, works. I was merely commenting about the GNU/Linux ecosystem, which I am qualified to talk about.


Just upgraded from kernel 4.16.8-1-ck1 to 4.17.6-1-ck1. Wow, what a difference! It was great already but it's running even better now. I haven't had a noticeable improvement like this in many versions. Apparently a ton of dead code has been removed, and this was going to be 5.0, but Linus wanted to wait until 4.20 to flip it over (hehe).

So if any of y'all are lazily dragging your feet like me at upgrading -- bite the bullet and go for it!

Also, see my earlier post for a patch that lets you use -march=native on the kernel.



>lets you use -march=native on the kernel

This has also been available by enabling the experimental USE-flag for gentoo-sources since I first used Gentoo (2 years ago), if any gentoofags want this.



Don't forget that you need to run configure afterwards, like make xconfig. You could also just edit .config and replace whatever CONFIG_M*=y you have with CONFIG_MNATIVE=y

Way more important than -march=native (which MAYBE would give you a 5% boost) is using the -ck patchset. I'm sure there's also an emerge build directive that will use the -ck patchset.

It's pretty well the best thing ever, improvements for a server and massive improvements for desktop / latency. Without the -ck patchset things get somewhat choppy.

Also, make sure you get "schedtool" after you have the -ck patchset so you can access the new scheduling policies.

]$ cat `which idlemakepkg`


schedtool -D -n10 -e makepkg "$@"

You can use a similar wrapper around your "emerge" script to use SCHED_IDLEPRIO, which is the lowest scheduling policy (and only available with -ck). This will only take idle cpu cycles and offers much-longer-than-normal slices, which is perfect for compiles.

You also get SCHED_ISO, which is "realtime-ish" for non-root users. You can set gnome-shell to this, or firefox, or a media player, for extra super performance.

I recommend pairing schedtool with myps2 https://github.com/kata198/myps2 see the README for an example of why (its pidoft2 will capture all threads and processes on a match, and it has far superior matching to the stock pidof.)
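A minimal sketch of the "similar wrapper" for emerge mentioned above (the file name idleemerge is made up; assumes schedtool is installed and a -ck kernel is running, since SCHED_IDLEPRIO comes from -ck):

```shell
#!/bin/sh
# idleemerge: run emerge under SCHED_IDLEPRIO at nice 10, mirroring the
# idlemakepkg wrapper above, so compiles only consume otherwise-idle CPU.
exec schedtool -D -n10 -e emerge "$@"
```

Save it somewhere in PATH, chmod +x it, and use idleemerge wherever you'd use emerge.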




>only take idle cpu cycles

I thought nice -n19 already did that.



No, nice is a relative priority / timeslice length within the same scheduling group. For example, a nice 19 SCHED_NORMAL process is given the lowest priority out of all the SCHED_NORMALs. But even a nice 19 SCHED_NORMAL still gets scheduled regularly; it just has a lower priority and shorter timeslices than a nice 0 SCHED_NORMAL.

On the flip side, SCHED_ISO is like a "safe" SCHED_RR (realtime).

SCHED_RR is "realtime" and it gets unlimited timeslice and highest priority. SCHED_FIFO is also super high priority. Both of these, in addition to nice, have a static priority given by schedtool -p , as a number 1-99. This is required and determines its weight against other realtime / FIFO processes. Both of these are static priority scheduling per POSIX. The difference is SCHED_RR operates in a round-robin mode whereas SCHED_FIFO gets first dibs of everything, relative to other SCHED_FIFO based on the static priority.

If you have the -ck patchset and a program requests SCHED_RR it will automatically be demoted to SCHED_ISO (instead of failing).

SCHED_ISO gives "realtime-like" behaviour, but prevents the ability to starve the system by having a limited timeslice which gets reduced the more times it is completely used. So you may have 100ms concurrent runtime and if you use that entire period your next timeslice may be 90ms, etc. If you start to starve the system your process will be automatically demoted to SCHED_NORMAL.

On the opposite low-side of the spectrum, -ck provides two policies, SCHED_BATCH and SCHED_IDLEPRIO. Both will only get idle cycles (they will only run if all other processes of higher scheduling policy block/sleep before their timeslice is complete). SCHED_BATCH is given the longest timeslices, starting at 1.5 seconds. That means if nothing else is going on it will run a full 1.5 seconds before popping to something else. This increases cache efficiency. SCHED_BATCH is intended for cpu-bound tasks (like compiling in /dev/shm -- which you should be doing btw) because of the long timeslices. SCHED_IDLE is the lowest priority, like SCHED_BATCH it will only execute when nothing else is running, except SCHED_IDLE has short timeslices.

The reason I use SCHED_IDLE instead of SCHED_BATCH for compiling is interactivity. So if I'm doing a make -j4 on a 4-processor system, if the jobs are SCHED_BATCH they will get scheduled in the idle timeslots, but it kills interactivity because of the long timeslice: new windows / running new commands may have an entire second of delay before they are given CPU control. SCHED_IDLE on the other hand gets normal SCHED_NORMAL-length timeslices, so interactivity is maintained at the cost of a little performance.

I hope this rant makes sense; it's way past my bedtime and I might not be clear. I can go over this more in depth tomorrow (later today) if you'd like to discuss.
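To pin the policies above to concrete commands, a few schedtool invocations (flags per schedtool's usage text; -I and -D require the -ck kernel, and the program names are just examples):

```shell
schedtool -B -e make -j4        # SCHED_BATCH: idle-only, long cache-friendly slices
schedtool -D -e make -j4        # SCHED_IDLEPRIO: idle-only, normal-length slices
schedtool -I -e mpv video.mkv   # SCHED_ISO: realtime-ish without root
schedtool -R -p 50 -e jackd     # SCHED_RR at static priority 50 (root only)
schedtool $(pidof firefox)      # inspect the policy of a running process
```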


File: 5d4524a19016f72⋯.png (58.4 KB, 1167x801, 389:267, Untitled.png)

>dual boot win8.1 for a year

>runs alright, just like 7 did

>suddenly starts to hog itself (system services) for half an hour after boot

>open network part of the control panel, see pic related

I'm not sure if this is the result of me being stupid and updating it, or if I just picked up some malware

I can't believe I'm asking this, but what's a good AV for windows these days?






Have you never heard of the difference between build dependencies and run dependencies?



what are the adapters? might give you a clue


I have an iPad 3rd Gen (not actually mine). All of the videos in the "Photos" application that were taken vertically are now inexplicably cropped horizontally and letterboxed. They weren't edited in any software previously. Any way to recover this?


I'm having a terrible day, lads.

So, I'm running a Thinkpad t420 and thinkpad-acpi does not seem to be picking up any thermals.

Even the old /proc directory doesn't have a thermal file.

I've got Gentoo installed but I've tested this on the Arch liveboot and it doesn't list my temps either!

I've wasted my whole day on this and I really need help.


/tech/ I need help pls. I built a new computer and it won't boot; all the fans turn on (including the CPU cooler and the graphics card fans) and I can turn it on and off with the button on the case. None of the Boot/CPU/RAM/VGA debug LEDs light up, but the LED trim on the other side of the motherboard does. I'm pretty sure everything's plugged in right. Here's my parts list: https://pcpartpicker.com/list/D82gP3

Maybe I need to get AMD to lend me one of those processors to update the bios to make it compatible with the 2600x?

I want to try resetting the CMOS on my motherboard and it says to do it with the power cord plugged in and the system in the S5 "soft-off" state. Should I have the power button on the power supply flipped on or off? My idea is to plug in the power cord, flip the power supply on, do not turn on the power via the power button on the case, and short the CMOS pins using a paper clip. Is this a good idea?



Yes. So what's your point? Build dependencies that aren't runtime dependencies are -devel packages for distros that split headers and libs (not mine), make, gcc, etc.

Unless you're statically linking everything and all your binaries are 100 megs build deps aren't any different than the run deps outside of the build system like make, gcc, autoconf... all of which are available on every system.



You can reset your CMOS by taking the power out and removing the watch-battery-looking thing off your motherboard for about 60 seconds. Go ahead and hit the power button a few times while everything is unplugged and that battery is out; it will help drain.

Then put the battery back in and you're good to go



Your sensors are going to be the i2c drivers.

the packages on arch are lm_sensors (for reading the sensors) and i2c-tools (to scan for them).



Are you sure this isn't the application you are viewing them in? Try opening them in firefox and see if you have the same issue.



clamav is pretty good (it is not malware, unlike almost all windows antiviruses).

Download virtualbox and get started learning linux bro.



pacman -Ql $pkgname (replace $pkgname with the package name) after installing is how you list all the files contained in said package. Do this and grep for bin to see executables, etc.



yeah but I've tried removing it and I can't get it out, there's no room to stick my fingernail in there. think I should do the paperclip thing but I'd rather not get electrocuted or fry my board




Thanks man.

I'm still missing a lot of temps but I've got two cores now and a fan, so at least I know if I'm overheating.


This is most likely heresy, but is there a method to install Gentoo without having to compile anything? I'd like to install it on an Asus netbook that only has a 1.6 Ghz Atom CPU and 2 GB RAM.


File: ded33839797b844⋯.jpg (22.68 KB, 250x321, 250:321, cmos.jpg)


See at the bottom of this picture, that little square thing? You need to stick a small flathead screwdriver in there and push down and the battery will pop out. You cannot just pull the battery out raw, it is locked in place by that.



Yeah. Get more info at https://www.archlinux.org . Compile only what you want. I've got some sweet tools for doing the latter easily and streamlined, all from the terminal, if you're interested.




Also, there are some designs where there's a little clip in there pressing against the battery, in which case you don't push down in the hole but you retract the piece of metal up against the battery and press it towards the hole to pop out the battery.

I hope that made sense



This seems to be the case. I was opening them in the default "Videos" application. I figured out that if I opened the preview, and pressed "Edit" in the top right it showed me the uncropped video. What I don't understand is why that "Edit" button is only appearing on some videos, and not all.

I'm trying to upload two of the videos to Google Drive and open them on a PC -- one that gave me the edit option, and one that didn't. The fucking UI on these things is making me tear my hair out, but that's beside the point.



Yup. there may be other ones available that you can look around for, but you might just not have those sensors. Did you have them at some point?

gkrellm has great i2c monitoring support and might see sensors the "sensors" program can't. Why don't you install gkrellm, configure it, and see if there are additional sensors available there?

Otherwise you may just want to search for your hardware type and see what modules others are using, if any. They should be found with the i2c scan tool, but I'm just trying to think through other options in case your hardware is weird and they aren't.




yeah mine has the little clip and I know I need to push it aside, the battery pops up. but I haven't tried using a small screwdriver to pry it up, that sounds like it might work.

assuming resetting the CMOS doesn't work, the next step would be to remove the second ram stick? and if that doesn't work then keep removing non-essential things. and also check the pins on the CPU and the power connections. anything else?

I'm afraid I fried the motherboard somehow, but since the power button on the case can shut off the computer, does that mean it's okay? I'm also concerned the ram might not be compatible 'cause it's not on the QVL list, nothing ddr4-3200 was (that didn't cost like $250 kek)



I'm not sure what your original issue is, I just saw that you were trying to reset your CMOS.

Care to fill in the details?



this >>942462 is my original post. I built a new rig, but it won't post, so I'm troubleshooting.



< checked

Ok, so your RAM is paired. Written on the motherboard, probably on the ram clips, is a series of numbers. They aren't 1234; you may have 4 slots that look like this

1 3 2 4

In which case 1 and 2 are pairs, 3 and 4 are pairs. If you just install in the first 2 slots it won't work, you have to install in the first and third.

They could also have the same numbers, like

1 2 1 2

in which case 1s have to be paired and 2s have to be paired. Some motherboards can support an odd amount, some cannot. Pairs need to be the same size and speed (some motherboard and ram combos can use the lower speed across both, but not always).

Strip it to the minimum.

Just a power supply which connects to the motherboard (probably 2 cables?), a single piece of ram in slot 1, and a CPU.

Take all hard drives, video cards, etc out.

See if it turns on. If so, add a video card (or if you have onboard just plug into that).

Report back when you've done this and we can troubleshoot further.



ram clips == ram slots, where you stick the ram.

Sometimes it's not written on the motherboard but is on a sticker somewhere, or you may have to look up the manual for numeration.


Need help sending encrypted email using Mutt & GPG to avoid da botnet

Mutt keeps prompting "Enter keyID for <recipient's email here>".

Here's my mutt config:

#Use GPGME API rather than trying to parse the output of GPG
set crypt_use_gpgme = yes

# Sign replies to signed emails
set crypt_replysign = yes

# Encrypt replies to encrypted emails
set crypt_replyencrypt = yes

# Encrypt and sign replies to encrypted and signed email
set crypt_replysignencrypted = yes

# Attempt to verify signatures automatically
set crypt_verify_sig = yes

# Use my key for signing and encrypting
set pgp_sign_as = 0xE6ED733ADBDCE74FB455431E7A9BAB720BE9917E

# Automatically sign all out-going email
set crypt_autosign = yes

Thx for help!




I said I'm dualbooting. lmde2 is on the other drive


the network panel shits itself when I open it. I cant see any adapter details.


>clamav is pretty good

damn I never knew it had a windows port. I've used that for my mailserver for years and never knew you could actually scan windows shit with it. thanks, downloading now


I have one more request for help today.

On my thinkpad, I'll periodically just freeze up without any prompt and be forced to reboot.

Sometimes the capslock key will flash indicating a panic but not always.

Generally, it will only happen if X is loaded, particularly if I'm using XFCE and during or after increased CPU use, specifically while playing CK2 or after closing Half Life.

I'm still working at getting a kernel dump but this never happens on my other, very similarly configured, computer.

Could it be a hardware thing?

Also, thanks to Arch Anon for being really helpful to everyone here today.



Install (((microcode)))


What is a good platform to stream some animus to watch with a friend? If I already have the video files, is there a way to stream it?


File: ee58ff9cd919f7d⋯.png (22.36 KB, 633x758, 633:758, ee58ff9cd919f7d1cd8de90000….png)


I had the ram in the right spots, and I took one out (with the one remaining in the right spot), and it still didn't work. I took out the SATA data cables and the wireless card, still didn't work. Then I took out the CPU cooler and the CPU to look at the pins, they're fine, so I put the CPU back in and now I can't put the Wraith Spire back in, the holes don't look like they have threads in them and the screws just strip, they just turn, they're not screwing in any closer to the holes.

I think I'm just gonna go to a local computer repair place and ask them how much it would cost for them to get it to post; this isn't going so well.


I'm seeing systemd mentioned a lot above. I'm a new Linux user i.e. I haven't chosen a distro yet. Should systemd be a deciding factor as to what distro I should choose?


File: 5e5dfb18dea64b1⋯.png (4.09 MB, 3264x1836, 16:9, FUCK.png)



no. unfortunately systemd is inevitable. pick whatever distro you like


File: 4f75dafae619e0c⋯.jpg (22.93 KB, 600x315, 40:21, coon.jpg)


>systemd is inevitable

like AIDS



I found the issue. I accidently told Mutt to use S/MIME but I configured GPG.

Be sure to pick PGP in the S-menu in compose mode!!
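If you'd rather not have to remember the S-menu, a config-side guess that should keep mutt from auto-selecting S/MIME (variable names are from the mutt manual; verify against your build):

```
# In your muttrc: prefer PGP, and don't auto-enable S/MIME
set crypt_autopgp = yes
set crypt_autosmime = no
```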



Hey, so what OS are you using? Your kernel log is probably the best bet, assuming you're getting a dump.

Sometimes X can just freeze up, so after you start X run this command as root:

]# cat `which enable_control_alt_backspace`

setxkbmap -option terminate:ctrl_alt_bksp;

I have it in my crontab to run every minute because it turns off sometimes for some reason.

When your system freezes up, try pressing control+alt+backspace and it should terminate X (assuming you have run that command at some point prior).

Some log to /var/log/kmsg some /var/log/dmesg some /var/log/kernel some /var/log/everything etc. It depends if syslog-ng is running. In the systemd times it's usually disabled, so you can start by enabling that and see if you start capturing kernel logs to any of those places. ( systemctl start syslog-ng@default and start -> enable for on boot on my system).

You can also use the sysrq keys to force a dump so we can try to look at your kernel state during the freeze, if control+alt+backspace doesn't bring the system back (if it does, you have an X11 / video driver problem). If you do get to a terminal, you may have information in "dmesg" (which is the current kernel log, but doesn't persist across reboots unless syslog-ng is logging it).


The sysrq key is the "print screen" key btw.

So... let's start there.
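Assuming the standard magic-SysRq interface is what's meant here, the relevant pieces look like this (key combos per the kernel's sysrq documentation; run the echo as root, and note the sysctl may already be set on your distro):

```shell
echo 1 > /proc/sys/kernel/sysrq                       # enable all SysRq functions
echo 'kernel.sysrq = 1' > /etc/sysctl.d/99-sysrq.conf # persist across reboots

# During a freeze, hold Alt+SysRq and tap:
#   w        - dump blocked (hung) tasks to the kernel log
#   t        - dump the state of every task
#   s, u, b  - sync, remount read-only, reboot (last resort, in that order)
```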



vlc will let you stream. Click Media -> Stream. One person will broadcast, the other will go to network connection and connect to you. Look up on the vlc wiki for more details.

There are a million tools but this one works for windows people as well easily.



Look up some pictures of how to mount the CPU onto your motherboard. There shouldn't be anything to screw; one corner of the processor should have fewer pins, matching a corresponding corner on the motherboard.

Next to the processor mount should be a little lever, probably under a piece of metal. Pull it out and up and the plate will move exposing the pins you can then mount the processor on.

Hopefully you didn't bend or strip the gold pins; they're really small and fragile.



I recommend starting with Fedora or a fedora derivative for new people.

Don't get sucked into the ubuntu meme: you will have the shittiest linux, learn nothing about linux (basically you'll just be using it "technically"), and Canonical is a big evil data-mining corporation. Why willingly install spyware on your system so some fuck somewhere can get rich?

After you learn the ropes with fedora, try out gentoo or something for more of a challenge. After a few years when you've really got a handle on things, make the final jump to Arch Linux.

< Cue all the "Wahhh I'm too noob to understand arch there's no graphical installer how2installshield?" in 3.... 2.....



btw, that "enable_control_alt_backspace" is a script I wrote. You'll have to create /usr/bin/enable_control_alt_backspace and set its contents to

setxkbmap -option terminate:ctrl_alt_bksp;

and chmod +x it. Then test running it and if you got no errors you are set. Again, I recommend running "crontab -e" and adding a line:

*/2 * * * * /usr/bin/enable_control_alt_backspace

This will ensure that control+alt+backspace is enabled every 2 minutes, and works around the issue where it sometimes deactivates itself.
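The same steps collected into one copy-pasteable sequence (path and schedule exactly as described above; needs root for the /usr/bin location, and the append idiom preserves any existing crontab entries):

```shell
# Create the helper script.
cat > /usr/bin/enable_control_alt_backspace <<'EOF'
#!/bin/sh
setxkbmap -option terminate:ctrl_alt_bksp
EOF
chmod +x /usr/bin/enable_control_alt_backspace

# Append the every-2-minutes line without clobbering the existing crontab.
( crontab -l 2>/dev/null; echo '*/2 * * * * /usr/bin/enable_control_alt_backspace' ) | crontab -
```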


File: 89affd03109db0e⋯.png (870.63 KB, 650x4377, 650:4377, arch linux.png)


Start with Ubuntu or Mint, then move to some other more l33t distro. Worry about systemd when you know why you should worry about systemd. Ignore everyone who recommends Arch.


File: fcb36eaf44a83ef⋯.jpg (7.77 KB, 249x203, 249:203, fcb36eaf44a83efa148654684a….jpg)


>Look up some pictures on how to mount the CPU onto your motherboard. There shouldn't be anything to screw, one corner should have less pins on the processor and one corner on the motherboard.

The CPU is installed fine, I mean the Wraith Spire, AMD's stock cooler for the Ryzen 2600x. I installed it fine the first time, then I removed it later to look at the CPU's pins (which were fine). Reinstalled the CPU, but now I can't get the Wraith Spire back in. The screws hover like a centimeter above the holes they're supposed to be screwed in, the screws just spin instead of screwing downward, and the holes themselves don't even look like they have threads now. I didn't have any problems the first time, idk what went wrong. I think I'm just gonna unplug the Wraith Spire, put it in the box, reinstall everything else, close it up and take it to a local computer place. Pay them to get it to boot, or determine what's wrong with it, defective/broken motherboard, need for BIOS update, or whatever. I didn't want to have to do that but now that I can't get the CPU cooler in, I can't test it.



I already worked on it for like 12 hours, at this point I think I'm willing to shell out $100 or whatever



Ok, usually they have tension screws, that is there's a spring attached to the screw and you aren't actually screwing it in like a piece of wood, you are just tightening the spring.

I don't know if this is what you're seeing, but if you're not sure I agree take it to a professional who can physically see what's going on. It will cost less than damaging the CPU, which is at risk if you can't get the heatsink in right



> Willingly install spyware on your computer goyim, and don't learn a thing about linux

> You can still tell all your hipster friends you use linux, and be twitter famous

> Durrrr archlinux, the best distro hands down sucks

See: >>942554

> Cue all the "Wahhh I'm too noob to understand arch there's no graphical installer how2installshield?" in 3.... 2.....

Ultra low IQ post.




Also, if you do take it to a repair shop, be sure to ask them to explain and show you what they did when you pick it up. That way you can learn what it is you weren't seeing. If they don't want to tell you, "you have to verify the work to know you're not getting ripped off" (you pay after, not before repair. Unless you take it to best buy geek squad. DO NOT take it to the geek squad. Use a local computer repair shop, they hire real experts. )


Also, geek squad will charge you a ton more. You should expect to pay about $50 per hour and maybe some flat fee, like $30. Don't pay $150; don't let them rip you off. You are providing all the parts, they just need to get it to post, but they will try to sucker you into getting them to do more work for you.

It's not really building your own PC if you just buy the parts and then get someone else to assemble it for you, that's no different than just speccing out a machine on dell.com and getting it mailed. Just get help with what is too risky to "wing it" on, learn what they did, and continue to learn on your own. If you get stuck again you can always take it back :) But once you get it to POST even Brianna Wu could do the rest.



>even Brianna Wu could do the rest.

I take that back. Sorry, I don't know what I was thinking.


Brianna Wu has also installed ubuntu, but mostly sticks to apple products. For all you ubuntu idiots out there: does it seem like Brianna Wu makes good decisions in her life?


File: 745c4260a7e010a⋯.jpg (45.45 KB, 627x477, 209:159, 745c4260a7e010a41ecf23c0d9….jpg)



I don't know if you think you baited me, but actual noobs are looking for useful advice on taking off the training wheels of windows and actually learning about computers / becoming efficient. So it's not useful to suggest literally smearing dog shit all over their hard drive "for the lolz."

Someone could actually listen and install Shitbuntu, realize that they now have hit rock bottom in life, and kill themselves.

Shilling Ubuntu ruins prospects at best, kills at worst. Either way, Canonical will make money off your corpse through their spyware OS. I realize maybe you work for them and don't care as long as you're getting slightly more $ than frying chicken nuggets, but please get involved with something good and positive for the world rather than being a problem, k?



>Ok, usually they have tension screws, that is there's a spring attached to the screw and you aren't actually screwing it in like a piece of wood, you are just tightening the spring.

I think I'd have to take the motherboard out of the case to put it back on, idk.


It's a local small business, looks pretty decent. Yeah, I'll ask them what they think they need to do before, and then what they did afterward.


I already 'built it', pretty much, I just don't have a good workspace to troubleshoot the POST issue and can't get the CPU cooler back on. Ideally, I'd have a wooden or concrete floor and a large desk with an anti-static mat and an outlet nearby to breadboard the motherboard on. I have none of that. I just need them to get it to POST, then I'll be all set from there. Installing the OS etc. is easy peasy. Thanks for the help, techanon.



That's fine, I was agreeing you should take it to the shop if you're not sure when working with the CPU, just encouraging you to take on the rest on your own. You can't really break anything else in there unless you hit it with a hammer or install ubuntu on it.

Best of luck!


I assume you've already tried to look at youtube? There's probably a video of someone installing your very motherboard setup. I'd suggest watching and seeing if you can have an "ah ha" moment, it's probably some little thing you're missing. But if you're not confident yeah better not risk it




Alright, thanks man.

I'll try the script and I'll post if I can get any more info on the crashes.



Also, OS is Gentoo


File: 2a467a161b486ef⋯.pdf (477 B, log_kmsg.pdf)


Cool. Yeah, if it's not X crashing and even if it is, the kernel log is your best bet.

Look up how to enable syslog-ng in gentoo and ensure that kernel messages are being logged to a file somewhere. This way hopefully you can catch logs from the incident.
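As a sketch of what that syslog-ng setup looks like (the source/destination names below are made up; adapt them to whatever the stock gentoo config already uses):

```
# /etc/syslog-ng/syslog-ng.conf fragment -- hypothetical names
source s_kernel { file("/proc/kmsg" program_override("kernel")); };
destination d_kern { file("/var/log/kern.log"); };
log { source(s_kernel); destination(d_kern); };
```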

If you don't want to go through all this, you can write your own kernel log reader. Here, I'll write one for you:

$ cat `which log_kmsg`
#!/bin/sh

if [ "`whoami`" != "root" ]; then
	echo "`basename $0` must be run as root!" >&2
	exit 1
fi

if [ $# -ne 1 -o "$1" = "--help" -o "$1" = "-h" ]; then
	echo "Usage: `basename $0` [logfile]" >&2
	echo "  Reads the kernel log, appending to the given log location" >&2
	echo >&2
	echo "The logs will be appended to the given file, so as not to replace logs accidentally." >&2
	echo >&2
	exit 1
fi

LOG_PATH="$1"

exec cat /dev/kmsg >> "${LOG_PATH}"

I've also attached this as log_kmsg.pdf (because of 8ch filters). Just rename it on your system and chmod +x it. Put it in /usr/bin (so sudo mv log_kmsg.pdf /usr/bin/log_kmsg)

Then do like so:

sudo su

nohup log_kmsg /var/log/mykmsg.log &

This will capture all buffered log messages in the kernel pool and append every line printk'd by the kernel to /var/log/mykmsg.log

This may be the easier and more robust solution for you



Trannyboot Thinkpad ?



I tried to install Ubuntu. The installer was going fine, and then I heard a grinding noise from inside my PC tower. I opened it up thinking the hard drive was going bad, and noticed there was now a hole in the hard drive. I hold it up to my eye to look inside when BONK! A dick goes through this glory hole and pokes my eye out. I only had one good eye, but it didn't hit my glass eye...

So now I'm blind and gay. Fuck you. Why did I do this to myself?



You're lucky. It could have gone a lot worse for you.

Ubuntu: not even once.


Holy 8ch crashes, batman!


File: 50cf291bc0f1490⋯.jpg (14.59 KB, 400x320, 5:4, zfs.jpg)

What's the best filesystem for a HDD (that isn't ext4)? I'm looking at ZFS, is it a meme?


Bought a Thinkpad Yoga 11e Chromebook for $50 wanting to throw linux on it through crouton. I'm new to Linux so I wanted to go through each of the DEs before committing to one, managed to get every one to work except for Unity which boots to just the desktop background with no taskbar etc. Most methods I've found are for normal linux not being run through crouton, I can't access the terminal whatsoever in the linux environment. How exactly do I get it to work?


What is systemd-mount analog for mount -a? I need it to reread fstab and mount multiple autofs (x-systemd.automount).



Actually I think I've come to the conclusion that ZFS isn't good for a desktop. What am I looking for?



clover OS

but really, there's 0 point of gentoo if you don't compile stuff yourself, so just install alpine or devuan or whatever


If you're a new user, honestly you wouldn't care. But once you get more advanced you should switch to a better distro like gentoo.





>systemd is inevitable

>what is gentoo

>what is alpine

>what is devuan

kys poettering


what happened?



Thanks man, you're absolutely amazing.

I'll get this running as soon as I can.

I thought what I was using now (Syslogd I think) would catch the errors but the last logged message was from 10 AM.

Kdump didn't boot up either which was strange.

Anyway, I'm gonna get the logger set up now and long term I'll have to get syslog-ng set up instead of my previous solution since it's been gaffing it apparently.



reiserfs is the best. reiser4 is even better, but it's "out of tree" so you have to patch and recompile your own kernel with it, it's a hassle but worth it for the extensions.

I use reiserfs on everything except boot drive (ext2).

Tested for highest performance and best space management on my desktop and on thousands of servers I've managed (databases, raids, single drives).

It also has the best recovery tools, and ages very well (is self-defragmenting)



Happy to help man.

And yeah, that's why on second thought I suggested using that script I wrote, there's a chance it could catch an issue that syslogd wouldn't. Shell redirection is line-buffered (so it will flush at newline), but syslog may not be configured like that.

Let me know if you get any info on your next crash -- and don't forget to try that control+alt+backspace (left control and left alt; I don't know if right works. Maybe, but I've had issues with several things where the right alt key doesn't behave the same as the left)
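If re-running the enable script every session gets old, the zap can also be made permanent in X's config. This is the standard XKB option name; the file path/name below is just a convention:

```
# /etc/X11/xorg.conf.d/90-zap.conf
Section "InputClass"
        Identifier "keyboard-zap"
        MatchIsKeyboard "on"
        Option "XkbOptions" "terminate:ctrl_alt_bksp"
EndSection
```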



what happened?

Keep getting cloudflare errors about gateway disconnects etc. You don't notice it unless you try to post or refresh the page. It's been down more than up for the last hour, or maybe it's a cloudflare issue with my region.



If you don't know the history, the reason the "major vendors" pretend like reiserfs doesn't exist and why reiser4 won't ever make it into the mainline tree (despite them being the best by a large margin) is the following:

Hans Reiser's russian mail order bride was cheating on him and was planning to run away and have him murdered. Hans killed her first, but got caught.

So now the linux kernel and distro people (who are VERY overly concerned about what SJWs on twitter say and other dumb drama) act like reiserfs doesn't exist or ignore that it's better in every way.

But just in case I wasn't clear, reiserfs (reiser version 3) is in mainline kernel and is "good enough."

reiser4 was so revolutionary of a filesystem, and when the benchmarks came out Microsoft shit their pants. That's when they started development on their "next gen FS" to replace NTFS. Until Hans Reiser was found guilty of murder, then Microsoft abandoned those plans because reiser4 became "tainted."

But fuck it, I don't care if he killed that cheating bitch. She deserved it. So as your support against cheating roasties everywhere, use reiserfs. :)




Now all that's left is to figure out what went wrong.

Once again man, you are the best ever.



File: 1d7f1088bc2af14⋯.pdf (74.82 KB, kernel.pdf)


I've got the kernel log thing attached.

I think it's got something to do with udevd seeing as it throws a ton of errors before sda3 has to be recovered.

I'm not sure what my network stuff is doing after though.

Maybe it's jamming the kernel up?



And your system crashed during the time this was logging? control+alt+backspace didn't do anything?



8ch keeps crashing, hope this goes through.

So was the logger running and your system crashed? control+alt+backspace didn't do anything (and you ran the enable_control_alt_backspace script since starting X11?)




Go for ZFS. You really don't have to administer it, just like it says on the box, but it does have a setup process that other filesystems don't.

You'll have increased performance + increased reliability + more space.




Can you do me a favor? Start up another log process (to a different file).

Once it's running, do the following as root:

date > /dev/kmsg; sleep 10; date > /dev/kmsg

That will allow me to read the timing information in the log



Oh, and then paste those two lines from your log file ( you should see the timestamps in them, at the bottom)






Alrighty, I'll handle all that in a few minutes.

I did try the ctrl alt backspace thing when logging wasn't running but I'll try it again while it is.



I mean to try it when your system crashes.

The idea is as follows:

1. Boot up your computer into your desktop

2. Run that "enable_control_alt_backspace" script I gave you

3. Start the kernel logger (log to /var/log/mykernel.log or something)

4. Wait for system to "crash"

Now, when it crashes, do the following:

1. Press control + alt + backspace ( left control key and left alt key)

2. If X11 shuts down, report that info to me, and attach the kernel log captured

3. If X11 does not shut down (your system is in a total hard lockup),

4. Reboot your system and attach the kernel log

Need to view the kernel log during a crash to tell what's going on.

The kernel log my script is reading from is the "in memory" log, as soon as you reboot it will clear everything in there. That is why we are logging to disk, to hopefully catch what error you are being hit with.

Make sense?



Sorry this is taking a while, nothing wants to freeze right now I guess



Sorry, I misread your earlier post and wound up setting up watch to add a timestamp every ten seconds.

Sorry if that messes up the log for you


File: 49490b07ca7b91f⋯.gif (1.65 MB, 460x258, 230:129, hmm.gif)


That's fine. I'll be back on tomorrow if it doesn't crash in the next hour or so.

Hell, maybe it's like a quantum probability thing? Like the waveform for the event of your computer crashing keeps collapsing because we are watching it? Pic related.


Hey webbrowserfags,

Did any of you guys successfully recompile firefox with PGO?



I've read that it needs a lot of memory, which only goes up the more storage you have. Is that true?



I've worked at 3 organizations that tried to switch database servers to ZFS because they bought into the meme.

2 of them dropped the experiment because of low performance and higher system load, and reverted ( to XFS at one, reiserfs at the other)

1 of them dropped the experiment due to data corruption on the drive, and reverted back to EXT4.

I would recommend reiserfs. It's the fastest and most stable, has the best recovery tools (its format best supports maximum recovery from physical hard drive failure), low impact on load, the best space utilization, and the best long-term performance upkeep (it is auto-defragmenting by design, and can pack small files (<= ~4K) directly into the inode reference tree itself).

The only consistently better filesystem out there is reiser4, but that's not part of the mainline kernel due to a murder scandal. reiserfs (version 3) has been supported natively in mainline kernels and distros for many years.

For database servers on hardware RAID-5, XFS wins in some cases and loses in others versus reiserfs.



For a linux desktop, I recommend doing a software RAID-1 using reiserfs for the data partitions. ext2 for the boot partition and swap for the swap partition.

You will have to make two partitions on the second drive (assuming both drives are the same size): One the size of your boot + swap, call it a "Data" directory and mount it for some non-redundant storage.

The second partition needs to be the same size as the "/" partition on the 1st drive. RAID-1 requires two partitions of the same size. It's also known as "Mirror." mdadm is the tool to set this up.

You could also make the second drive partition table a copy of the first and raid /boot and swap, which will make swap faster, but you'll have to do special work in your grub install and config to support booting to a software raid device.

I prefer to put like 8G of swap on the first drive, and on that /Data partition of the second drive dd an 8G file from /dev/zero and put swap on that. Performance-wise it's the same as a swap-formatted partition, but with the advantage that I can alter the swap size without repartitioning.
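The swap-file trick, sketched as a small helper. Paths and sizes here are placeholders; mkswap and swapon need root, so they're left as a comment:

```shell
# Create a file of the given size in MiB from /dev/zero; dd + chmod work as
# any user on a filesystem you can write to, mkswap/swapon (root) finish the job.
mkswapfile() {
    path="$1"; size_mb="$2"
    dd if=/dev/zero of="$path" bs=1M count="$size_mb" 2>/dev/null &&
    chmod 600 "$path"
}

# as root, with hypothetical paths:
#   mkswapfile /Data/swapfile 8192
#   mkswap /Data/swapfile && swapon /Data/swapfile
```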

So when you do a raid-1 on your / drive, either drive can fail and as long as both drives don't fail in the same block at the same time, you can recover. You are as likely to win the lottery as have this occur, so it's really good extra redundancy.

Writes are the same speed, but reads are twice as fast, or twice as interactive. The raid device can either read an 8-block file with blocks 0-3 from one device and 4-7 from the other (so twice as fast), OR it can read two 8-block files at the same time, one from each device. The net speed is about the same either way (same number of blocks read at the same latency), and that net speed is about twice that of a single drive.


What version of firefox should I use? Just plain old firefox?

I just want something not owned by jewgle and with a customization theme.



Plain firefox, then install addons, change preferences and change about:config options to disable botnet. A good starting place to see what kind of botnet features failfucks has is https://spyware.neocities.org/articles/firefox.html

GNU icecat (https://ftp.gnu.org/gnu/icecat) is quite a good alternative and has many of those options twiddled already, but the main issue is that it is always slightly out of date as it follows the ESR releases of failfucks (which shouldn't be an issue really).
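For the about:config route, a few of the commonly-flipped prefs can go in a user.js in the profile directory. The pref names below are the standard telemetry/data-reporting ones from recent Firefox releases, but verify each in about:config before trusting it:

```
// ~/.mozilla/firefox/<profile>/user.js
user_pref("toolkit.telemetry.enabled", false);
user_pref("datareporting.healthreport.uploadEnabled", false);
user_pref("app.shield.optoutstudies.enabled", false);
```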



I use 61.0 (the latest stable release).

I've posted compilation instructions higher in the thread on how to build a PGO version of firefox. You really want this; it's several times faster than a non-profiled version. It's very noticeable when a 750-post thread full of images loads in less than a second versus 5 seconds.

As far as extensions, I use ghostery. It's the best ad blocker in my opinion. There are some built-in metrics but they are disabled by default (and you can ensure they are disabled). It blocks all ads, including the video ads in youtube without breaking the player (means I can just watch youtube all day no commercials no delays), it blocks late-add stuff through javascript, is constantly updated and has an auto "block everything" option, and breaks nothing.



Re: ghostery,

There's also an easy "stop blocking this one page" button or "pause blocker" button for those stupid video pirating websites and whatnot that require ad blockers be disabled. No need to bundle any other extensions if you use that.



> omg but ghostery has opt-in features that let them collect data on you! What if you accidentally click one of those check boxes on the "Opt-in" tab???

> Just install Ubuntu bro. That way Canonical can make money off you with spyware that isn't opt-in nor opt-outable, you're just stuck with it! Out of sight, out of mind...





This guy is fucking retarded ignore everything in this post.




ah, forgot that it existed. How do the two package managers in that distro coexist properly, btw? Seems like a bit of a mess to me, though I've never used it.


File: 82af4e0c12dc107⋯.png (19.59 KB, 425x231, 425:231, Screenshot_20180715_101732.png)

Every time I log in I am getting pic related. How can I set KDE up so that the email client is always allowed to use the password?



It asks for the wallet password, not the user password. So either you have no password on your wallet, or a wallet password that is stored in cleartext in your configs, or you learn to live with it. OPSEC-wise the first two are scary, especially with no encryption on your hard drive.
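If that tradeoff is acceptable, the prompt can also be killed by disabling KWallet outright (standard kwalletrc keys; again, think about what that means for your OPSEC first):

```
# ~/.config/kwalletrc
[Wallet]
Enabled=false
```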


File: 89131fc1f248422⋯.jpg (3.65 KB, 217x217, 1:1, 0a163137dbb06f5afd6de7a3fb….jpg)

What should someone aiming to work in cybersec know?

Current areas of tech knowledge for me include:

>CCNA 1,2,3,

>C, embedded C


>Mediocre linux knowledge

>hobby-level arduino and raspi work

What should I add? Also, is JavaScript really that important?




OSCP - pentesting

CISSP - for being cybersec in big companies

if you have 2+ years of experience in the fields you mentioned, then you have a good background already.



Do people actually give a shit about certs you can get online?



several pro pentesters tell me that OSCP is great for a beginning. it actually teaches you how to work.

personally, I find CISSP useless, but I've met several people in high positions that had it, so it must be worthwhile. if you're working in a big corp it's probably a must, since you need to know its useless bullshit to bullshit some more in bullshit meetings.

to answer your question:

>Do people actually give a shit about certs you can get online?

they shouldn't. skill should be primary. but this stuff helps you actually GET the job


Is there no way to start zathura in fullscreen by default? There's no configuration variable or launch option for it. Alternatively, are there any PDF readers other than zathura and mupdf that don't suck shit? I'm lazy as shit for it, but having programs start in fullscreen is pretty crucial to me. At the same time, I'm not really open to weird hacks like having the hotkey for fullscreen be automatically pressed when starting the program.


File: f057cb0c81bed54⋯.pdf (114.35 KB, kern.pdf)


So, apparently the crashes only happen when the computer is on a table, not a carpet.

I think it's trying to screw with me.

Anyway, I got the log, apparently the only messages logged from the kernel during the crash was either escape characters or corrupt garbage.



Looking at it in hex, it's actually a bunch of nulls.

Very strange.

Also, the C-M-Backspace combo did not do anything, sadly.

I forgot to set up kdump this time, I'll try that again.



Your computer is overheating. Maybe your fan is clogged / obstructed



Like the other anon said if you're not going to compile everything gentoo isn't the distro for you.

Try void, alpine, devuan, openbsd, artix, mx linux.



Nigga just press whatever your fullscreen shortcut is.


File: 32280dd55892262⋯.jpg (95.54 KB, 961x759, 961:759, ut_wot_h2o.jpg)


>somewhat understandably get shat on for using jewgle (I say somewhat because it's used as an ad hominem attack, which is fucking asinine)

>startpage/ixquick uses google algorithm

>google algorithm was noticeably changed around September of one or two years ago

>it's shit

What the fuck else am I supposed to use then? Bing? Surely bing has the same privacy issues that google has although it's a hell of a lot less censored, likely because no one complains because no one uses it :^)?






palemoon with advanced night mode, noscript, encrypted web, and decentraleyes. really nice imo



What the shit, does OSCP seriously cost 800usd? Do they REALLY make you take their course before taking the exam?

Can't I learn on my own and just get certified?


I have a laptop similar to Aspire One netbook series and I have installed arch+i386grub+kde+sddm on it.

It has cedar view processor with the non-intel GPU that is powervr.

The netbook worked with an old bbqlinux ISO with a kernel of 3.19 but I don't know if the kernel dropped support on my CPU at 4.X

BTW bbqlinux is just arch with graphical ISO and a custom repo.

I've had everything installed and configured on the arch 4.17 as usual, fstab are good, packages needed are I think complete.

The problem is it hangs during the systemd init sequence (I removed quiet on grub) and it would stop right around when the [ok] {network manager,hostname} service is loaded

Even my bbqlinux 3.19 install inherited the hanging problem after my arch 4.17 install.

I read the arch bbs and have forced gma500_gfx on the grub command line, then used an xorg configuration to force modesetting to use the xorg framebuffer on arch 4.17, but nothing changed.

Still hangs on that init sequence.

I am able to get into TTY and see a login prompt for just a few seconds before it hangs again and force returns me to the [OK] hostname/network service is loaded on TTY1 for some reason

after that I press the power button for ~3 seconds and it would show a responsive "shutting down message" with the cursor (_) blinking responsively.

a few mistakes I made during the install:

>start sddm while chrooted

I am thinking the root of the problem might be the networkmanager service

probably a systemd failure?



Sounds like your network card is fucked. Check `dmesg' after boot and look for network card issues being reported.

I'm not sure how you say you're even getting into X if you are hanging during boot... maybe you mean it's just taking a while longer?

Anyway, it's not like systemd programs have anything like while ( 1 ) { do nothing; } anywhere. It just calls the kernel, which could be smacking on your network card saying "Yo, what's up" but your network card is not returning so some "crazy way too long timeout" is being hit somewhere and it's just giving up. Something like that, look for clues. Kernel log is most likely place.

Also `systemd-analyze blame'

Run that and it will tell you what is taking how much time.



I by the way also have arch installed on an aspire one.

Someone stepped on the screen a few years ago :( and broke it, but it still works over hdmi to a tv.



try putting something static proof or anti-static material under your laptop while it's on the table.

If the laptop is always on the AC charger then it might be a bad power outlet or bad grounding and the wires are leaking it onto the table (or other electronics on that same table are, while plugged into an outlet).

Try listening to your laptop AC charger's brick and if it produces a non-linear electric buzzing like a flying bee then it might be that the laptop voltage is jumping around.



Well, that's it then. I noticed a while ago the charger made buzzing noises.

This isn't great seeing as my battery is bad.

Anyway, guess I've gotta get that non-static stuff.



>netbook anon


I have tried removing the network card but still get the same problem.

It might be because I used

systemctl enable NetworkManager.service


I've also read this recently, Gonna give it a try

>systemd-analyze blame




Well, I am due for a fan cleaning and a repaste.

Again, I can't thank you enough for all the help.



the charger usually buzzes at a constant volume, but if it buzzes like a bee getting nearer and farther from your ear (varying volume) then it might be bad voltage regulation or something in the laptop.



Shouldn't need to repaste. The paste doesn't go anywhere, the same amount you put there is still there.

A good fan cleaning, and keep your eye on cpu temps.

get the lm_sensors package and install the sensors listed running `sensors-detect' as root. Then, you can run `sensors -f" (-f because you're not a metricfag) to see the temps and vendor-specified limits for your CPU.

You may also want to look into CPU throttling. If you're not using frequency scaling with cpufreq (and the cpupower related programs) or don't have intel-pstate enabled in your kernel, your processor is running at full power (and thus heat) even at idle.

You can set it so that the cpu frequency scales with usage, so idle sits at a low like 1.2Ghz and full use goes up to 2.6GHz or whatever your cpu supports.
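To see what governor you're on right now without installing anything, the kernel exposes the same info in sysfs. These are the standard cpufreq ABI paths, though they may be absent in VMs or containers:

```shell
# Peek at cpu0's current frequency-scaling policy straight from sysfs
d=/sys/devices/system/cpu/cpu0/cpufreq
if [ -d "$d" ]; then
    echo "governor: $(cat "$d/scaling_governor")"
    echo "range: $(cat "$d/scaling_min_freq")-$(cat "$d/scaling_max_freq") kHz"
else
    echo "no cpufreq interface exposed"
fi
```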



>Can't I learn on my own and just get certified?

I think you can, but not sure.



I've hired a lot of people. I've interviewed many more than that. I don't think I've ever hired anyone with a certification.

You had work experience at something related. Or you had a ton of code samples which were pretty good. Or you have a webpage where you do research. Or several of the above, etc. etc. -- You have something to show for it.

I don't think having a certification shows you have any knowledge of the topic beyond having taken some company's bullshit class so you could wear their name, which tips me off that you probably don't know where to look in your field outside of formal education. I figure such people probably wouldn't know how to do anything without going to a class for it.

Even without looking at your experience, I [the interviewer] am going to talk to you about the topic at hand. Nobody starts at anything except a low-level position, but knowing your topic and being able to express that and solve "job-related" puzzles just during casual conversation is the most important thing.

The resume and experience is for HR people to meet a general goal of "who we [the hiring company] think qualifies for what we want." It often becomes a large part of the interview [getting you to talk about the things it says], if you're excited and confident you can drive the conversation without having a full resume as a guide.



You can just learn it on your own and get certified. The only good thing about this cert is that you actually do the work as opposed to taking some multiple choice exam.


>netbook anon


reading the systemd journal now. can't do systemd-analyze inside chroot since my tty is frozen

tried to increase dhcpcd and network manager timeout manually without systemctl edit (no chroot no tty) and still no luck


I'm trying to get anki-sync-server working but I can't connect to it. I've started it and added a user, but when I try to sync on my desktop it says the AnkiWeb ID or password was wrong, and on mobile it does nothing. I've even tried exporting SYNC_URL as both and as the local identifier (in case the "plugin" wasn't working). Anyone know what could be wrong?



I meant 192.168.1.XXX, don't know how that got messed up.


File: 0a5efdeebfdebca⋯.png (407.35 KB, 698x882, 349:441, confused carnivorous loli.png)

>have source for some application

>uses (((cmake)))

>Cmake successfully performs checks for AVX2 flags and presents an option for AVX2 optimizations

>Executable compiles and runs fine with it

>Host CPU only supports AVX

Why is that /tech/?




Use echoes properly you nigger.

>cmake is shit

>why is that

Perhaps because the developer is a retard who thinks his code will never be run on any old CPU. Either way, you need to check the specific test that is being done and see if you can replicate those results. Maybe then you will notice a problem with developer's tests.


Is there an online translator that respects your privacy?

If not, how should one translate websites?



Hire someone to do it for you. All the good trannylators are proprietary.



inb4 your cpu has undisclosed avx2 support, just like they used to slip 64-bit support into random steppings of random cpus 13 years ago.


File: e2c883f055f139c⋯.jpg (26.39 KB, 315x480, 21:32, fweg.jpg)

I have a network with some clients that process sensitive data. Separate network, no connection to the internet. I have another network with mailserver, webserver, backup, user clients etc. Atm if I want to move data from the offline network to the online network - for example to get a pgp encrypted file to a client - I have people use a USB drive and copy the stuff manually from one server to the other. This is OK if it happens once a week. But having 5-6 transfers a day makes this a tedious solution. Question: Is there a more practical yet safe way to get the data from A to B and keep network A still safe and as offline as possible? I really don't know if my IT guys are competent in infrastructure matters so I hope /tech/ has advice.



Well isn't the whole point of having them on different networks to specifically make doing this hard?



Yes, that's the point. But it's getting difficult to handle. At one point there might be the situation where we need to sacrifice some safety for practicability. When this happens I'd like to know what my options are.



Create a unidirectional firewall rule from your offline network to a single "gateway host" on the new network.

That means your offline network is still not connected to the internet, but using the one "gateway host" you can copy FROM offline TO online network.



Thanks, that sounds good. Let's say someone gains control of the gateway host. How difficult would it be to gain access to the offline network.



The point is the firewall is one way. Even if they get control of the gateway host they have no way in to the offline network.

You do this with a single rule, from offline network to gateway host port 22 (Assuming you want to use scp/rsync for the transfer, 21 if you want to do ftp, whatever). This will only allow new connections in that one direction: from offline to online. Online cannot reach back to offline.
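A sketch of that in iptables terms, with made-up addresses (offline net 10.0.0.0/24, gateway host 10.1.0.5); the conntrack state matching is what makes it one-way:

```
# iptables-restore fragment -- hypothetical addresses
*filter
# offline -> gateway ssh: new and established connections allowed
-A FORWARD -s 10.0.0.0/24 -d 10.1.0.5/32 -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
# only replies to existing connections may flow back toward the offline net
-A FORWARD -d 10.0.0.0/24 -m conntrack --ctstate ESTABLISHED -j ACCEPT
# everything else forwarded between the nets is dropped
-A FORWARD -j DROP
COMMIT
```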





I see that you guys don't have a qtddtot, so I figure I'll ask my question in here. How do I build, my nonexistent skills, to the level required to get a crisc certification? I understand that it is a difficult certification to receive, so what entry level certification should I go after to build a skill set that will lead to a crisc cert?


What is a wipestyle.



>crisc certification

What kind of job are you thinking about getting? I've never even heard of this before you mentioned it (and I've worked at a lot of places, both big and small).

Is there some sort of Risk Assessment Manager position that you think exists, where a guy analyzes everything and gives it a numerical 1-10 risk score so upper management can make the call on whether risk vs. reward is worth it?

I'm being serious, I'm not trying to put you down. The above position does not actually exist anywhere, so what are you going for?



> What is a wipestyle

I usually do three sheets, balled up: front-to-back, back-to-front, swirls around the arsehole. I check the sheet; if there's little or nothing on it then I'm done. Otherwise, I take another sheet or two (depending on how much shit was on the first one) and repeat. I've had some shits where my ass is clean after the first pass; most of the time that happens with the second set of TP and wipes, but sometimes (like after taco tuesday) even two runthroughs aren't enough.

I feel like I've developed a really efficient style of wiping over the years. Feel free to steal it, adapt it to your own, whatever. I hereby license it under Apache 2.0.



It may not be running the avx2 instructions.

A lot of well-written software, especially audio/video media players and encoders, has runtime detection.

That is, they will have something like:

struct ResultStruct* _some_func_avx2opt(void *data);
/* This uses inline avx2 instructions or the builtin functions provided by the compiler to emit avx2 */

struct ResultStruct* _some_func_slow(void *data);
/* This is a non-avx2 version of the same function: slower, but doesn't require avx2. */

extern int _has_avx2_support; /* set once by a runtime cpuid test */

/* Pointer the "public" some_func dispatches through */
static struct ResultStruct* (*_some_func_impl)(void *data) = NULL;

/* This is the actual function the code will call, i.e. the "public" some_func */
struct ResultStruct* some_func(void *data)
{
    if (_some_func_impl == NULL) {
        if (_has_avx2_support)
            /* We detected avx2 support at runtime, so point the dispatch
               directly at the avx2-optimized version. Same signature,
               and we don't need to retest. */
            _some_func_impl = _some_func_avx2opt;
        else
            /* No avx2 support, so point at the unoptimized version. */
            _some_func_impl = _some_func_slow;
    }
    /* Call whichever one we picked so this call returns a result;
       future calls go directly to the appropriate function. */
    return _some_func_impl(data);
}


Now of course the real implementation is a lot more complicated than that; that was just pseudocode to give you an example of what is going on. The actual code may have a global table mapping names to functions, and a getFunction("some_func") helper which reads that reference. Usually the actual implementation is super-optimized inline assembly and will adjust the references that way. The idea here is not just to have legal code but to ensure that even if the compiler decides it can optimize by inlining all calls to that function, we still have a way of performing the above actions and choosing based on runtime tests.

Phew, that was a lot of text but I hope it makes sense. I assume you know you can cat /proc/cpuinfo and check features that way.

Here's a one-liner that will give you a human readable answer:

tmp]$ cat /proc/cpuinfo | grep ^flags | grep -q avx2 && echo "Has AVX2 support" || echo "No support for AVX2"

And the result on my system:

Has AVX2 support



Sorry for the late reply btw, it took a long time to scrub all the vomit out of my carpet when I read cmake in your post. Argh. I feel so sorry for you.

Anyway, here's a script that may be useful to you. It will run gcc to go through and detect what is supported by your system and what is not, and output that info in terms of gcc flags.

tmp]$ cat `which query-march-native`


exec gcc -march=native -Q --help=target

tmp]$ query-march-native | grep -i avx2

-mavx2 [enabled]

So check that.

If it says enabled and you still don't believe it, let's try to run an avx2 instruction!

Here's test code:

tmp]$ cat test.c

#include <stdio.h>

#include <stdlib.h>

#include <immintrin.h>

int main(int argc, char* argv[])


__m256i a = {7, 11, 14, 20423 }, b = {1, 99294823, 1232423, 16};

__m256i res;

res = __builtin_ia32_pxor256 (a, b);

printf("Results: { %lld, %lld, %lld, %lld }\n", res[0], res[1], res[2], res[3]);

return 0;


Just XOR'ing two 4-element vectors (256 bits total) and printing the results.

Compile it like so and execute:

tmp]$ gcc test.c -mavx2 -O -Wall

tmp]$ ./a.out

Results: { 6, 99294828, 1232425, 20439 }

See if you get the same results. If you don't, you have avx2 support but it's broken.




Okay, I messed up on the final test program (the one that tries to actually execute an avx2 instruction).

gcc will optimize out the result because it's all constants, and it won't actually generate an avx2 instruction. Even at -O0 (no optimization).

This is fixed in the new version below:

tmp]$ cat test.c

#include <stdio.h>

#include <stdlib.h>

#include <immintrin.h>

int main(int argc, char* argv[])


volatile __m256i a = {7, 11, 14, 20423 }, b = {1, 99294823, 1232423, 16};

volatile __m256i res;

res = __builtin_ia32_pxor256 (a, b);

printf("Results: { %lld, %lld, %lld, %lld }\n", res[0], res[1], res[2], res[3]);

return 0;


Note the addition of "volatile" keyword.

And compile:

tmp]$ gcc test.c -mavx2 -O1 -Wall

You can confirm that it did include the avx2 instruction by inspecting the assembly.

So generate the assembly instead of an executable:

tmp]$ gcc test.c -mavx2 -O1 -S -Wall -o test.s

Open up test.s in vim and search (/) for "vpxor"

You should have several avx instructions around that call, for loading up the vectors. For example, the "main" function looks like this on -O1 compile




leaq 8(%rsp), %r10

.cfi_def_cfa 10, 0

andq $-32, %rsp

pushq -8(%r10)

pushq %rbp

.cfi_escape 0x10,0x6,0x2,0x76,0

movq %rsp, %rbp

pushq %r10

.cfi_escape 0xf,0x3,0x76,0x78,0x6

subq $104, %rsp

vmovdqa .LC0(%rip), %ymm0

vmovdqa %ymm0, -112(%rbp)

vmovdqa .LC1(%rip), %ymm0

vmovdqa %ymm0, -80(%rbp)

vmovdqa -80(%rbp), %ymm1

vmovdqa -112(%rbp), %ymm0

vpxor %ymm1, %ymm0, %ymm0

vmovdqa %ymm0, -48(%rbp)

But hey, we can't really trust that. What if gcc notices that we are outputting the assembly to a .s file rather than an executable? What if it's trying to trick us even still?

Well, no fear, we can disassemble the executable itself!

So compile into an executable "mytest":

gcc test.c -mavx2 -O -Wall -o mytest

And disassemble that son of a bitch!

objdump -d -S mytest > mytest.s

Open up mytest.s in vim and again look for the vpxor instruction:

Disassembly of section .text:

0000000000000540 <main>:

540: 4c 8d 54 24 08 lea 0x8(%rsp),%r10

545: 48 83 e4 e0 and $0xffffffffffffffe0,%rsp

549: 48 8d 3d f8 01 00 00 lea 0x1f8(%rip),%rdi # 748 <_IO_stdin_used+0x8>

550: 31 c0 xor %eax,%eax

552: 41 ff 72 f8 pushq -0x8(%r10)

556: 55 push %rbp

557: 48 89 e5 mov %rsp,%rbp

55a: 41 52 push %r10

55c: 48 83 ec 68 sub $0x68,%rsp

560: c5 fd 6f 05 18 02 00 vmovdqa 0x218(%rip),%ymm0 # 780 <_IO_stdin_used+0x40>

567: 00

568: c5 fd 7f 45 90 vmovdqa %ymm0,-0x70(%rbp)

56d: c5 fd 6f 05 2b 02 00 vmovdqa 0x22b(%rip),%ymm0 # 7a0 <_IO_stdin_used+0x60>

574: 00

575: c5 fd 7f 45 b0 vmovdqa %ymm0,-0x50(%rbp)

57a: c5 fd 6f 4d b0 vmovdqa -0x50(%rbp),%ymm1

57f: c5 fd 6f 45 90 vmovdqa -0x70(%rbp),%ymm0

584: c5 fd ef c1 vpxor %ymm1,%ymm0,%ymm0

588: c5 fd 7f 45 d0 vmovdqa %ymm0,-0x30(%rbp)

yayyyyy!!! Hi five!



I guess this is secure enough, even though the firewall is still software that runs on a processor and probably can be attacked.

While searching I found a solution that is hardware based: it converts copper to fibre in one direction only, so it's physically impossible to send data the other way. But this is an appliance for industrial networks and likely quite pricey.


Also suck it.

Archlinux is the best OS.

Python is the best programming language for most things. C is a close second or necessary in a lot of cases.

Installing ubuntu is like giving your father aids, then giving him cancer, then cutting the tumor out of him (killing him in the process), shaping it like a dildo, and fucking yourself in the ass with your dead dad's AIDS cancer dick.

Change my mind.



You can and SHOULD buy a router, and place it in front of eth2 (or whatever the next free ethernet slot is on the "gateway host").

This router is configured to only allow connections from the internal network to the public network, not the other way around. So basically the router acts as a hardware (99.999999% unhackable) firewall by only routing packets FROM internal network TO destination port on gateway host.

This connection should probably be its own VLAN too. So network traffic can trunk from other VLANs onto your new, special gateway VLAN, but traffic from the gateway VLAN cannot trunk out to other VLANs.

Make sense? This will cost you like $50 to implement.



This setup will literally make it impossible, even if the gateway host is 100% compromised, to reach the internal network. The "gateway vlan" doesn't allow packets to leave it for other vlans (meaning even though it's plugged in the packets won't flow). It only allows packets from other vlans (on internal network) to cross over to gateway vlan and connect to gateway host.


File: 5238c4c94c0060c⋯.jpg (70.68 KB, 583x472, 583:472, ok6.jpg)


Sounds good. I'll pitch this to our IT.



Yup yup, enjoy.

If you're going to have to defend it, make sure you understand trunking, vlans, routers, etc. well enough to talk through it. Probably wikipedia is enough to refresh your memory or give you the basic talking points about it.

I know I've worked at places that hired "Engineers" who didn't know jack shit about anything, but had to put their rubber stamp on any infrastructure or network related task. So if you have the same kind of issue, it's best to be able to answer their questions right then and there rather than "uhhhh, I'll have to look that up and get back to you..."

Just some people-skills advice. Those who don't work IT don't have a clue about just how much of our job is convincing some idiot with too much power to Do The Right Thing (tm).

Best of luck to you, buddy!



Oh, and come prepared with a picture.



I'll draw you a simple one, standby..



No worries. I already screencapped everything.



I'm forever in your debt, network-sama


Is there a firewall on Linux that lets you do what the Windows firewall lets you do? Block all by default and manually whitelist individual executables? I've tried Gufw but apparently that stupid thing can only block ports, like it's from 1980.


File: 3354fc502f46403⋯.png (175.23 KB, 800x600, 4:3, network.png)




I think I spent way too long on this, now I'm not going to get enough sleep and my eyes hurt.

I think I drew everything correctly, but please go over it yourself and verify that I didn't make any stupid mistakes.

The red lines represent your current "Private Network" and the blue lines represent your current "Public Network."

The green lines represent the proposed "Gateway Network". Note that the arrows on the green lines are unidirectional whereas everything else is bidirectional.

Best of luck and be sure to let me know how it turns out!!


File: 92536d8b51e9087⋯.jpg (62.69 KB, 300x401, 300:401, adolphdubs.jpg)


Your hard work was rewarded by dubs. Thanks and good job.





Let's just allow confidential information to be uploaded from the private network to the public network. What could go wrong?

As if they would allow that to go through. If they didn't care about that happening they wouldn't have put those computers on a separate network in the first place.



One question. Can the gateway server be a virtual server or should it be a machine on its own?



The information that will be uploaded is pgp encrypted. The goal is to keep people away from unencrypted data.



They are already transferring files via manual USB.

Maybe you aren't understanding, but the new gateway VLAN is limited to just private -> public direction on a single port -- like sftp.

So if a hacker gained complete control over the gateway server, they can't reach back into the private network.

The only way for a private server to be compromised is via insider threat. And even still, the worst they can do is copy files to the public that maybe shouldn't be copied. But an insider threat already has access to these private files, using the gateway hop path is just leaving a paper trail that you stole shit. The insider threat if they were smart would just copy to a USB drive and walk out with it. But they can do that whether this new VLAN is created or not -- so there's literally no loss of security anywhere.



>They are already transferring files via manual USB.

He never said if the IT guys were actually okay with him using USB drives to transfer the information.



So the gateway server should have a dedicated cable for this VLAN, rather than the other option (a virtual IP added to an existing interface). In fact, I don't even think that would be possible, because it needs to connect directly to the new Gateway router.

So because it needs a hard connection via CAT-5 cable, it needs to be a physical machine. Technically you could have a virtual machine running on that physical machine with the connection and write a whole bunch of rules to do the forwarding etc., but it's not a good idea. It's way more difficult, complicated, and fragile (meaning there are many more places and chances for something to go wrong). And since you need the physical machine to host that virtual machine anyway, there's really no advantage to using a virtual machine, only disadvantages.

That said, if you are worried about isolating the gateway host itself for some reason, I can't think of any good reasons but just in case, you could run the FTP server or whatever within a container (like docker). This is basically a "virtual machine" (well, a fancy self-contained chroot jail) but without all the complicated configuration and hazards.

I really would just recommend having a physical machine there with no virtual anything. Depending on the amount of traffic it might be useful to have a RAID-1 or a RAID-5 on this system to store the transferred files in a highly-available fashion (as far as the disk is concerned), but it sounds to me like it's just a "copy this file onto the public network and use it immediately" kind of thing rather than a new data store.

So just throw a standard physical machine there with 2 ethernet ports. I have 2 cards listed in the diagram but it could be two ports on the same card, that makes no difference.




Also I didn't put it in the diagram, but if you want to make this a highly-available service (HA) then you can do so with a little modification.

You can stand up 2 "Gateway Hosts" each with a connection to the "Gateway" router.

Then, have a virtual ip on that VLAN which represents the "endpoint." Hosts on the private network would connect to this virtual ip, and if the gateway server that is holding that virtual ip goes down, the other one can pick it up. You can do this automatically with things like "ucarp." This can get tricky though: you will have to flood the network with ARP packets after the switch, to prevent the cached route from the previous owner of the virtual IP remaining in the table and thus dropping traffic.

An even simpler way though that I would recommend would be to use DNS instead of a virtual ip.

So have 2 gateway servers connected to the public internet and the gateway vlan.

Say gateway server 1 has ip and gateway server 2 has ip

Have your internal DNS then contain a record, like "gateway.mycompany.com" which points to the 'primary' server. If that primary server fails, you can switch the DNS to point to the other one.

The point of doing one of these is so that all the scripts and commands that folks are using to transfer to the gateway host can remain static and agnostic to the fact of which gateway host is active and primary.



It's already being done with a USB drive; OP is trying to make everyone's life easier. There's no guarantee he will win if he's not IT (some folks get really mad if they didn't think of something first, etc.)

This is why I'm suggesting that he learn what I'm talking about, think of potential questions and have the answers ready, and use a visual aid (which I provided).

Or maybe I'm not sure why you're commenting. Do you have a different idea or are you just trying to derail the discussion between me and OP "for da lulz?"



That's what I thought. The machine basically does nothing so it can be a weak machine.


Thank god high availability is not a requirement.



I hope your screen grab included my comment here: >>943066

That's a very work-appropriate thing and will ensure great victory for the proletariat!



> Thank god high availability is not a requirement

Yes, I figured based on your explanations of the situation. The DNS HA method I described is actually not all that difficult; you just switch the address of an entry in your internal DNS. Since this seems to be a "manual one-off copies" kind of thing, it doesn't need to be automatic; you can just have a manual process to switch it over.
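Concretely, the failover is just editing one record in the internal zone. Hypothetical names and addresses throughout; the short TTL is there so clients notice the change quickly:

```
; internal zone -- "gateway" currently points at the primary gateway host
gateway.mycompany.com.    60    IN    A    10.0.5.11

; on failure, repoint the record at the secondary and bump the SOA serial:
; gateway.mycompany.com.  60    IN    A    10.0.5.12
```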

I bring it up because it's a good thing to have ready to discuss. It shows that you've thought of the future and should you ever choose to go HA this approach already supports it so there's no extra engineering involved. Managers love that kind of thing.

Also, I assume you looked over the graphic I made you here: >>943088

and you understand / agree with everything that's going on?


Ugh I need to go to bed. I'm seeing in triplicate and everything is "bouncey" :/


Something a lot of people don't get, even after working in the field for a long time:

Your job is to SUPPORT THE CUSTOMER. Their work is the goal and the reason for existence of the company.

Your job is not to whine and complain, or tell these customers how to do their job. Your job is to gather their requirements and meet them using best practices. Sometimes they have a requirement that is so backwards and bad that you need to be able to work with them to get them to understand, and be able to offer alternatives which accomplish the same goal but with a sane approach.



Put another way, your goal is to allow them to do their job effectively.

If this means you need to break a rule or two in the long list of "proper ways to do things," then so be it.

You aren't managing your personal network. Your personal desires and preferences are irrelevant; what matters is that the scientists / analysts / developers / accountants / whatever are able to do their job efficiently and securely.



>le gustomer is always right :DDD



Also, Archlinux. *drops the mic*

( •_•)

( •_•)>⌐■-■




>endless arch linux shilling

I will never consider recommending arch to anyone ever again, because arch turns people into insufferable shills

suck more systemdicks, faggot



I assume you're not familiar with the "shop talk" of IT.

"The Customer" isn't the dude in the store buying your product. It's the person drafting the requirements.

Say, for example, that you are writing engineering software. Something that reads input off a USB sensor, collects it in a database, runs analytics on it and generates graphs. The "Customer" in this case isn't someone in best buy purchasing your software. The Customer is going to be your internal engineer with domain knowledge who provides the formulas and determines what transformations need to occur.

Or, in a science environment like I had worked. I managed the systems and wrote a lot of software. "The Customer" in this case isn't people with diseases or doctors in hospitals, they are the biology scientists who work in R&D branch.

Got it?



>shop talk

you mean, your own personal vocabulary you use when you get pushed into a corner by people on the internet

never heard that used in any of the IT jobs I was at

>the customer is actually your own employees lol

>everyone will believe me XDDD



You would never consider recommending arch to anyone in the first place, so what's the difference?

You don't have the motivation and/or skills and/or experience to actually understand how your system is working and what pieces do what.

You act like it's so hard, with some infographic about stacking 500 encyclopedias or whatever; it's just so far off.

I install arch on a new system. There is no graphical installer. So I pop in the live cd, create a partition table on the drive, format the drive with reiserfs, and mount it. I bind /proc /sys and /dev inside this mount (actually there is a tool that comes with the live cd, arch-setup or something like that which does these steps. I prefer to do them manually though).

After that, I run pacman --root=/mnt -S base grub firefox gnome gnome-extra vim python python2 python-virtualenv python2-virtualenv nvidia-dkms nvidia-utils gcc alsa-utils automake autoconf

"base" is basically the full bare-bones install, and then I install gnome, nvidia driver, and a few other things off the top of my head. X11 and all that are in the deps somewhere therein.

(I have mounted / , /boot, and bind mounted /dev /proc and /sys to the /mnt dir )

I make a swap partition with "mkswap" and then "swapon" it.

After this, I personally write my own /etc/fstab but you can run genfstab -U /mnt >> /mnt/etc/fstab to generate one for you.

Optionally sync clock:

ntpdate -u pool.ntp.org; hwclock --systohc

Run `locale-gen' to generate locale (again, optional. Default is C locale)

I put my hostname in /etc/hostname and in /etc/hosts

I run mkinitcpio -p to generate my initramfs

I run "passwd" to set my root user password

I run "grub-install" to setup grub.

Reboot and I have a working OS.

Note: I only had to "manually configure" 2 things, and both were 2 parts of setting my frickin hostname.

It's not hard, you just have to understand what's going on. In fact it's the simplest and easiest linux to use. Notice how I just typed up a full fuckin install of archlinux. No crazy config edits. No trying to debug strange error messages, everything is auto-detect.

After I reboot maybe I enable "gdm" via systemctl (if I were a fag), but I personally create my ~/.xinitrc to execute /usr/bin/gnome-session . I also hook other things in there. I don't like my system going straight to graphical; I prefer a terminal. So because I'm a fag like that, there's literally 3 lines total I have to add to do a full arch linux install.

But yeah, keep making excuses about why it's too complicated and shit. It's the sunk cost fallacy: you've invested so much time in bad distros, and if you switch you'll lose your investment, so you just keep enduring the bad.

That's your problem, not mine buddy. I hope you can stop being a pussy and get yourself on a real mans system (Arch Linux). But I understand if you'd rather do >>>/soyboys/



> you mean, your own personal vocabulary

No, it's actually very standard terminology. You would know this if you weren't just some kid in a basement somewhere LARPing as a computer expert.

I'm not your enemy and I don't want to be. I'm not sure why you're so obsessed with being wrong and trying to make me look like anything less than awesome because you're wrong. It doesn't make sense to me bud.

But I'll be the bigger man here and hold out the olive branch. How about you stop flailing around whilst flaunting your ignorance and try to learn a bit? How's that sound?



Here, I pulled up a definition for you from the definitions section of a project-manager targeted training course.

Customer

See Product Owner.

And from the definition of "Product Owner":

Product Owner

The Product Owner is the primary author of the user stories, owns the product backlog, prioritizes the stories, writes acceptance tests and accepts work completed by the team. On some teams, the product owner may act as a liaison between stakeholders and the team, communicating the vision of the product while ensuring the team stays on track to meet the customer’s goals. On other teams, typically involving larger projects, this role may be played by a Product Manager. In Extreme Programming, the preference is for the On Site Customer to be an actual customer, not a stand-in.



Note the last sentence:

> In Extreme Programming, the preference is for the On Site Customer to be an actual customer, not a stand-in.

Which again shows that Customer means the person setting the requirements / has the relevant domain knowledge. "Customer" meaning the end user with cash-in-hand is the exception, the alternate and corner-case (rarely used) definition.

Really, I don't understand why you want to keep arguing with me. I've done nothing but provide top-notch support for a bit of time here now. Scroll up in the thread.. I'm providing far more support (and actually correct support) to everyone.

Yet you want to fight me online because.... why? Did I steal your thread or something?

I don't want to be your enemy. But honestly spending so much of my time explaining why your "NUH UH! UR WRONG" statements are mistaken is getting tiresome.

Maybe try learning / researching your points before always trying to make me seem wrong. If I am wrong, I will admit it. I was wrong once in the past 10 days or so here, where I mistook Tor's DHT connection as UDP instead of TCP... but come on, everyone makes mistakes.

Why do you choose to fight against all my support anyway? Is this some sort of trolling on your part? Because if so, I don't think it's very good.

Did I upset you?

Why won't you just use Archlinux? Is it because you didn't think of it first so it must be wrong?



>quintuple posting

>reddit spacing

Go be a nigger somewhere else.

>implying I can't install arch

I have installed gentoo tens of times, dumbass. Arch is even easier because if you fuck something up it takes less time to restart from the beginning. But it's harder to actually use because systemdick interferes with my autismically high standards.



< I separate out my paragraphs on long posts to make things readable

> OMG! YOU HAVE A BLANK LINE! HAHAHA THAT'S LIKE REDDIT! No, I don't know what "markdown" is. What's that?

You're really reaching, still flailing around trying to win..... something.

> I know you've proved me wrong on literally everything I ever said, but hahaha look at you so dumb with your readable lack of walls of text

Grow up, kid.

> Bitching about systemd for some reason

Why? I don't think I've even had to touch systemd on my system in months. It's not like it comes up for any reason other than "I want postgresql to start on boot now. Oh noooo!!! I have to type `systemctl enable postgresql' so fucking hard I'm lost!!!

Again, why have you chosen to make me into an enemy?

I'm here to help, and I seem to be doing a pretty damn good job.

You're here to... waste time and confuse people who are actually trying to learn? Real noble pursuit, bro.


Whatever bro. Learn or don't learn; you're the only one losing here. But the thing is... there's not even a competition. I'm really lost as to what you're even trying to win. Most Ignorant Anon on Fullchan award? Do they give out digital trophies for that?


Dumb niggers... amirite?


File: bfd8edb075cf685⋯.png (478.12 KB, 1079x767, 83:59, bfd8edb075cf6855f0d00cab7c….png)



8chan doesn't use markdown. Therefore, you using it is retarded.

>doubleposting again

gay tbh



No. Markdown is a standard. Reddit happens to use it. But you didn't know what markdown was until 5 seconds ago; "reddit spacing" is all it is in your mind.

I have about 80-some projects on github. Github uses markdown [README.md] as the format standard (thankfully it doesn't use the abomination known as ReStructured Text [README.rst]).

So why not call it "github text?" Reddit didn't invent markdown, it's been around much longer than reddit. Oh yeah, I know why. Ignorance.

So... you have no point to your discussion, nothing to add technically to anything, no willingness to learn from those who do or on your own time. I see. So why are you here again?

Eh whatever. I'm going to bed. I don't understand why you waste your days sitting on a support forum neither asking for support nor providing any, but rather providing anti-support by stupidly challenging answers to topics in which you are clueless.

But hey, if it's working for you, more power to you.


File: 4a1d70f1ae57aef⋯.png (383.36 KB, 803x767, 803:767, 4a1d70f1ae57aefefc24223706….png)


Yeah, it's a standard that's NOT USED HERE.

You insisting to bring along foreign crap here is a sign of your unwillingness to assimilate.



It's not bad actually. It's a little better than markdown because it's easier to read and write with a text editor as opposed to using some retarded MD->HTML converter or markdown renderer.

Also, it's used in the Python "community" for documentation. I thought you were some rabid python shill as well.

>I'm going to bed

Hopefully you don't wake up again nigger.


I'm trying to compile rav1e but the build script craps out at

/home/benis/rav1e/target/release/build/rav1e-1bb1cec297375cc7/out/include/aom/./aom_integer.h:15:10: fatal error: 'stddef.h' file not found

stddef.h is present in its respective GCC directory on my root partition.

Wat do?



It's not used here, you're right. And I'm not using it here. I just hold a higher value toward being readable and understood than toward potentially being seen as a non-purist.

I mean seriously, how dumb can you get? You have literally nothing of value to contribute here so you fall back to "OMG U SO GAY CUZ U HAVE THINGS TO SAY AND MAKE THEM READABLE."

Literally, there is no part of markdown that I have tried to use, at all. I am just spacing things like a sane person. But I guess we all have to type illegibly because Reddit uses markdown, which (with the exception of preformatted sections, i.e. consistent whitespace at the start of a line) treats 2 consecutive newlines as a paragraph break and disregards single newlines.

Do you drive down the highway and yell at the road signs "OMG REDDIT SPACING! THERE'S A BLANK LINE BETWEEN THE WORDS!!"

When you get a letter in the mail that has paragraphs in them do you throw it immediately away because "OMG REDDIT SPACING!!!!"

Or how about man pages? Not that you've ever looked at them, but if you type "man [section] page" it opens the manual page for that topic. But what the fuck? There are empty lines between all the paragraphs! Fucking Reddit spacing! Damn, UNIX was using Reddit spacing 40 years before Reddit came out. Dumb fags.

Here's something to consider: 2 consecutive newlines on image boards is treated as a blank line. So it's not really "Reddit spacing" it's... a new line.

Nigga please. Go suck a dick. I've been using imageboards since you were in diapers. You can't even bitch at people about reddit spacing properly. If I was actually using reddit spacing

Then every new line

would look

like this

but they don't. Just new paragraphs start with a blank line in between this. You learn this in 1st grade when they teach you how to write in paragraphs. I understand that you probably got held back a few times and kicked out of kindergarten when the state gave up on you for failing 11 times in a row.

Also restructured text is total shit. It doesn't make sense at all, is horrible to write, misformats, and is interpreted differently by different renderers.

Yes, pypi uses RST as a default. I write all my readmes in markdown and use a program "mdToRst" to convert the markdown (a sane format) into RST (an insane, shitty format).

Grow a pair niggerfag. Why are you fighting again? You're not worth my time.



Post the relevant portion of that source code file. Also try modifying the CFLAGS of the build script to include -I<directory which contains stddef.h> as a quick and dirty fix.


I have an AMD R7 250 graphics card. I've always used the radeon driver; would I benefit from using the amdgpu driver?



So there could be a few issues. stddef.h is a header provided by linux.

The line should probably read like:

#include <linux/stddef.h>

But maybe those dumb fags just wrote:

#include <stddef.h>

If it is the first way (linux/stddef.h), ensure you have a symlink or a folder (depending on distro) at /usr/include/linux (if a symlink, this will point to a linux source tree. Redhat used to do this and probably still does. Arch includes the actual source there.)

If instead it just stupidly has #include <stddef.h> and you do have a /usr/include/linux/stddef.h, Then add to your CFLAGS/CXXFLAGS like so:

export CFLAGS="${CFLAGS} -I/usr/include/linux"

export CXXFLAGS="${CXXFLAGS} -I/usr/include/linux"

Your issue may also be that you don't have kernel source package installed for your distro. If that's the case, install it.

Let me know if none of these fix it and I can look into it more in a few hours.



On my system, /usr/include/linux/stddef.h is very different and has a different effect from /usr/include/stddef.h



The AMDGPU driver is better from a /tech/ perspective because it's free software. If your card supports it, you should use it. However, if there is a performance drop (which is likely) and you care more about performance than freedom, you should switch back to the radeon driver.


Also, you're retarded because stddef.h is a C standard (and POSIX standard) header and not a Linux-specific header. Any system which does not have it in the standard include path is a standards-non-compliant system, and you should fix it yourself.



/usr/include/stddef.h is a really, really, really old UNIX thing. Back before the C standard included "NULL" as a built-in, that's the file that used to define it, along with other things like the size_t definition.

It was a transitional thing as the C language became more standardized. So basically you could include stddef.h, and if your compiler didn't support the C standard from 1989 or so, it would define things like NULL for you. If your compiler did support C89 then it would act as a no-op.

Since this is 30 year old shit it has no real meaning at all on a modern system.

OP can really just create /usr/include/stddef.h as an empty file (`touch /usr/include/stddef.h`) and get away with it, because it defines nothing these days.

On my system, it defines always_inline again, as a support thing for compilers that are 10-15 years old and don't include such a thing.

Here's my linux/stddef.h:

$ cat /usr/include/linux/stddef.h
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#ifndef __always_inline
#define __always_inline inline
#endif




> More ignorance

Oh my god. Read >>943171. It's a 30-year-old compatibility thing from back before C had NULL as part of the standard, so the same code would work on the old compilers and the new compilers alike.

It serves little to no purpose in the modern world.

Linux provides it again to support really old compilers, but it no longer needs to define NULL or size_t (because 35-year-old C compilers won't work at all anymore); it just defines the __always_inline attribute.



I bet you didn't even know that NULL wasn't always part of the standard?

I bet you also think that NULL is always the same as (void *)0. In fact, if you're interested in learning (I know you're not, but others may be), many systems had differing values for NULL. Some of them would have NULL be 65534, for example.

This created issues with code compatibility across systems, so the C standard 30+ years ago came up with the concept of NULL. In the C standard, (void *)0 and NULL are treated as the null pointer constant even if the underlying system doesn't use 0 to represent NULL. Because it's part of the standard though, you can use NULL and it's up to the compiler to assign it the platform-specific value if it differs.

Again, NULL hasn't had a non-zero value since "mainframes" were a thing. But a little history lesson for ya. Learning is fun, no?





File: 101466f9f4132e3⋯.png (357.6 KB, 796x597, 4:3, 101466f9f4132e392d53750c2b….png)

File: 77b0c58f1386d45⋯.png (98.93 KB, 368x225, 368:225, shit.png)



>So much ignorance

>literally can't believe it

>actually shaking right now

>acting like a whiny little bitch liberal

On my system, the following data types: ptrdiff_t, size_t, wchar_t, max_align_t; are all not available unless /usr/include/stddef.h is included. Either this occurs directly from the program source code, or (much more commonly) through the inclusion of another header which includes it.

Although it is probably not necessary on most systems to include it explicitly from the program source code, it still definitely serves a purpose.







What kind of shitty ass distro or compiler are you using?

$ cat test.c
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
	size_t xvar;
	wchar_t wvar = 5;

	xvar = sizeof(size_t);
	printf("xvar is %lu and wvar is %hd\n", xvar, wvar);
	return 0;
}
$ gcc test.c -Wall
$ ./a.out
xvar is 8 and wvar is 5



I thought the radeon driver was open source (doesn't it use the MIT license?) -- obviously that doesn't make it libre; but I doubt the amdgpu driver is GPL anyways.

Also, it seems like amdgpu performs better and has Vulkan support, so I'll try it. Support for Southern Islands (my card) is experimental though... Does anyone here have a Southern Islands/Sea Islands card and use the amdgpu driver?



Modern gcc doesn't even support anything prior to c89. There's literally no reason at all to have it for almost all circumstances. Linux provides its own but as you've noticed, it's not "the same stddef.h" it's just the concept of supporting old crappy compilers.


So I got this business card from someone who's supposed to give me a working gig.

The only contact info is:

Tel.: 015117873883

What fucking formatting is this? How can I even reach that person?



You'll find that the inclusion of either of the headers stdio.h or stdlib.h results in the indirect inclusion of stddef.h.

As I said, it's rarely directly useful, but it's still used.



I should add that I edited the last digits for obv privacy reasons.



I have a Sea Islands card and it werks breddy nicely, the DC driver stack tends to shit itself every once in a while but it can be turned off and Southern Islands don't support it anyhow.


And anyway, ignoring the fact that stddef.h is meaningless these days, it IS provided by gcc in the /usr/lib/$arch-$platform-$os/$gcc_version/include directory.

For example:


This is part of the standard include path for gcc though. So having a /usr/include/stddef.h is really stupid. You know why? Because it locks you into one fucking compiler!

The definitions are specific to the compiler you are using. clang provides its own stddef.h, gcc provides its own, bcc (Bruce's C Compiler) provides its own, etc.

How come your distro is so shitty that the people making the gcc package didn't understand this? Why did they just say "durrr lemme copy this file here so that anything that uses it will be locked into whatever compiler I picked! Dats ssmart! Yayyyyyyy, I get cheerios now!"

Seriously, the file is there so that when C standards change you have a fallback to remain compatible with old compilers. The standard isn't changing much at all these days.

But because your OS (what? Alpine linux?) is maintained by a bunch of dumbniggers who don't understand gcc's default search path they've fucked the ability for that file to actually be useful should there come a time that the C language changes enough that you have to use said compatibility file.

Why won't you just switch to Arch again?



> It's already implicitly included

So you admit now that including stddef.h is frickin useless? Cool. Glad we're on the same page.

Do you have any plans to actually learn C? Are you involved in any open source projects? Do you love ruby on rails?



>Southern Islands don't support it anyhow

Really? On the Gentoo wiki (https://wiki.gentoo.org/wiki/AMDGPU) it claims that my card is supported (albeit "experimental" support). Won't know until I try it, of course. Hopefully the RX 560 will come down to a sane price so I can finally upgrade (I don't like cards that have a TDP <75).


File: 166f90068e989f0⋯.gif (275.54 KB, 963x900, 107:100, 166f90068e989f0684774f51f5….gif)


>ignoring the fact that stddef.h is meaningless these days

>I was proven wrong but I'm going to pretend I'm still right



It is meaningless. I was actually providing support for that dude, who probably doesn't have the kernel source installed (I'm pretty sure it was a driver but I didn't look at it too hard).

There's no reason to include it in your code, it's already defined in the compiler itself as a builtin, or included with stdlib, or whatever. The point is it's hidden from the source unit.

Properly written portable apps don't rely on it, if they need the type they define it themselves.

For example, within H5:

/* Define the ssize_t type if it not is defined */
#if H5_SIZEOF_SSIZE_T==0
/* Undefine this size, we will re-define it in one of the sections below */
#undef H5_SIZEOF_SSIZE_T
#if H5_SIZEOF_SIZE_T==H5_SIZEOF_INT
typedef int ssize_t;
#elif H5_SIZEOF_SIZE_T==H5_SIZEOF_LONG
typedef long ssize_t;
#elif H5_SIZEOF_SIZE_T==H5_SIZEOF_LONG_LONG
typedef long long ssize_t;
#else /* Can't find matching type for ssize_t */
# error "nothing appropriate for ssize_t"
#endif
#endif



from H5public.h

It's shipped as part of the compiler and varies compiler to compiler, and between operating systems. Like I said, 30 years ago it was useful because compilers were changing. It may happen again with certain things; my linux/stddef.h example showed a means of ensuring an attribute exists.

And it's like, yeah, some old-ass apps or apps that are doing it wrong may include the original. A ton of that shit has moved to builtins though. And there's overlap between the definitions of these types. Read the comments below: stdio.h could be defining NULL for all you know!

#if defined (_STDDEF_H) || defined (__need_NULL)
#undef NULL		/* in case <stdio.h> has defined it. */
#ifdef __GNUG__
#define NULL __null
#else   /* G++ */
#ifndef __cplusplus
#define NULL ((void *)0)
#else   /* C++ */
#define NULL 0
#endif  /* C++ */
#endif  /* G++ */
#endif	/* NULL not defined and <stddef.h> or need NULL. */
#undef	__need_NULL

But look, I'm done arguing. It was fun for a little bit but now it's gotten really pathetic. I concede defeat to you, my liege. You win all the internets today.



btw, if you can't tell, that H5 macro block is defining ssize_t based on the results of a test from configure. It doesn't even try to rely on there being this magical "stddef.h" which is correct for all systems on all usages.

Anyway, peace.

-- The guy fucking your mom


File: 33d93ca668330b6⋯.jpg (108.21 KB, 500x500, 1:1, 33d93ca668330b6992b4630cab….jpg)


>Properly written portable apps don't rely on it.

The purpose of headers is to allow you to get datatypes which may be different across different systems e.g. on 32-bit vs 64-bit systems, size_t has a different width.

>if they need the type they define it themselves.

If they need the type they include stddef.h, you moron.

Defining system- and architecture-dependent types by yourself, especially when there is a C-standard and POSIX-standard header to do so, is utterly retarded and the very opposite of portable. Any distribution which does not include stddef.h is simply not compliant with widely accepted standards, and anyone who wishes to use such a system should provide stddef.h himself or link it to a standard location in the compiler's include path.



It's a regular telephone number


01: Country | USA

511: Area | not an area code — used as a local information number for transportation and road conditions, and/or local police non-emergency services

787: Prefix

3883: Line Number



I think he is trying to troll people into calling that number and (((the police))) picking up. At least on analog phones dialing "911" as any part of the number will call the real 911


Hello. Literal retard here, is there any free proxy that can post on cuckchan?



You remind me of the 15 year old faggot who recently came from cuckchan to our /b/ and then /v/ and refused to try and fit in. Your 'answers' are shit.



Usually not, but you can try looking for a free SOCKS5/SSL proxy online and try it with your browser.

If you want to evade a ban, try 4G internet with a dynamic IP; if you want anonymity, pay for a proxy and use additional layers of security: public WiFi/Tor/VPN.


File: 23bbefe5bfcd729⋯.jpg (310.63 KB, 1388x2462, 694:1231, dad_rock.jpg)


I was referring to the optional DC driver stack not AMDGPU itself.


Apparently it wants to compile using clang instead of gcc for whatever reason; installing that fixed it.

Why are there so many C compilers?

Which is the least shit?



So, I just want to thank you one last time.

I got my computer to crash again, but I was logging my core temps using sensors -f.

You were completely right, it is overheating and dying out.

I don't know how you got that from my kernel logs but again, thank you.



np. There's a package out there you may or may not know about -- gkrellm. It's kind of freaky, it's like a portable "dock" of information. You can have CPU usage by core, disk activity, etc.

But most important to your situation -- you can set up widgets for fan speed and temperature. Then it's on your screen at all times.

See if your fan is running slow or very variable; it may be clogged with dust, a bearing may have gone bad requiring a replacement, etc.

Also it's a good way to monitor your CPU temps (and speed).

Again, after you've cleaned your fan out, look into enabling frequency scaling. "cpupower" is the userspace control for this.

Keeping your clock speed low with the ondemand governor can make all the difference.

[Return][Go to top][Catalog][Nerve Center][Cancer][Post a Reply]
Delete Post [ ]
[ / / / / / / / / / / / / / ] [ dir / agatha / ausneets / fascist / hikki / miku / sonyeon / tot / vichan ]