/tech/ - Technology


File: 740fc8f7d1844ca⋯.jpg (168.32 KB, 1280x960, 4:3, 740fc8f7d1844ca7957939d73d….jpg)


/tech/ has always been known for its in-depth browser threads, but how often do we really look under the hood and discuss the rendering engines that run these browsers and do most of the actual heavy lifting of displaying a webpage? Right now there are three main browser engines:

WebKit: Made by Apple, originally forked from KDE's KHTML. Written in C++. Used by Safari. Has an official GTK+ binding for ease of use by pajeets, as easy as GTK+ can be, anyway. Also has external projects making bindings in many other languages.

Blink: Made by Google. Originally forked from WebKit. Written in C++. Used by Google Chrome. As far as I am aware, there are no bindings for it. If you want to roll your own browser with it, you'd better know what you're doing.

Gecko: Made by Mozilla. Infected with Rust. Used by Firefox. It used to be good before Mozilla went full retard. Like blink, there are no bindings for Gecko.

Other engines of note:

NetSurf: Originally built for RISC OS. Programmed in ANSI C. Doesn't have HTML 5 support, and is just starting to get Javascript support.

Goanna: Forked from Gecko. Used by Pale Moon. Like Gecko used to be before Quantum and the WebExtensions bullshit.

Servo: Mozilla's experimental engine. Extremely infected with Rust.


How about Opera's old Presto engine? And Trident?

Anyway, does it really matter? I doubt any of us can even differentiate between those engines.



Those aren't being actively developed AFAIK. I believe the ones mentioned in OP are the only active ones.



Some engines are easier to develop for than others, and some are made intentionally retarded, e.g. Blink.


File: 82e67143c3dbf98⋯.png (395.92 KB, 3485x2580, 697:516, 3485px-Timeline_of_web_bro….png)


>you haven't written a libre voice-command browser with a fiery tsundere personality named Browsette

Why are you a faggot?



Webkit seems like the only one that wants people to use it outside of its parent project.


>Made by Mozilla. Infected with Rust. Used by Firefox. It used to be good before Mozilla went full retard.

firefox quantum is the reason why I haven't switched to chromium yet. firefox was such a slow piece of shit pre-quantum that any improvements that paint over mozilla's past mistakes are welcome.



>Mozilla has become infested with SJW

>Made their CEO resign for not being one

>Makes a literal garbage of a browser

>Muh! Quantum made me stick around! It's not that bad! I swear!

Here's a site better suited for you:






>Mozilla has become infested with SJW

>Made their CEO resign for not being one

not related to tech

>Makes a literal garbage of a browser

elaborate more

>Muh! Quantum made me stick around! It's not that bad! I swear!

yes this is true. I was literally going to switch to chromium and block all google domains with a hosts file as a temporary solution


literally a decade old thread


>muh webextensions deprecated xul so lets all switch to chrome because somehow chrome webextensions isn't bad???


>It’s really sad we now have to download an add-on that allows Stop and Refresh to show simultaneously. We’ve all been to those sites that keep loading and loading, and so you spam the Stop button, now you hit it one extra time and BAM!, we’re starting all over again. Firefox has become sh*t.

Maybe he shouldn't spam the stop button like an autistic monkey and just click it once?



>literally a decade old thread

It's a few days short of a year old...



Maybe you shouldn't disregard things that you don't like, turning a blind eye to them.

The biggest problem remains: many pages render wrong in Firefox and it still has massive memory leaks.



It was an exaggeration.


>Maybe you shouldn't disregard things that you don't like, turning a blind eye to them.

The only XUL extensions I care about were ported to webextensions so I'm not turning a blind eye to anything actually.

>The biggest problem remains: many pages render wrong with Firefox

Firefox can't render webpages in the exact same way Chromium does without forking it entirely, so it's up to webdevs to recognize Firefox as a legitimate browser too and optimize their code for it.

>and it still has massive memory leak.

I don't know. Firefox never used more than half a gb of ram on my machine. Though admittedly I turned javascript off, maybe that's why.



>Firefox can't render webpages in the exact same way Chromium does without forking it entirely, so it's up to webdevs to recognize Firefox as a legitimate browser too and optimize their code for it.

The opposite is happening right now.

>I don't know. Firefox never used more than half a gb of ram on my machine. Though admittedly I turned javascript off, maybe that's why.

JS is one thing, but the other is how badly it manages its instances.


I don't think there is much to discuss about browser engines. They all suck because they implement a batshit insane set of standards that nobody adheres to anyway. You can get a modicum of sanity back if you forsake some of the more idiotic parts like Javascript, but the way NetSurf is going shows that people (think they) want that and CSS still has plenty of dirt.


The Presto source code was leaked at some point, but the retard who leaked it did so on fucking Github of all places. Not sure if backups exist.



Firefox has always chewed through RAM. I've been running it as a daily browser since Firebird 0.6. It has been a constant problem and the only reason people have put up with it over the last two decades is the fact that it was the only browser with decent ad-blocking for the longest time.



>It was an exaggeration

<I was only pretending to be retarded

Maybe don't use the word "literally" out of context, then.



Firefox used to be good, before the piece of shit that is Quantum. Previously, I never noticed any lag or slowness. Now TBB will regularly hang for several seconds for no reason. It really isn't surprising that so many people jumped off the firefox ship after quantum. What is surprising is how mozilla shills continue to deny this. I thought they'd mostly switched over to the "oh we need to use firefox because muh web freedom" tactics. Here I see they're still trying to push the performance angle, despite everyone having seen it to be untrue.


>not sure if backups exist

I have a copy lying around if someone wants it.



>muh addons

Everything relevant/basic was ported to quantum. I wouldn't even use more than uBlock Origin on a generic browser profile, and that was available on day one. The only bad thing about Firefox is telemetry and default settings, all of which can be removed in a minute of tweaking. Firefox is the ONLY browser which has a goal of letting users gain web privacy. Nothing on WebKit/Blink can even compete.



>Firefox pro-privacy

The absolute nerve of this guy.



It's the only browser which makes it trivial to make yourself anonymous, because Mozilla works closely with the Tor Browser devs. So yes, Firefox is the only genuinely pro-privacy browser.



>Everything relevant/basic was ported to quantum

traxium's tab tree is still XUL-only. All the new-style tab trees are crap: they use the sidebar API, so you can't have your bookmarks open at the same time. And even basic configuration like hiding the tabs on top requires delving into obscure configs. Not a good UX.


>Firefox is the only genuinely pro-privacy browser.

If firefox was genuinely pro-privacy, then the TBB config would be the default. What you mean to say is that firefox is the least bad of the major browsers, which isn't a high bar.



>It's the only browser which makes it trivial to make yourself anonymous

[citation needed]

pozfox calls home constantly and even comes configured (by DEFAULT) with the option enabled for them to download and install (((studies))) in addition to the regular "telemetry" which is also enabled by default.


My perfect browser would have its own renderer that only works for some plain HTML and selected non-cancerous CSS. It would have a fallback renderer (Blink by default) for sites that break, but that always requires explicit user action. It would support rendering both the whole page in fallback mode and individual HTML elements.

It would natively support uMatrix/uBlock-like functionality, and the element blocking tool would have an extra list of divs to always render in fallback.

It would also not fully trust CAs by default. Instead it would encourage manually verifying certs for sites you know and intend to use again. There's no obnoxious warning when visiting a site with an unknown cert, just a special icon in the URL bar that means "you haven't trusted this site, but the CAs do" as opposed to "you have explicitly trusted this cert". Whenever the cert changes for a site, you are warned about it and have the option of trusting the new one as well. The cert dialog includes things like cert information and how many CAs trust it to help you decide.

When on HTTPS sites, HTTP content is not loaded by default; instead it renders a placeholder which you can click to load.


File: dcbf79d0cf35f43⋯.png (68.04 KB, 296x324, 74:81, eca5709638fe3a523839ecfeb3….png)


>My perfect browser

Make it then you fucking idea guy



>Now TBB will regularly hang for several seconds for no reason.

I have this regularly with out of the box v52 (though interestingly enough not with my configured one). Firefox has been a colossal piece of shit at least since version 4; that's where I switched to Opera because my potato could no longer open the goddamn thing without swapping.

>I have a copy lying around if someone wants it.

I do.



>then the TBB config would be the default

It wouldn't because that would break 90% of websites. Firefox assumes that if you're interested in privacy you'll change the default settings. Default settings are there for normies. If they were any different Firefox userbase would drop below 3% almost immediately.

When I say Firefox is good for privacy I'm mainly speaking about its engine, not the default Mozilla release and not its default settings.


>citation needed

1. about:preferences#privacy

-Choose what to block

--All Detected Trackers (disable) [keep DNT header disabled]

--Third-Party Cookies (enable), set to All

2. about:preferences#general

-Language and Appearance

--Fonts and Colours


----Allow pages to choose their own fonts (disable)

3. about:config

privacy.resistFingerprinting > true

webgl.disabled > false

privacy.firstparty.isolate > true

4. about:addons

install "uBlock Origin"

install "user-agent switcher"

5. user-agent switcher

set your user agent to "Mozilla/5.0 (Windows NT 6.1; rv:60.0) Gecko/20100101 Firefox/60.0"

6. Set your current resolution to 1000x700 or restart the browser, it will switch to 1000x700 by default each time it's open.

Done. You now have the same fingerprint as Tor Browser. Disable JavaScript by default and all 3rd-party scripts in uBlock Origin and you're pretty much trackable only by your IP and browsing habits, which you can't fix without Tor or VPNs. As for the telemetry, look into:

>privacytools - privacytools.io/#about_config

>mitigation guide - spyware.neocities.org/guides/firefox.html

>Librefox - github.com/intika/Librefox

>IceCat - gnu.org/software/gnuzilla/

>Tor Browser - torproject.org
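If you'd rather script step 3 than click through about:config, the same prefs can go in a user.js file in your profile directory. A sketch mirroring the list above (values copied from the post as-is, including webgl.disabled; double-check pref names against your Firefox version):

```javascript
// user.js -- dropped into the Firefox profile directory, read at startup.
// Mirrors the about:config step above; values copied from the post verbatim.
user_pref("privacy.resistFingerprinting", true);
user_pref("webgl.disabled", false);
user_pref("privacy.firstparty.isolate", true);
// Rough equivalent of "Allow pages to choose their own fonts (disable)":
user_pref("browser.display.use_document_fonts", 0);
```

The GUI steps (tracker blocking, cookie settings) have pref equivalents too, but those change between releases, so verify them in about:config rather than trusting a copy-paste.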



>webgl.disabled > false


does tor browser fucking enable webgl?


File: 21301a17a26b9cb⋯.jpg (35.75 KB, 333x500, 333:500, mozilla gecko future.jpg)

Long gone are the days when every little browser author actually wrote engines from scratch, and the W³C maintained their own "reference implementation" browsers like Arena/Amaya.

Now that M$ has finally thrown in the towel on Edge, we're inches away from the WHATWG declaring "HTML"5 defunct in favor of Chrome's source tree, with the only protection being Mozilla. Apace with their effort to transform into Chrome, the industry will finally throw any semblance of coherent standards for Apple/Google to comply with out the window.

Still standing:

>Chrome/Safari (i.e.: Konqueror)

>Firefox (engine embedding defunct since v3.6, so you have to fork the entirety of Firefox itself)

The deceased:


















>As far as I am aware, there are no bindings for it.

What? Looking at the Wikipedia article, I see dozens of browsers forked from Chromium, plus CEF & QtWebEngine.



No, it's a typo.



Lynx is not a meme, I use it for a fair share of stuff. Codewise it's rather uncomfortable spaghetti C from the nineties, but since it's HTML-only there is much less attack surface. Like I said earlier, if you want something sane, you have to massively adjust your expectations. What the users expect™ is broken idiocy, and so broken idiocy you will get if you follow them.



>it's uncomfortable spaghetti C from the 90s

>it's HTML-only

>just massively adjust your expectations

<Lynx is not a meme >:(



>>As far as I am aware, there are no bindings for it.

>What? Looking at the Wikipedia article, I see dozens of browsers forked from Chromium, plus CEF & QtWebEngine.

Forking != embedding. Forking implies you have to maintain your own copy of the source tree; any time the upstream changes you have to merge them. Embedding implies there is a stable API; any time the browser engine changes you can just pull in the changes and it will continue to work against your code (version bumps notwithstanding).


Remember years ago when both Chrome and Firefox were shit that would crash with "this tab has stopped responding" and leave orphan shit everywhere? Or after a few days they'd use like 2 GB on the home page? Or tabs using 100% CPU, or videos glitching and freezing, or audio just plain not working?

Lol, what a nostalgia trip. Nothing like that happens any more.



Wouldn't that describe CEF & QtWebEngine?



W3m is surprisingly usable for what it is. I guess mostly I just think it's cool though.



Is this because of the engine, or because of the chrome? Could a webkit browser be made as "privacy focused" as Firefox?



I used that list to find non-botnet browsers. The criterion is that every single browser on that list is subverted.



>Chrome/Safari (i.e.: Konqueror)

These are not the same though, and with MS bending the knee to Google, Apple will probably give up WebKit sooner rather than later as well.



I want that too. I'll mirror it forever.



They're far more than similar enough that if Mozilla gave up, complete monoculture would exist. It would be utterly impossible to distinguish between intended behavior of WHATWG's "living 'standard'", versus a bug in the one single rendering engine that "implements" it.



>Could a webkit browser be made as "privacy focused" as Firefox?

Maybe they can but not a single WebKit/Blink dev gives enough shit about privacy nor does anyone other than Google have enough influence to fundamentally change the engine to allow proper privacy options. Just look at Brave. Their fingerprint protection doesn't work at all, it's so bad that even Tor tabs can uniquely identify you.



>implying FF or Chromium are any better

Their code is open-source nigger, look at it some time and despair. Also compare the age of the code with the amount of CVEs; a ton of security problems are in new code. And like I said before, HTML-only has advantages: An easy 99% of exploits require Javascript.



What does a rendering engine need to have the security and privacy focus you're looking for?



It shouldn't have any features which can remove your anonymity. There's no reason a browser would need to know my exact hardware and screen size.



Used to calculate viewports for responsive design.



That kinda sounds like an ass-pull. Got anything more substantial? I'm thinking of making a rendering engine as a little project to work on on the weekends.



Add Pale Moon (Goanna) to Still Standing.



>I'm thinking of making a rendering engine as a little project to work on on the weekends.




It's not needed. Firefox works fine when you deny websites screen size access.



>It's not needed. Firefox works fine when you deny websites screen size access.

It depends on what you're willing to settle for as far as looks go. This breaks responsive design, which is a relatively new trend anyway.

I think a better solution is making websites declarative rather than imperative, so that JS isn't a thing. A perhaps more realistic solution is some kind of layering of JavaScript, where one runtime can access the DOM and handle layouts, and another can perform AJAX requests and pass results to the DOM runtime one-way, so that information about screen size etc. can't be leaked.
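That two-runtime split can be sketched with ordinary functions (all names here are made up for illustration): the network runtime is handed only a one-way send() channel, never a DOM reference, so nothing the DOM side knows (screen size, layout) can leak back out through it.

```javascript
// Toy sketch of the layered-runtime idea; names are hypothetical.
function makeDomRuntime() {
  const dom = { output: "" }; // stand-in for the real DOM
  return {
    dom,
    receive: (data) => { dom.output = String(data); },
  };
}

function makeNetworkRuntime(send) {
  // A real version would wrap fetch(); here it just pushes canned data.
  return { deliver: (data) => send(data) };
}

const ui = makeDomRuntime();
const net = makeNetworkRuntime(ui.receive); // one-way channel, nothing else
net.deliver("3 items loaded");
console.log(ui.dom.output); // → "3 items loaded"
```

The closest thing current browsers already have is a Web Worker: it can fetch but has no DOM access, and talks to the page only via postMessage.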




Disabling JS is one thing, but these CLI browsers don't even have reasonably complete (or, in many cases, any) support for crucial features such as CSS or SSL. CLI aside, and apart from being able to use modern crypto certs much more easily, the experience is similar to loading up an early-2000s version of NS or IE and looking at modern sites: everything is so incredibly broken.


It's just a fork. Maybe it will become something more some day, but that's a long ways off.





Do you wew at alchemists? Yeah I'd never finish it in my lifetime, but it's a fun thing to fuck around with.



Uses Blink.





Bump for Presto source code.



This timeline is garbage. Netscape Navigator was created as Mozilla, with the express idea that it was a "Mosaic killer". Spyglass Mosaic (MSIE) and NCSA Mosaic are also obviously related. As for the KHTML forks, the timeline is clearly Konqueror -> Safari -> Chrome.



Are there any alternatives to the DOM? Is there a better way to parse out a webpage?


Is it just me, or are most of the good privacy-respecting browsers these days Linux exclusives?

Can any of you recommend me something that's not based on either Chromium or Firefox, as well as supporting multiple OS platforms? (Midori does not count, since the Windows version hasn't been updated since 0.5.11.)



Isn't Midori open source? Just compile it.



>wants privacy-respecting anything on windows of all systems

It's a real waste of effort. If you meant lacking BSD etc support, afaik furryfucks works there too.



Midori crashes constantly for me. Nigh unusable.


File: 45087464171677b⋯.jpg (233.98 KB, 1508x1000, 377:250, make anon less of a faggot.jpg)


>crucial features such as CSS

Would you mind if I turned off the lights?


File: 0f52022dbe4295f⋯.png (85.92 KB, 1366x730, 683:365, vivaldi.png)

<Vivaldi Browser all the way!

I want you to tell me why it's a bad idea besides the fact that it's proprietary and honest about it. It's everything Opera was supposed to be since the CEO left.

Don't give me none of that bloatware spyware botnet talk, just tell me if you of all people actually used it and/or tried it.

Only then can you give me an honest opinion on why it's either good or bad.


File: 056542e4e3c167b⋯.png (47.58 KB, 1600x900, 16:9, 2.png)

File: b6970a310ad4481⋯.png (133.79 KB, 1600x900, 16:9, 3.png)

File: 2e85d7a343f1c4e⋯.png (185.18 KB, 1600x900, 16:9, 1.png)

You're looking in the wrong place mate.

Might I suggest Otter Browser?

It's basically the IceCat of Opera browsers, but written completely from scratch using Qt5.

Did I also mention it's open source AND cross-platform? As well as one of the few Qt spyware-free browsers currently fully supported on Jewdows and Fagintosh?

I tried it myself, and it's great for what it is. Give it a spin and get rid of all that other nonsense.



Does it run on old computers without SSE/SSE2?


File: ef4b8e6ebfeed8d⋯.png (189.68 KB, 600x600, 1:1, 600px-Otter_Browser_Logo.s….png)


I don't think that was specified anywhere. That's for you to test out since you mentioned it mate, it's obscure for a reason. If it does not work on your PC, it's just another thing that can be implemented in a future release since it's Qt Open Source, and the developers can be contacted on SourceForge and GitHub.

All I know is, it still supports XP and 32-bit computers:


If you just want the bare minimum, might I suggest the qutebrowser:




Dillo is still alive.

And you forgot Links in "alive".



Depends on the BSD. OpenBSD is free of poz, while freebsd has taken a (((code of conduct))), with the subsequent decrease in package and ports updates and loss of users and donations.



Always separate content from presentation, that's the way it worked in SGML, and that's the way it should've worked from day 1 with HTML.


>safari/konqueror fork


>chrome/konqueror fork


>chrome/konqueror fork


Dillo hasn't been updated in 4 years; good catch on Links though.



Добро пожаловать. (Welcome.)



I used it for a long time. It does connections that you can't turn off and it's fucking slow. Many of the features are useless bloat.


Falkon is really nice.



Literally just a UI for chrome/safari.


File: 8211b9afe900eba⋯.png (34.83 KB, 788x207, 788:207, Screenshot_20190317_134402.png)


8chan does not work properly with Falkon. There is no floating response window, images open in a new tab always and there are no response links to a post. Pic related is Falkon on the left and Firefox on the right.



If only they stopped hosting shit on sourceforge



No request blocking, it's loading some unnecessary js right there in the pic lol



SGML is not exactly something to imitate, it was a gargantuan pile of shit that nobody really understood. Whether it's better than current HTML can be argued, I guess. How do you expect a separation of content and presentation to work? Preferably it shouldn't include humans writing semantic markup, because if history has shown one thing, it's that this doesn't ever happen. Feel free to give an example, say with imageboard threads.


Would be nice, but they would probably opt for Github instead, which is basically a rerun of the SourceForge bullshit.



>How do you expect a separation of content and presentation to work?

FrameMaker is IMHO the best example, though there are others like Interleaf and 3B2.

>Preferably it shouldn't include humans writing semantic markup

Ding ding ding, we have a winner!

Interesting fact: The original web client, Nexus, was not a web "browser", but an integrated GUI HTML browser/editor, as were the LibWWW API, and all of W³C's subsequent testbed clients. The intended paradigm was using a GUI to read and write webpages (and yes, style sheets did exist, though they were very limited in terms of flexibility), while HTML was never normally intended to be seen by human eyes nor written by human hands.

The idea of a read-only "browser" was the accidental result of producing the supposedly specialized Line Mode Browser, the first CLI client, and also the first crossplatform (Nexus was NeXTSTEP-only) client. Since Line Mode Browser was how nearly everyone outside the NeXT bubble first experienced the web, combined with the also crossplatform CERN httpd server, this meant nearly everyone who wrote clients like Viola and Mosaic thought the "normal" way to use the web was to write HTML by hand in a text editor, and view it in a read-only browser.

CERN/W³C, in retrospect, considered this a critical failure on their part.

>Feel free to give an example, say with imageboard threads.

Something like an imageboard honestly shouldn't function inside a website, with a better fit for the web format more closely resembling a wiki (e.g.: heavy reliance on powerful automated transclusion, Boolean operations, etc.) to reuse, generate, and organize content based on machine-readable semantic tags.

For something more like an imageboard, a database-centric client API like USENET would probably be better than a website to begin with, though on top of that transport/storage layer, a markup-based semantic and presentational standard would be superior to plaintext.



Javascript is disabled in the left. Has nothing to do with browser engines.



The first two are just "my extensions don't work since they fixed the fucked-up security model. I don't want multiprocessing, I just want my extensions to be able to be massive security liabilities and performance roadblocks."

The last one is literally just an unsubstantiated conspiracy theory that makes no sense, with Google donating a bunch of money and then allegedly giving secret orders to Mozilla to make the browser worse.

You're a fucking retard. You found the worst possible arguments and just bought them hook line and sinker because they agree with what you already think.

If you hate Firefox, have some relevant, modern, and actually correct reasons, instead of taking the first stop on the confirmation bias engine that is Google, you stupid fucking idiot. It's so fucking obvious you used google, you linked to a fucking year-old reddit AMP page, you goddamn retard.



Be careful with resistFingerprinting. It currently reports your resolution as your window resolution instead of screen resolution, which often gives you a very reliable and specific fingerprint. It does good things with your user agent and some other features, but it's still rough enough around the edges as to be considered experimental at the moment, and can currently make your fingerprint more unique rather than less.
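The arithmetic behind "more unique rather than less" is just surprisal: an attribute value shared by a fraction p of users contributes log2(1/p) bits toward identifying you. A quick sketch (the shares are made-up numbers for illustration):

```javascript
// Identifying bits leaked by one fingerprint attribute: log2(1/p),
// where p is the share of users reporting the same value. The rarer
// your value, the more bits it leaks.
function surprisalBits(share) {
  return Math.log2(1 / share);
}

console.log(surprisalBits(0.5));      // 1 bit: an attribute split 50/50
console.log(surprisalBits(1 / 1000)); // ~9.97 bits: a rare window size
console.log(surprisalBits(1));        // 0 bits: everyone reports the same value
```

This is why Tor Browser standardizes everyone on the same window size: a value shared by all users carries zero identifying bits, while an odd window-sized resolution reported by resistFingerprinting can leak plenty.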



CSS is actually essential to the modern web though. If you want a hyperlarp browser that renders HTML only, that's fine, but CSS is actually a HUGE benefit: it separates the semantic structure and content of information from its presentation, and is rapidly interchangeable, including by users. Not to mention, it's currently widely deployed, as virtually every website on the internet uses CSS.

If anything about the web is worth keeping, it's CSS separating content from presentation.


File: 505186109eb5ace⋯.jpg (140.66 KB, 694x530, 347:265, 505186109eb5ace6ee6610bda4….jpg)

Good thread. Bumpan



Javascript works on other websites. I'm not saying that it's the engine's fault, but something is wrong with Falkon for me.


Webshit and javascript weenies only exist because C and shell scripting are not powerful enough for websites. On a networked Lisp machine you could just download standalone lisp scripts from a webserver, run them like any other program, and it could do whatever it wanted: no extra weenie languages required. These solutions are obvious to real programmers but UNIX braindamage keeps you weenies from seeing them.

what to do with a tree structure. They're usually had a tree structure.
(Partly to get back on this one forced to admit, I don't know, whatever the
answer to the question that the PDP-11 compiler class with its own locate
all storage, probably because the middle of a block structure. They're
trying to rational semantics for C. It would programmers who use lots of
globals and wouldn't know whatever the instructured file system? My
guess is Multics, in the middle of a block with its own local storage,
probably amount to a translationalize the late '60s. the late '60s.hit!" but
stopped myself just in time someone counters my claim that the instructor
stated "Before Un*x, no file system? My guess is Multics, in time.

For reason to give me a really had it.

But because it looks
like it looks
like it looks
like it does, and would probably because they reasons I'm ashamed to
happen when I don't know, whatever the delusion "what's supposed to
invoke this procedure, not entry to a translation "what's supposed to
allocate all storage on entry to a block.

But it's much worse than than because you can jump into the middle of a
block, the compiler is forced to happen when I do <the thing above>?"
used to happen when I do <the old-timers on this list...) Last night the
instructure even if some poor bastard has tried to do a translational
storage doesn't help you. I'll almost guarantee you. It would probably
amount to allocating the block.

But because you need to be "gee, I am taking an "Intro to Un*x, not entry to
allocating the storage, probably because they really had it.

But it's much worse they are C programmers who use lots of globals and wouldn't
know, whatever the PDP-11 C compiler did." Now of course, they're trying to
rationalize the late '60s.re enters my claim that {decl stmt} into lambda call
beforehand this on this one for a while now. I pull deck, but can any of the answer
to the middle of a block, they're trying with a full deck, the compiler class with a
tree-structured file system had it.

But because you need to invoke this list...) Last night they are C programmers
my claim that C has tried to allocal storage on entering the structor stated "Before
Un*x, no block structure. They're usually under the PDP-11 compiler class with
thing an "Introduces a block, the compiler did." Now of course. (Partly to happen
when I don't know, whatever the delusion "what's supposed to do <the



))))))))))))) - this is the future and unix weenies can't even compete

LISP when? lisp is so perfect



> How do you expect a separation of content and presentation to work? Preferably it shouldn't include humans writing semantic markup, because if history has shown one thing, it's that this doesn't ever happen.

It doesn't happen because webdevs always try to stretch the technology further than it is meant to be used. You can write nice semantic static pages, but people always want to shove in decorative shit that serves no semantic purpose and makes the site less usable, but looks fancy.

I'm making a web site currently and my "client" (quotation marks because I'm doing it for free) is very satisfied with my proposal. It's clean, simple, precise, 100% static, responsive, no Javascript. It's funny, we were looking at some other related websites and he found all that added nonsense distracting and silly, which leads me to believe that bad websites are not necessarily because people want them, but because webdevs want to have fun on the job.

As for writing HTML by hand, I'm using Lisp s-expressions. I can write content as SXML and at any point I can splice in the result of a function call. It's a very comfy setup, much more compact than any templating language I have seen.
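For anyone without a Lisp handy, the same trick, markup as plain data built by ordinary function calls with spliced-in results, sketches out in a few lines of JavaScript (the helper `h` is a made-up name, not a real library):

```javascript
// Minimal SXML-style renderer: markup is built by plain function calls,
// so any function returning a string can be spliced in anywhere.
function h(tag, attrs, ...children) {
  const a = Object.entries(attrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  return `<${tag}${a}>${children.join("")}</${tag}>`;
}

// A "template" is just a function; splice its result into other markup.
const navBar = (links) =>
  h("nav", {}, ...links.map(([text, url]) => h("a", { href: url }, text)));

console.log(h("header", { class: "top" }, navBar([["Home", "/"]])));
// → <header class="top"><nav><a href="/">Home</a></nav></header>
```

No separate templating language, no string interpolation holes: the page is a value in the host language, same as SXML.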



somewhat based



joke's on you, "semantic" doesn't actually mean anything. you got memed. HTML is a good idea (an obvious idea) with a terrible implementation. it's just a bunch of syntactic elements denoting stuff like paragraphs, images, text style (bold/italic), etc. it's like a text document but not fucking useless



> it's just a bunch of syntactic elements denoting stuff like paragraphs, images, text style (bold/italic), etc. it's like a text document but not fucking useless

That's what semantic means. Except for the style (bold, italic): that part does not belong in HTML. Use em and strong instead, and use CSS to style it according to your wishes. The b and i tags no longer mean bold or italic anyway.





Actually retarded. Nothing prevents that from occurring right now. In fact, it somewhat exists; it's called native software.

You could trivially write a wrapper to download index.lisp and execute it, but this is exactly the opposite of what a non-idiot would want, for a variety of reasons.

First, security. I do not want every website I visit running native code on my computer. Simply fuck off. Part of why browsers are useful is that they provide a limited execution model for code, one that doesn't allow all of the complexity that local access would. Imagine every single privilege-escalation vulnerability ever discovered pwning your machine because you visited a lispsite. Fuck off.

Second, browsers specifiy a complete executation environment including a GUI, storage, DOM API's, styling, etc. Its incredibly complex, and cross platform. You can simply just replace all that with lisp. Lisp is a programming language, not a cross platform spec for rending documents. You could import all that kind stuff and rebuild web 3.0 but why ?

The actual solution is making the web STRICTLY declarative and providing NO execution model. This would make the web much more secure. It would also force a bunch of new technology to handle things like AJAX, and JQUERY like API's through declarative syntax.



Semantic does mean something. Having structural elements of the page denote things like articles, quotes, and citations enables cool features like Firefox's Reader Mode. It could also allow automated fact checking and the like; a real semantic web could be very exciting.



This is actually pretty fucking schway


File: b8225747925c6c6⋯.png (74.41 KB, 640x352, 20:11, SP2017-B3.png)

File: d92e5a02ee9e676⋯.jpg (76.04 KB, 1040x720, 13:9, ABC XML: Separation of Doc….jpg)



Anybody remember back when CSS was first being paraded around, and a common notion was that users could apply the same CSS everywhere they went? That users would be the ones writing CSS, not webdevs?

Instead, every individual element of every page on every site has its own CSS glued to the HTML in a non-semantic blob, just as architecturally invalid as the "spacer GIF & invisible tables" bad old days of HTML 4. Webdevs still think they're laying out glossy magazines in QuarkXPress instead of writing semantic hypertext meta-documents.

>It could also allow automating fact checking etc. It could be a very exciting feature with a semantic web.

Or something as simple as trivially and unambiguously scraping a store's page for the price/part#/availability of products.
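To make the scraping point concrete: with schema.org-style microdata, extraction needs no guessing at presentational divs. The HTML below and the property names are a made-up example following schema.org conventions, not any real store's page.

```python
# With semantic (schema.org-style) annotations, scraping a product's
# price/part#/availability is unambiguous. DOC is an invented example.
from html.parser import HTMLParser

DOC = """
<div itemscope itemtype="https://schema.org/Product">
  <span itemprop="sku">PN-1337</span>
  <span itemprop="price">19.99</span>
  <span itemprop="availability">InStock</span>
</div>
"""

class ProductScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self._prop = None       # itemprop currently being read, if any
        self.product = {}

    def handle_starttag(self, tag, attrs):
        self._prop = dict(attrs).get("itemprop")

    def handle_data(self, data):
        if self._prop and data.strip():
            self.product[self._prop] = data.strip()
            self._prop = None

scraper = ProductScraper()
scraper.feed(DOC)
print(scraper.product)  # {'sku': 'PN-1337', 'price': '19.99', 'availability': 'InStock'}
```

Compare that to scraping the same data out of a `<div class="x7b">`-style presentational blob, which breaks on every redesign.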


I'd actually be okay with an execution model for online stuff like the web or whatever, just as long as it wasn't being abused for things other parts of the standard (HTML+CSS, in the case of the web) are supposed to handle instead. The biggest problem isn't security (any competent sandbox architecture dating back to the 1960s could fix that), but bloat.

I'd like a separate Turing-incomplete subset for the scripting environment, in addition to "complexity profiles" that would limit the number of discrete operations in tiers (e.g. 100 ops, 1K ops, etc.) as well as instantaneous working-RAM limits. This would impose necessary discipline on what are and should be simple tasks, while still allowing the client-side flexibility and power needed for many others.
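A toy sketch of the op-budget idea above (entirely my own illustration of the concept, not any real proposal; the expression format and budget numbers are invented): an evaluator that charges one "op" per node and refuses to continue past the tier's budget.

```python
# Toy "complexity profile": a tiny expression evaluator that spends one
# op per node visited and aborts once the tier's budget is exhausted.

class OpBudgetExceeded(Exception):
    pass

def evaluate(expr, budget):
    """Evaluate nested ('add'|'mul', left, right) tuples or plain numbers,
    charging one op per node; raise once the budget runs out."""
    spent = 0

    def step(node):
        nonlocal spent
        spent += 1
        if spent > budget:
            raise OpBudgetExceeded(f"needed more than {budget} ops")
        if isinstance(node, (int, float)):
            return node
        op, left, right = node
        a, b = step(left), step(right)
        return a + b if op == "add" else a * b

    return step(expr), spent

# Fits comfortably in a 100-op tier: only 5 nodes to visit.
print(evaluate(("add", 1, ("mul", 2, 3)), budget=100))  # (7, 5)
```

The same expression under a 3-op budget would raise instead of finishing, which is the whole point: a page in a low tier simply cannot burn your CPU.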


I really like Brave.


Fucking browser engines. It seems like there should be a way we can slap some libraries together and have one. I know it's not really that easy, but it's just one of those things. I should be able to slap together some libcurl and libxml, and throw it into cairo and have a browser.
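In that spirit, here is a stdlib-only Python sketch of the "parse and display" half (a text-dumping toy, nowhere near an engine; fetching, layout, CSS, and JS are exactly the hard parts left out):

```python
# Toy "browser": parse HTML and dump the visible text, skipping scripts
# and styles. A stand-in for the libcurl+libxml+cairo pipeline; all the
# actually hard parts (networking, layout, painting) are omitted.
from html.parser import HTMLParser

BLOCK_TAGS = {"p", "div", "h1", "h2", "h3", "li", "br"}
SKIP_TAGS = {"script", "style"}

class TextDumper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self._skip = 0          # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in SKIP_TAGS:
            self._skip += 1
        elif tag in BLOCK_TAGS:
            self.out.append("\n")

    def handle_endtag(self, tag):
        if tag in SKIP_TAGS and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.out.append(data.strip())

page = "<h1>Title</h1><p>Hello <b>world</b></p><script>alert(1)</script>"
d = TextDumper()
d.feed(page)
print(" ".join(d.out).split())  # ['Title', 'Hello', 'world']
```

Which is roughly why "slap some libraries together" gets you a document viewer, not a browser.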


File: 0c4dbf30aee855f⋯.png (180.98 KB, 1024x1024, 1:1, pharaoh.png)


Good Idea!

Let's call it the "Pharaoh Browser".

With the embedded idea that "You are your own emperor, librarian, and scientist."

I already have a logo concept in mind.



Pharaoh BROser



cf. Uzbl



>Webkit: Made by apple.

It was actually forked from KHTML, made by the KDE guys.



>CSS was first being paraded around, and a common notion was that users could apply the same CSS everywhere they went?

Got any more info on that?






That's what the "cascading" in CSS means. You would mix together a bunch of stylesheets, and it would automatically pick which styles to use. Originally each property was going to have a weight, and the one with the strongest weight would win; for numerical values (font size etc.) they would take a weighted average (really!). They decided against that and made later styles beat out earlier ones.

Nowadays if you try to use user styles in Firefox you'll notice that you have to mark everything as "!important" (or at least you did until a couple of years ago). This is basically a weaker version of the weighting system. The reason user styles need it is that it was presumed every user would have a ton of styles they applied to every website, fucking around with fonts and colors and background images and whatever. Webpages that restyled those things would be the minority, they must have a good reason for doing it, so their styles should win. Of course the opposite turned out to be true: websites without styles are vanishingly rare, users who style pages are also a minority, and even they only do it with good reason. Firefox/Chrome still have a GUI config to set your background/foreground/font, and even then I've never seen anyone touch them.
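For the record, a user stylesheet along these lines (my own minimal example, e.g. in Firefox's userContent.css) is where that "!important" requirement shows up: it is the only lever users have left in the weighting described above.

```css
/* Minimal user stylesheet example. Without !important, almost every
   site's own author styles would beat these user rules. */
body {
  font-family: serif !important;
  background: #fdf6e3 !important;
  color: #333 !important;
}
```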



I had to do a double take. Good Markov chain.



I wonder if it would be possible to write an AI that generates actually coherent anti-Unix, pro-Lisp rants.



You could call it, artificial unintelligence.



So basically your magical browser will break on 99% of all sites, because they're built with "cancerous" HTML, and it'll fall back to Blink, lol.



I actually use the font setting. Foreground/background tends to break too many sites, unfortunately.


bumping fun thread



Apple saves the day once more.


Dillo or death.

>Dillo hasn't updated in 4 years

It did, but they did not make a release.


The development has been slow, though. Notable change: they actually started using mbedTLS as their TLS library.

The question I want to ask: are there any living free/open-source graphical browsers that aren't associated with the huge Web companies (Google, Apple, Mozilla, etc.), built from the ground up, engine included? So far I know only of Dillo and NetSurf. Everything else is either Blink, WebKit or Gecko. Presto is proprietary abandonware, so nobody gives a shit, and Opera itself is Blink now.



Pale Moon (kind of: its engine is forked from a very old Firefox version)


File: 265218c7887edb0⋯.png (6.55 KB, 130x130, 1:1, icon.png)

Anyone tried Luakit?

What are your thoughts on it?

How does it compare to qutebrowser?

How's the greasemonkey script support?



>Firefox can't render webpages in the exact same way Chromium does

What is standards compliance



Not something that's seen on the web, that's for sure.



>Note: webkitgtk, webkitgtk2 and qtwebkit-based browsers were removed from the list, because these are today considered insecure and outdated.

Why do people use and recommend surf if this is true?



I haven't heard this before. Sounds like it might be BS. It is a wiki, after all.



Wasn't Midori one of the promising replacements for the shitty browsers of today?


File: a4816aabe0bcf79⋯.gif (1.94 MB, 467x348, 467:348, Mario_sick_and_tired.gif)

WebKit got a bad name among browsers because it used to be notorious for security holes; then Apple began fixing it up and polishing it. Like most things developed by Apple, WebKit is God-tier on Apple hardware but mediocre on everything else. If you use an Apple device there is very little reason to use anything but Safari, because Apple made damn sure Safari on their shit just werkz and is fast and responsive. It's literally the anti-Internet Explorer: IE was trash even on Microsoft's own OS.
