748284 No.543
So what do you fine folks know 'bout volume culling in deferred rendering for shadow volumes and stencils?
I'm trying to do this as lazily as possible so I can move on to based Oren-Nayar diffuse lighting with Cook-Torrance specular lighting, transparency, diffraction, diffusion, etc.
912d55 No.546
Stop being a pussy and write a realtime global illumination renderer and you can have any BRDF you want.
912d55 No.547
Also, when you are done hit me up I want my name on that paper.
748284 No.553
>>546
But the purpose of deferred rendering is to have a fuckton of lights, not just one pretty light.
Also I'm having trouble grasping stenciling properly, as the webm demonstrates; it's capturing everything as of now.
I could really just pass the diffuse to a new shader for ambient lighting.
Which BRDFs do you recommend?
a41d9a No.561
>>553
Just for the record, GI isn't limited to a single light source.
What do you want to capture in your stencil buffer? Anyway, it happens before the final lighting assembly, so it's just the same as without deferred shading. Not really sure what your question is; you dropped a few buzzwords without much context.
Do you just want object ids in your stencil? Or a shadow map?
For BRDFs check some viewer and see what you like.
http://brdflab.sourceforge.net
299af9 No.563
>added a stencil pass for deferred lighting
>shadowmapping made it useless
;_;
807db4 No.565
Just quietly: in OpenGL, VAOs don't point to EBOs, do they?
For glDrawElements, do I need to separately bind a VAO and an EBO before the draw call, or is the EBO remembered as part of the VAO's state?
748284 No.568
>>561
I'm actually trying to only draw the objects within the bounds as of now.
Really this is my only barrier (at the moment).
Know how to make this work and still have overlapping lights?
http://ogldev.atspace.co.uk/www/tutorial37/tutorial37.html
This is basically what I'm attempting to implement without any actual understanding of how to use stenciling.
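From what I can tell so far, the tutorial's per-light stencil pass boils down to roughly this (my own untested sketch; drawLightSphere and the lit-output attachment are stand-ins, and the G-buffer's depth is assumed bound):
glEnable(GL_STENCIL_TEST);
for (const Light& light : lights) {
    // Stencil pass: mark pixels whose depth lies inside the light volume.
    glDrawBuffer(GL_NONE);                 // no color writes here
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);               // need both faces of the volume
    glClear(GL_STENCIL_BUFFER_BIT);
    glStencilFunc(GL_ALWAYS, 0, 0);        // always pass; only the ops matter
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);
    drawLightSphere(light);                // sphere scaled to the light's radius

    // Lighting pass: shade only the marked (nonzero-stencil) pixels.
    glDrawBuffer(GL_COLOR_ATTACHMENT4);    // wherever the lit output goes
    glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);                  // front faces can be behind the near plane
    drawLightSphere(light);
    glCullFace(GL_BACK);
}
glDisable(GL_STENCIL_TEST);
If that's right, the overlap problem solves itself, since the lighting pass is additively blended per light.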
748284 No.570
>>565
The EBO is part of the VAO's state.
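i.e. something like this (minimal sketch; vao/ebo assumed already generated):
glBindVertexArray(vao);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo); // this binding gets captured into the VAO
glBindVertexArray(0);
// later, in the draw loop:
glBindVertexArray(vao);                     // the EBO binding comes back with it
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);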
807db4 No.573
>>570
Awesome, thanks. I keep forgetting how to use Elements and going back to drawing Arrays.
748284 No.575
>>573
The way I do it (actual code):
GLuint VA, position, norm, tan, uv, indbuff;
glGenVertexArrays(1, &VA);
glBindVertexArray(VA);
GLuint pos(glGetAttribLocation(currProgram, "position"));
glVertexAttribFormat(pos, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(pos, 0);
glEnableVertexAttribArray(pos);
glGenBuffers(1, &position);
glBindBuffer(GL_ARRAY_BUFFER, position);
glBufferData(GL_ARRAY_BUFFER, meshdat[i][0].size() * sizeof(float), &meshdat[i][0][0], GL_STATIC_DRAW);
glBindVertexBuffer(0, position, 0, 3 * sizeof(float));
and for rendering I use
void render(){
    GLint currentProg;
    glGetIntegerv(GL_CURRENT_PROGRAM, &currentProg);
    if (PID != currentProg){
        PID = currentProg;
        position = glGetUniformLocation(PID, "WorldPosition");
        rotation = glGetUniformLocation(PID, "model");
        scaleloc = glGetUniformLocation(PID, "scale");
        glUniform1i(glGetUniformLocation(currentProg, "AmbientTexture"), 0);  /// sampler -> texture unit 0 (MaterialTexture::AMBIENT)
        glUniform1i(glGetUniformLocation(currentProg, "DiffuseTexture"), 1);  /// sampler -> texture unit 1
        glUniform1i(glGetUniformLocation(currentProg, "SpecularTexture"), 2); /// sampler -> texture unit 2
    }
    glUniform3f(position, WorldPosition.x, WorldPosition.y, WorldPosition.z);
    glUniform1f(scaleloc, scale);
    glUniformMatrix3fv(rotation, 1, GL_FALSE, &Rotation[0][0]);
    for (size_t i(0); i != model->VertexObjectArrays.size(); ++i){
        glBindVertexArray(model->VertexObjectArrays[i]);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, model->texturesindex[model->materialsindex[i]][0]);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, model->texturesindex[model->materialsindex[i]][1]);
        /// animation uniform and function here
        glDrawElements(GL_TRIANGLES, (GLsizei)model->VertexArraysSize[i], GL_UNSIGNED_INT, 0);
    }
}
748284 No.577
>>575
This latter half goes back in the first section, where all the variables are set:
glGenBuffers(1, &indbuff); // generate the index buffer before binding it
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indbuff);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, meshindic[i].size() * sizeof(unsigned int), &meshindic[i][0], GL_STATIC_DRAW);
glBindVertexArray(0);
//GLint worldpos;
//GLint orientation;
VertexObjectArrays.push_back(VA);
VertexArraysSize.push_back((unsigned int)meshindic[i].size());
Although my current implementation is bulky, I'm about to cut it back down because some of it might be unnecessary; I don't have any shaders messing with those variables, so the contents of the if statement only need to be set once and the current shader doesn't need to be constantly checked.
I forget how element arrays work outside of VAOs, actually (if they do at all).
But textures and uniforms like the ones listed have to be set constantly, except the location uniforms.
0069ad No.586
>>568
Yeah, you are confused.
OpenGL already clips geometry outside the view frustum for you.
The stencil buffer in the tut you linked is just a silly hack; it just computes a depth map.
This is probably the info you seek:
In general the stencil buffer is just an extra frame buffer you can render into.
Typical use cases are shadow maps, object ids (for picking), reflections and so on.
1c7009 No.1170
>>586
I noticed it ends up extracting the depth buffer.
Is there any documented GI method you know of?
Also, forgot to thank you for the BRDF link!
175c32 No.1180
Man… how did I miss this thread? Reposting what I started a new thread for:
In OpenGL, I want to measure the rough area in pixels that a few triangles will take up when they are rasterized. So to do this I convert 3d vertices that make up the triangle to 2d screen coords.
I do that by multiplying my vertices by the MVP matrix, and then dividing by 'w' for the perspective transform.
osg::Vec4d world4(world.x(),world.y(),world.z(),1.0);
osg::Vec4d clip4 = world4 * mvp;
osg::Vec4d ndc4 = clip4/clip4.w();
ndc.x() = ndc4.x();
ndc.y() = ndc4.y();
The problem is that sometimes the w coordinate will be zero. I can check for this, but I don't really know how to 'deal' with it.
Like, if a triangle has three vertices and one of them has a 'bad' w, I'd still expect the triangle to get partially rasterized. How do I count the resulting area in pixels?
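The direction I've been poking at, if anyone can sanity-check it: clip the triangle against w > epsilon in clip space first, then divide and take the shoelace area of whatever polygon survives. Untested sketch (osg types as above; the epsilon and viewport parameters are mine):
#include <osg/Vec2d>
#include <osg/Vec4d>
#include <osg/Matrixd>
#include <cmath>
#include <vector>

double screenAreaPixels(const osg::Vec4d tri[3], const osg::Matrixd& mvp,
                        double viewW, double viewH)
{
    const double eps = 1e-6;
    std::vector<osg::Vec4d> in;
    for (int i = 0; i < 3; ++i)
        in.push_back(tri[i] * mvp);          // to clip space

    // Sutherland-Hodgman clip against the plane w = eps.
    std::vector<osg::Vec4d> poly;
    for (size_t i = 0; i < in.size(); ++i) {
        const osg::Vec4d& a = in[i];
        const osg::Vec4d& b = in[(i + 1) % in.size()];
        const bool aIn = a.w() > eps, bIn = b.w() > eps;
        if (aIn) poly.push_back(a);
        if (aIn != bIn) {
            double t = (eps - a.w()) / (b.w() - a.w());
            poly.push_back(a + (b - a) * t); // edge crosses w = eps
        }
    }
    if (poly.size() < 3) return 0.0;         // fully behind the camera

    // Perspective divide + viewport transform, then the shoelace formula.
    std::vector<osg::Vec2d> scr;
    for (const osg::Vec4d& v : poly)
        scr.push_back(osg::Vec2d((v.x() / v.w() * 0.5 + 0.5) * viewW,
                                 (v.y() / v.w() * 0.5 + 0.5) * viewH));
    double area2 = 0.0;
    for (size_t i = 0; i < scr.size(); ++i) {
        const osg::Vec2d& p = scr[i];
        const osg::Vec2d& q = scr[(i + 1) % scr.size()];
        area2 += p.x() * q.y() - q.x() * p.y();
    }
    return std::abs(area2) * 0.5;
}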
2501d5 No.1184
>>1180
Why not quickly render your stuff into a framebuffer, bind a texture to it and count the pixels in there?
175c32 No.1185
>>1184
It would be slower, since I'd have to bind the target framebuffer and then count each pixel.
Here I can roughly estimate the area because I'd know the output pixel of each triangle's vertex.
Plus I'm doing this in another thread, whereas if I rendered to a framebuffer I'd have to use the render thread… let's not get into context switching and other nastiness.
636238 No.1214
Speaking of 3D, I figured my best shot at making a game was to make a Minecraft engine and build up from there. I don't know much about the specifics of maps or shaders, but the geometry is easy enough to work with. I have a limited internet connection, so I can't watch YT tutorials, for example.
Anyways, let's say a chunk is 8x8x8, for a total of 512 blocks per chunk. My understanding is that when a chunk is created, cached, or modified, it should recalculate the geometry, right? I realize that each block is simply a value which indicates the type, and the geometry and such can vary greatly from that. However, I assume that each block would consist of 6 independent faces, consisting of 12 tris, 24 vertices, and 30 edges in total? That way, each face can be handled separately.
When it's rendered, you would cull chunks that aren't visible, and only draw parts of the mesh that are actually there (e.g. no backfaces, so you'd only see about 50% of a block at best). For efficiency, I assume that each chunk is stored as part of a quadtree, and while I get the gist of that, I don't see how that would necessarily improve things, since I thought the geometry would be unique per chunk, meaning you can't just set everything to "1". Or is that handled entirely separately from the block data? I don't want to construct it every frame.
I'm using XNA because it's what I know best.
Anyways, does that seem like a reasonable approach?
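Rough sketch of the rebuild I have in mind, in C++-ish pseudocode since that's what the rest of the thread uses (Chunk, solid() and emitFace() are made-up names; XNA would be the C# analogue):
#include <cstdint>

const int N = 8;                           // 8x8x8 chunk as above

struct Chunk { uint8_t blocks[N][N][N]; }; // 0 = air, otherwise block type

void emitFace(int x, int y, int z, int face); // appends 4 verts / 2 tris to the chunk mesh

bool solid(const Chunk& c, int x, int y, int z) {
    if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N)
        return false;                      // treat out-of-chunk as air (or ask the neighbor chunk)
    return c.blocks[x][y][z] != 0;
}

// Rebuild the chunk mesh whenever a block changes: emit only the faces that
// don't touch a solid neighbor, so interior faces never exist in the mesh.
void rebuildMesh(const Chunk& c) {
    static const int dir[6][3] = { {1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1} };
    for (int x = 0; x < N; ++x)
    for (int y = 0; y < N; ++y)
    for (int z = 0; z < N; ++z) {
        if (!solid(c, x, y, z)) continue;
        for (int f = 0; f < 6; ++f)
            if (!solid(c, x + dir[f][0], y + dir[f][1], z + dir[f][2]))
                emitFace(x, y, z, f);
    }
}
Backface culling would then handle the other ~50% at draw time.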
175c32 No.1219
>>1214
Too bad you're sticking to XNA, since you can do some cool stuff with shaders when it comes to voxels.
>I don't see how that would necessarily improve things, since I thought the geometry would be unique per chunk
Quadtrees are usually used for view frustum culling and so on.
636238 No.1220
>>1219
Well, I've never really worked with them, so I don't know what they would store, or where to put them in my update pipeline(?)
I'm confused by it in this way: would it be fair to think of it conceptually as a rectangle or box? That is, an object that is used for position testing? I'm thinking of it like an elaborate array.
636238 No.1221
>>1219
And really, I don't even know if what I'm doing qualifies as voxels. I'm just ignorantly slapping shit together and hoping it works, but at least I have a good idea of the implications of my code and what (not) to do for performance. Just not the case-specific optimizations.
1c7009 No.1227
>>1214
>XNA
Vertex arrays, instancing, and other cool shit would make your game powerful as fuck in OpenGL compared to most older voxel engines, not to mention you could use a geometry shader for voxels and de-limit yourself; for newer graphics cards there are even based bindless textures (massive ability to customize/switch textures).
Dunno anything about XNA though.
681827 No.1230
Stumbled over this recently, you might find it interesting.
http://graphics.cs.williams.edu/papers/single-pass-shadow-volumes/
>deferred rendering
Please do reconsider; you can do many lights in forward rendering too, with much less memory usage, nicer attenuation and antialiasing.
1c7009 No.1232
>>1230
>Please do reconsider, you can do many lights in forward rendering too with much less memory usage, nicer attenuation and antialiasing.
At no cost of speed or increase in complexity?
681827 No.1234
>>1232
Evaluating the performance cost isn't straightforward. It depends on what technique you use, what you compare it to, etc. Of course having more lights isn't free, never will be.
1c7009 No.1242
>>1234
I've always thought that was the thing with deferred rendering. Where
for each light:
    add color for each object with respect to the light
output color
is more efficient than the nested-for complexity of
for each object:
    for each light:
        add color for the object with respect to the light
681827 No.1263
>>1242more lights in deferred is not free, in particular light overdraw in deferred can be as expensive as naive forward rendering
the cost and complexity of rendering a g-buffer is also a factor
in modern forward rendering you can do a lot of lights for a single pixel in a single pass, with no wasted buffers and no wasted recalculation of geometry, and you can still render advanced buffers like a velocity buffer if you want to
d2bd0f No.2195
>>1263
Right now I'm capable of doing both.
A stencil buffer would stop overdraw, wouldn't it?
>tfw trying to update going by 8dgd's suggestions but no free time because college
How would you do a forward lighting implementation without buffers filled with lights?
a3113b No.2245
>>1232
>At no cost of speed or increase in complexity?
Forward rendering can be much faster in most cases, but only with sufficient complexity. Tile-based forward rendering is widely regarded as the future.
>>2195
>How would you do a forward lighting implementation without buffers filled with lights?
Not him, but for a while I've been working on a tile-based forward renderer where the tiles are in world space (which is easy, as it's an ortho-projected top-down 3D game). The idea is to achieve the classic Doom 3/Chronicles of Riddick/etc. look of early unified shadowing & lighting games on mobile hardware. A traditional implementation would be very slow due to extreme overdraw.
What I'm doing is having automatically generated permutations of lit shaders: any combination of point, spot & directional lights, up to 4 lights per tile, all calculated in a single pass. Materials are assigned to tiles based on the light-culling results.
This kind of thing is likely unsuitable for your project, but I hope it's food for thought. You doing a space shooter?
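The light-culling half of that is roughly like this, for what it's worth (a sketch with invented names; the real version also handles spot cones, and the grid origin is assumed at world (0,0)):
#include <algorithm>
#include <vector>

struct PointLight { float x, z, radius; };

struct Tile { std::vector<int> lightIndices; }; // indices into the scene's light list

// CPU-side: assign each light to every world-space tile its radius touches,
// capped at 4 lights per tile; the tile's shader permutation is picked to match.
void assignLights(std::vector<Tile>& tiles, int tilesX, int tilesZ,
                  float tileSize, const std::vector<PointLight>& lights)
{
    for (Tile& t : tiles) t.lightIndices.clear();
    for (int i = 0; i < (int)lights.size(); ++i) {
        const PointLight& L = lights[i];
        int x0 = std::max(0, (int)((L.x - L.radius) / tileSize));
        int x1 = std::min(tilesX - 1, (int)((L.x + L.radius) / tileSize));
        int z0 = std::max(0, (int)((L.z - L.radius) / tileSize));
        int z1 = std::min(tilesZ - 1, (int)((L.z + L.radius) / tileSize));
        for (int z = z0; z <= z1; ++z)
            for (int x = x0; x <= x1; ++x) {
                Tile& t = tiles[z * tilesX + x];
                if (t.lightIndices.size() < 4)  // per-tile cap from above
                    t.lightIndices.push_back(i);
            }
    }
}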
35bcdd No.2261
>>2245
Thinking of making a more combat-oriented version of something like EVE or Star Citizen: more fast-paced, less grinding, etc.
Trying to figure out how to make it as pretty as possible.
bb2059 No.2263
I need to get my hands on a decent OpenGL tutorial that doesn't need me to download extra programs just to generate proper files.
17ebdd No.2266
>>2263
What are you trying to do?
681827 No.2302
>>2245
The future, only if "the future" is an aliased pixel soup.
Deferred in particular taxes texture writes and bandwidth, the two things in modern systems that are NOT in abundance.
It's overall just a really bad idea that looks good on paper.
d2bd0f No.2355
>>2302
>>2245
I just need a fuckload of lights.
Plan on making dead-space environments where the background illumination is mindfuckingly low or zero, filled with raw materials/asteroids/rogue AI NPCs/NPC pirates/other players that force you to use various lights, i.e. headlights for the dark environment, or FLIR beams with an IR thermal shader… among other things (lit particle swarms, explosions, etc.)
Basically, GI with a fuckload of lights, order-independent transparency, depth of field, and other assorted goodies.
a3113b No.2616
>>2355
Poked around a bit yesterday. Apparently tile-based deferred rendering tends to be the fastest modern technique (for a "fuckton of lights"). As an alternative, clustered deferred rendering performs similarly but more consistently, and is less likely to hit pathological cases. Original paper here:
http://www.cse.chalmers.se/~uffe/clustered_shading_preprint.pdf
I was surprised to learn that forward shading has been used in far more ridiculous stretches than my case from
>>2245
Apparently Crysis is forward-shaded with up to 16 lights per object, but I don't know whether they used dynamic branching in shaders or permuted shaders to achieve that. If it's permuted shaders, there are hundreds upon hundreds of combinations. Crysis performs excellently on modern hardware, though I think it's rare in that game for more than 3 or 4 lights to hit an object. May be an option. In Olsson's tests here
http://www.cse.chalmers.se/~olaolss/main_frame.php?contents=publication&id=tiled_shading
tile-based forward rendering performs worse than tile-based deferred, but I haven't seen any comparisons with straight forward rendering where lights are assigned purely in world space on the CPU.
Anyway, GI is orthogonal to how you implement light sources, since you probably don't wanna use virtual point lights. For your case you could keep GI relatively simple, because everything's sparse and occlusion isn't crucial.
684d01 No.2796
>>2616
You are a godsend, anon.
01c8a5 No.2887
>>2616
Any more gold links like those?
a3113b No.3102
>>2887
One of my favorite sites, sadly dead for nearly 3 years now, is a blog on real-time global illumination techniques by the guy who made Lightsprint:
http://realtimeradiosity.com/
The "best" current technique for real-time GI in games is sparse voxel octree cone-tracing, implementations of which started popping up everywhere in the last couple of years. It was even gonna be in UE4 until recently (Epic have their own paper on the technique, but I couldn't dig it up again). It's massively complex, not super-fast on current hardware even when optimized well, and unsuitable for very large scenes.
Ignore the dynamic lightmap family of techniques; they have no use for a space game. You could probably use a (relatively) simple scheme based on sparse light probes only existing at the corners of grid cells occupied by light receivers, updated each frame. On the CPU, calculate direct lighting incident to probes, bounce it once onto proxies of dynamic objects (ellipsoids? dunno) & back out to probes in range of the indirect light. Here you could also roughly compute occlusion. In the fragment shader, look up the cell & interpolate between the values at its probes in the normal direction. Lots of details to work out, but something like that should work.
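To illustrate the lookup half, something like this (invented ProbeGrid type, shown CPU-side; a real probe would store several directions, e.g. an ambient cube or SH vector, rather than a single value, and the shader version would do the same blend):
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

struct ProbeGrid {
    int nx, ny, nz;               // probe counts along each axis
    float cellSize;
    Vec3 origin;
    std::vector<Vec3> irradiance; // one RGB value per probe, for brevity
    Vec3 probe(int x, int y, int z) const { return irradiance[(z * ny + y) * nx + x]; }
};

// Trilinearly blend the 8 corner probes of the cell containing p.
// Assumes p lies inside the grid; clamping is left out for brevity.
Vec3 sampleProbes(const ProbeGrid& g, const Vec3& p) {
    float fx = (p.x - g.origin.x) / g.cellSize;
    float fy = (p.y - g.origin.y) / g.cellSize;
    float fz = (p.z - g.origin.z) / g.cellSize;
    int x = (int)std::floor(fx), y = (int)std::floor(fy), z = (int)std::floor(fz);
    float tx = fx - x, ty = fy - y, tz = fz - z;
    Vec3 c00 = lerp(g.probe(x, y,     z    ), g.probe(x + 1, y,     z    ), tx);
    Vec3 c10 = lerp(g.probe(x, y + 1, z    ), g.probe(x + 1, y + 1, z    ), tx);
    Vec3 c01 = lerp(g.probe(x, y,     z + 1), g.probe(x + 1, y,     z + 1), tx);
    Vec3 c11 = lerp(g.probe(x, y + 1, z + 1), g.probe(x + 1, y + 1, z + 1), tx);
    return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
}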
17ebdd No.3190
>>3102
I've been looking at a series of techniques.
Yeah, I was thinking lightmaps would be essentially useless for a game full of dynamic active elements.
I saw the article about voxels.
>tfw you realize UE4 ended up dropping them because they were slow and took up too much memory
My todo list has just gotten much longer, and now I'll have to rework everything.
Trying to include:
Order Independent Transparency
Real-Time Diffuse Global Illumination Using Radiance Hints
Tile based deferred rendering (and shadows? if possible)
Real-Time Volume Caustics with Adaptive Beam Tracing
Refraction/Reflection Shading
All the usual glow/bloom/blur/depth of field post processing etc
The most daunting thing will be salvaging useful data and reverse engineering this into one package from some of the junkiest code I've ever seen; typical of professors to do shit like this.
I've downloaded the demos and source for a few papers; I'll link those resources soon.
88f3ba No.3269
>>3190
Post all the sources or links you have related to graphics engines. That tiled deferred rendering one was nice. I've been looking for something like that for weeks, since I didn't understand how it works.
f9cd23 No.3274
>>3102
Sparse voxel octree cone-tracing is supposed to work great on the new Maxwell cards, the GTX 970/980. It might be a few years till we see it in games.
681827 No.3276
>>3274
They're already putting it back into UE4 because of that. However, unless it becomes part of the DX12 spec it won't be widely used, just like any of the Nvidia-specific gimmicks.
a3113b No.3290
>>3190
>tfw you realize UE4 ended up dropping them because they were slow and took up too much memory
Definitely a memory hog, but IIRC it wasn't too slow for mid-to-high-end PCs as optimized by Epic. Reckon it was just a lame cop-out to better equalize things with consoles, cross-platform development being king. Now they're using light propagation volumes (a pretty ugly/limited technique, hardly better than nothing). It'd been the only thing that really impressed me about UE4.
Folks have pointed out that cone-tracing on cascaded dense voxels instead of SVOs would be much faster. Never seen an implementation.
>>3276
>They're already putting it back into UE4 because of that.
Really? Sauce?
1ae603 No.3295
ad351c No.3389
>>3295
This guy released a demo of his engine based on Cyril Crassin's papers:
http://sdrv.ms/1cb8N99
Here's the vid:
www.youtube.com/watch?v=4-KSMRjUqGU
660002 No.3403
I'm mostly using baked lighting in my renderer, since I need it to run on very low-end mobile devices.
I would like to explore some ways I could make it better beyond that, though. Would, for example, sampling cubemap pictures of the scene at various points, converting them into spherical harmonics, and then using them to sample indirect lighting on moving objects at runtime be a mostly accurate technique?
Also, I'm hoping to create a workflow in a program like Blender (since they recently added lightmapping to their new ray-traced offline renderer) which would allow me to automatically sample the incoming lighting from different directions, to give me correct normal-mapped surfaces. Memory is also a concern.
681827 No.3413
File: 1412259042150.png (688.38 KB, 856x605, 856:605, nvidia-simulation-left-and….png)

>>3290
>sauce
er… the announcement of Maxwell? The moon landing demo is UE4.
681827 No.3414
>>3403
Simplest way is probably just light probes.
a3113b No.3506
>>3403
>Would, for example, sampling cubemap pictures of the scene at various points, converting them into spherical harmonics, and then using them to sample indirect lighting on moving objects at runtime be a mostly accurate technique?
Yeah, it's a common technique.
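Roughly, the projection step goes like this (a sketch: texelDir/texelRadiance are assumed helpers, the basis constants are the standard real SH bands 0-2, and the per-texel solid-angle weight is the usual cubemap formula):
#include <cmath>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct SH9  { Vec3 c[9]; };                   // one RGB coefficient per basis function

Vec3 texelDir(int face, float u, float v);    // assumed: unit direction through texel
Vec3 texelRadiance(int face, int tx, int ty); // assumed: RGB radiance of texel

static void shBasis(const Vec3& d, float b[9]) {
    b[0] = 0.282095f;
    b[1] = 0.488603f * d.y;  b[2] = 0.488603f * d.z;  b[3] = 0.488603f * d.x;
    b[4] = 1.092548f * d.x * d.y;  b[5] = 1.092548f * d.y * d.z;
    b[6] = 0.315392f * (3.f * d.z * d.z - 1.f);
    b[7] = 1.092548f * d.x * d.z;  b[8] = 0.546274f * (d.x * d.x - d.y * d.y);
}

SH9 projectCubemap(int size) {
    SH9 sh;
    float weightSum = 0.f;
    for (int face = 0; face < 6; ++face)
        for (int ty = 0; ty < size; ++ty)
            for (int tx = 0; tx < size; ++tx) {
                float u = 2.f * (tx + 0.5f) / size - 1.f;
                float v = 2.f * (ty + 0.5f) / size - 1.f;
                float t = 1.f + u * u + v * v;
                float dw = 4.f / (t * std::sqrt(t)); // texel solid angle (constant factor cancels below)
                float b[9];
                shBasis(texelDir(face, u, v), b);
                Vec3 rad = texelRadiance(face, tx, ty);
                for (int k = 0; k < 9; ++k) {
                    sh.c[k].x += rad.x * b[k] * dw;
                    sh.c[k].y += rad.y * b[k] * dw;
                    sh.c[k].z += rad.z * b[k] * dw;
                }
                weightSum += dw;
            }
    float norm = 4.f * 3.14159265f / weightSum;  // normalize so weights integrate to 4*pi
    for (int k = 0; k < 9; ++k) {
        sh.c[k].x *= norm; sh.c[k].y *= norm; sh.c[k].z *= norm;
    }
    return sh;
}
At runtime you evaluate the same 9 basis functions in the surface normal's direction and dot with the coefficients (with the usual cosine-lobe convolution weights if you want irradiance rather than radiance).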
7e8588 No.3887
http://www.cse.chalmers.se/~uffe/
The guys responsible for a fuckload of techniques, with links to each of their websites.
7e8588 No.4121
>>2245
>>2616
Welp, decided not to fuck with tile-based.
The same team is now circlejerking over superior clustered shading and
>muh one million lights!
http://www.cse.chalmers.se/~olaolss/main_frame.php?contents=publication&id=clustered_shading
204b02 No.5619
So let me get this straight
XYZ rotation is around the world unit vectors, whereas pitch/yaw/roll is relative to the object's "forward"/normal vector? How does that escape gimbal lock?
Also, sometimes rotation has a WXYZ component, what's W indicate?
a3113b No.5670
>>5619
>XYZ rotation is around the world unit vectors, whereas pitch/yaw/roll is relative to the object's "forward"/normal vector? How does that escape gimbal lock?
An axis and an angle can represent any rotation. Consider that you can describe a rotated object just in terms of its forward facing and its roll.
>Also, sometimes rotation has a WXYZ component, what's W indicate?
Dunno, it might be either an angle-axis representation (which doesn't work the illustrative way stated above) or a quaternion, both of which have 4 components.
681827 No.5695
>>4121
What's the point even?
f473f8 No.5712
>>5619
Undefined behavior is sometimes caused by attempting to rotate without a fixed order.
>muh RUB [x|y|z|pos]
Right vector -> [x*xk  | y*ysx | z*zsx | w]
Up vector    -> [x*xsy | y*yk  | z*zsy | w]
Back vector  -> [x*xsz | y*ysz | z*zk  | w]
Pos vector   -> [posx  | posy  | posz  | 1]
xk, yk, zk are scaling factors for each axis.
xsy skews x in relation to the y axis.
*position in relation to the camera divided by the sum of the other elements in the column; you multiply this by the sum of those elements and get the actual position.
The w's, I believe, warp the whole matrix to all fuck if touched, and I *think* they represent the difference between position and direction, hence positions get 1s and directions get 0s.
Gimbal lock isn't a problem when your axes of rotation are 'locked'; the right vector in local transforms shouldn't ever touch the front.
139f0d No.5728
>>5695
>Muh space shooter with massive fields of view
Gotta keep your matrices on point when you are scaling far out to millions of points.
Also, lights and volumetric effects will be situated across an extremely large range.
Also, to compensate for the lack of need for complex shadows.
Also, deep-space, low-lit combat involving ships making use of lighting, infrared, etc.
681827 No.5751
>>5728
Lighting in space is some of the simplest to do; usually two-point lighting suffices.
139f0d No.5764
>>5712
Went back over it; it's actually
Right vector -> [x*xscale | y*yskewx | z*zskewx | tR]
Up vector    -> [x*xskewy | y*yscale | z*zskewy | tU]
Back vector  -> [x*xskewz | y*yskewz | z*zscale | tB]
Pos vector   -> [posx     | posy     | posz     | 1 ]
So the 'w' is the translation, but it will likely fuck your matrix if it isn't uniform, because you will warp the representation.
b50783 No.6197
>>5619
>XYZ rotation is around the world unit vectors, whereas pitch/yaw/roll is relative to the object's "forward"/normal vector? How does that escape gimbal lock?
The world's X, Y, Z axes are always perpendicular. Gimbal lock occurs with yaw/pitch/roll when the middle of the 3 rotations is 90 degrees, causing the axes of the first and third rotations to be identical.
E.g. if you rotate in the order yaw, pitch, roll, and the pitch is 90 degrees (nose pointing vertical), both yaw and roll rotate about the world's Z (vertical) axis.
The problem is that it's not trivial to construct world-axis rotations, as concatenating X/Y/Z rotation matrices results in each transformation occurring in the coordinate system defined by the prior transformations.
>Also, sometimes rotation has a WXYZ component, what's W indicate?
Sometimes rotations are expressed in axis-and-angle form, where 3 of the components define a vector and the 4th component is the rotation angle about that vector. This is the form that glRotate() uses.
And sometimes rotations are expressed as quaternions, which also have 4 components. This is the preferred form for interpolation (e.g. between keyframes), as interpolation between quaternions is well-defined (SLERP = Spherical Linear intERPolation).
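For reference, a minimal sketch of both 4-component forms with toy types (just the math; this isn't glRotate's internals):
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };       // w is the scalar part

// Axis-angle -> quaternion (axis assumed normalized).
Quat fromAxisAngle(const Vec3& axis, float angleRad) {
    float s = std::sin(angleRad * 0.5f);
    return { std::cos(angleRad * 0.5f), axis.x * s, axis.y * s, axis.z * s };
}

// Rotate v by unit quaternion q: v' = v + 2w(u x v) + 2 u x (u x v), u = (x,y,z).
Vec3 rotate(const Quat& q, const Vec3& v) {
    Vec3 u  { q.x, q.y, q.z };
    Vec3 uv { u.y * v.z - u.z * v.y,   u.z * v.x - u.x * v.z,   u.x * v.y - u.y * v.x };
    Vec3 uuv{ u.y * uv.z - u.z * uv.y, u.z * uv.x - u.x * uv.z, u.x * uv.y - u.y * uv.x };
    return { v.x + 2.f * (q.w * uv.x + uuv.x),
             v.y + 2.f * (q.w * uv.y + uuv.y),
             v.z + 2.f * (q.w * uv.z + uuv.z) };
}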
f17e1b No.6270
Any OGRE users here?
2625bb No.7725
>>3102
Just going back over the thread.
>Ignore the dynamic lightmap family of techniques; they have no use for a space game. You could probably use a (relatively) simple scheme based on sparse light probes only existing at the corners of grid cells occupied by light receivers, updated each frame. On the CPU, calculate direct lighting incident to probes, bounce it once onto proxies of dynamic objects (ellipsoids? dunno) & back out to probes in range of the indirect light. Here you could also roughly compute occlusion. In the fragment shader, look up the cell & interpolate between the values at its probes in the normal direction. Lots of details to work out, but something like that should work.
This concept sounds vaguely familiar; is there a name for it?
Isn't this light propagation technique using some sort of voxel structure?
This dude has a demo with source code:
http://hd-prg.com/RVGlobalIllumination.html
It's sickening that many of these demos aren't OpenGL.
ffbc81 No.7758
>>5712
>>5619
>Gimbal lock
>Not using quaternions
>>5764
That is definitely not correct if you're thinking of a standard matrix4x4 * vector4.
Don't even touch the w component until doing the perspective divide, unless you're doing something unusual.
681827 No.7766
>>7758
>implying quaternions solve gimbal lock
2625bb No.7767
>>7758
>implying I don't use quaternions
>implying the problem isn't attempting to have hybrid local/global axes rather than rotation methods
31864a No.7779
>>7725
What I described is a lot like irradiance volumes, as introduced by
http://www.sci.utah.edu/~bigler/images/msthesis/The%20irradiance%20volume.pdf
but what I was getting at is to emphasize the sparsity of your scene.
>Isn't this light propagation technique using some sort of voxel structure
You can't get much mileage out of voxels in your case. Cascades, SVOs, no matter what you use, it's going to scale poorly to space-sized scenes. They're not needed anyway, since in space you're only ever gonna notice indirect light from very nearby moderately-sized objects and distant massive objects (the latter probably being static).
8e5d0f No.7786
>>7766
>Quaternions provide an alternative measurement technique that does not suffer from gimbal lock
ffbc81 No.7787
>>7767
>hybrid local/global axes rather than rotation methods
Wait, what?
I thought you were talking about a modelview transformation
What do you mean hybrid? As in its not a direct mapping from one space to another?
2625bb No.7819
>>7787
Sorry, by hybrid I mean incorrect, i.e. when people try to keep yaw/roll/pitch relative to global axes.
302e40 No.8746
Welp, finals are over; time for dev.
Finally implemented a text system properly.
302e40 No.8754
File: 1418138606926.png (158.37 KB, 1306x596, 653:298, ceilings for tile shading ….png)

>>8746
Now to optimize muh tiles and eventually get into clusters.
d024ef No.8755
So let's say someone wanted to get into 3D modelling for vidya purposes, but has never done any kind of 3D modelling before in their life. Where would one start?
"Don't do 3D" fags need not apply, as if I'm asking then I'm already aware that 3D is unnecessarily complicated when 2D is easier
2625bb No.8765
>>8755
Maya, Blender, and Sculptris+ZBrush seem pretty cool.
Also, MakeHuman seems like a good application.
2625bb No.8826
File: 1418238359758.png (989.66 KB, 810x628, 405:314, now limiting fps to reason….png)

Finally got compute shaders working; imageLoad was being a bitch, so I ended up using texelFetch.
2625bb No.9379
>>8826
Fuck, that was hard, and I'm still trying to force GLSL to allow me to use conditionals and local variables, because it is retarded.
Why the fuck is GLSL so buggy?
>have to make a new function solely because glsl for some reason flips the fuck out due to muh local variables
>have to do odd nasty-ass conditionals because muh glsl can't handle local variables
>too many conditionals and I can't optimize dis!!11
>tfw you realize post processing and thermal shaders only took you 5 mins, the code for the compute shader a few hours, and the bugs about a week
Does Nvidia even give a fuck about OpenGL?
Fuck, I'm just going to go cool off in Maya/C++ for the next week or so.
b17b82 No.9402
>>9379
Aren't they your bugs? I'm sure the technology is fine as-is, so the mistake probably lies in your code; talking about conditionals and local variables and bugs in parallel computations seems telling.
87d548 No.9429
>>9402
I'd usually have to end up nesting variables in a struct, or procedures in a function, for them to work fine, and even then I'd hit memory-related bugs.
Something is seriously wrong with the Nvidia compiler.
For example, checking the planes of view cubes (I couldn't even implement the other two sides) forced me to create a 'max4' function.
Instead of simply checking whether each side was within a light's radius and incrementing (which wouldn't work because of bugs, and also wouldn't force the other calculations), or doing
max(max(max(a,b),c),d) < Light.radius
you'd have to wrap it in a function and it'd work fine.
Conditionals really bug out if they are nested, even given default returns outside of all conditions.
2625bb No.12604
>>2245
If you are still here, anon, how are you doing culling?
The standard projection matrix loses precision for me (muh infinite projection matrix + infinite far distance).
3ce224 No.12772
So I'm trying to make a racing game sort of like F-Zero. I can't draw so I went online and tried to find some designs that would fit into that sort of game.
Can any kind anons please critique my model? Keep in mind that this is my first attempt and it has no textures or anything, just the basic shape.
cca15b No.12827
>>12772
Make the cockpit area smaller, depending on the actual size of the ship.
Also have a separate polygon for the back that comes in a bit.
Some sports-car-like shit on the sides might be cool too.
2625bb No.16844
Welp, semester over; finally took the two seconds of time to add slerp and lerp equations.
Plan to make everything modular and rework the graphics system.
Mainly because of
>muh Vulkan
Also constantly thinking about redesigning the whole engine so it can handle multiple techniques.
Also need to work on GI and server-side physics, so the client only has to handle graphics, input and network data.
2e426e No.16846
>tfw people make some crazy ass complicated renderer and their game basically sucks
19b7e7 No.16849
2625bb No.16887
>>16849
Vulkan will solve everything
I hope
b5a62f No.16972
>>16846
What is the demoscene?
Have some respect for the people who make your life so much easier by writing the damn things.
874fd0 No.16996
HOLY shit. So deferred rendering is actually slower?
2625bb No.16997
>>16846
This works well commercially
>>16996
Here's how I believe the hierarchy goes by light count, in my possibly incorrect opinion:
~0-5: Forward
~5-10: Deferred
~10-50: Tiled Forward
~50-250: Tiled Deferred
~250-300: Clustered Forward
~300+: Clustered Deferred
We are talking average light counts in the environment.
Found out the hard way that clustered is shitty for anything that isn't constantly using a fuckton of lights; if you were to start out with only 5 lights in a clustered scene, you'd notice the wide gap in FPS that you could be using for caustics etc.
24f50e No.17000
>>16996
Computational throughput has advanced more than memory bandwidth, and the gap only continues to widen. Whether on the CPU or GPU, poorly written (or trivial) code can easily spend more time waiting for memory than it does performing actual work.
Deferred rendering trades reduced shader compute work (cheap first pass, only need to perform the expensive second pass once per pixel) for increased bandwidth usage (due to G-buffers being fuckhuge compared to simple color buffers).
Forward rendering trades greater shader compute work (need to run the expensive shaders potentially multiple times per pixel due to overdraw) for reduced bandwidth usage (color buffers are usually only 32-64 bits).
You do the math.
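Rough numbers, with illustrative buffer sizes (no particular engine's layout):
#include <cstdio>

int main() {
    const double px   = 1920.0 * 1080.0;      // ~2.07M pixels
    const double gbuf = px * (8 + 8 + 4 + 4); // e.g. two RGBA16F targets + RGBA8 albedo + D24S8
    const double fwd  = px * (4 + 4);         // RGBA8 color + D24S8 depth
    std::printf("deferred: %.1f MB/frame, %.1f GB/s at 60 fps\n", gbuf / 1e6, gbuf * 60 / 1e9);
    std::printf("forward:  %.1f MB/frame, %.1f GB/s at 60 fps\n", fwd / 1e6, fwd * 60 / 1e9);
    // ~49.8 MB vs ~16.6 MB written per frame (~3.0 vs ~1.0 GB/s), before any
    // overdraw, texture reads, or the lighting pass re-reading the G-buffer.
}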
2f58ca No.17130
I've officially given up on OpenGL. It was actually a reasonable API to work with, but after version 3 they decided to DEPRECATE all the useful functions and make it 10 times harder to use. As an indie dev I really don't see the point of learning VAOs, VBOs and several million switches and options just to make a single polygon, when all you need is immediate mode and OpenGL 2.0. It's not like you need the latest features; it's still going to run at 60 fps if you're limiting yourself to indie games. Chances are Vulkan is going to replace it, or something else like Apple's Metal API in the future, so what's the point of learning this horrible API?
2f58ca No.17131
>>17130
Oh, and I forgot to mention GLSL. To make a polygon even work you have to write shader code for it, which is another language inside another language. It's probably cool and all if you're all comfy in university, learning this stuff in your dorm room and being chad, but to an indie dev living at mom's who is desperate for money, this shit is unacceptable.
2dba5f No.17161
>>17130
>>17131
If you're not thirsty for performance, why are you using raw OpenGL in the first place? Get an engine.
93c9b4 No.17163
>>17130
>immediate mode
Just go use Python for your engine while you are at it.
c98065 No.17169
>>17161
Because engines have too many limitations and constraints. I want the architecture of the framework to work exactly how I want it. I've used dozens of game engines and I feel the devs are lazy with support/features, or I don't like how they implement things. There are good game engines that do it right, but most of them are proprietary, so OpenGL and creating my own engine is the only way I can feel satisfied. The problem is OpenGL itself has become more difficult to use. When they were competing with DirectX they tried to make it more user-friendly so more people would adopt it, but now that they dominate the market with smartphones/embedded machines, they have no reason to listen to users or care about ease of use, because there is no other choice and we're all forced to use this shitty API.
c02cff No.17186
>>17130
You are an idiot if you do not see the amazing performance boost from ditching archaic practices like immediate mode. Besides being far faster, modern GL is actually simpler to use. VAOs, VBOs, and shaders take a little time to learn, just like everything else. You end up writing fewer lines of code and compartmentalizing your graphics data in a more OOP way.
24f50e No.17237
>>17130
modern opengl: upload an entire mesh (or ALL of your meshes, if you know how to batch) to a VBO, put all of your state flags into a VAO, render the whole thing at once with a single draw call
ancient opengl: spin in a loop drawing a triangle at a time. or, at best, copy the same mesh to the GPU every frame, when you could have just stuck it in GPU memory and sent a cheap draw call instead
Clearly immediate mode is superior.
2625bb No.17365
>>17130
VAOs and VBOs are a godsend and not hard to understand conceptually.
Treat VBOs like std::vectors, and VAOs as essentially a structure of those vectors, similar to this somewhat inaccurate pseudocode:
struct VBO {
    int type;
    vector<float> data;
};
struct VAO {
    int binding;
    vector<VBO> boundVBOs;
};
The switches will hopefully be gone with Vulkan; I've yet to wrap mine up... but if you do, they won't be a problem.
Vulkan builds on OpenGL; that's the point of learning it.
It won't be completely different.
It's essentially OpenGL 5 without all the fixed-pipeline and state fuckery.
17beeb No.17761
Anyone here dealt with large numbers of shader permutations? From what I've gathered with D3D11 and such I need my vertex format to exactly match the data in my vertex buffers, so for instance if my model vertex format diverges ever so slightly I need to compile a whole new shader for it with the right vertex attributes, even if I don't use them (since I need to specify the format exactly).
ATM the way the material system I'm using works is by toggling bits of code on and off to compile (thus even more shaders to compile), so basically I'm going to end up with (number of vertex formats) * (number of material variants) shaders at a minimum. Coupled with the further permutations possible from compiling variants of those shaders (such as for shadow maps), I'm easily looking at somewhere in the range of 100+ shaders being used for a scene.
Are there any good techniques I could be using to reduce this obscene amount of shaders I need to generate and switch between, or should I just give in and accept it?
847914 No.17775
>>17761
Reuse similar vertex and fragment shaders when you link them together into a new program. I doubt those are 100 unique programs, especially the vertex shader parts. Also make sure not to load all 100 of them into memory.
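Something like a keyed program cache covers both points (a sketch; the names and key layout are invented):
#include <cstdint>
#include <unordered_map>

using ProgramHandle = unsigned;   // e.g. a GLuint or a D3D shader-pair wrapper

ProgramHandle compileProgram(uint32_t vertexFormatId, uint32_t materialFlags); // assumed

struct ShaderCache {
    std::unordered_map<uint64_t, ProgramHandle> programs;

    ProgramHandle get(uint32_t vertexFormatId, uint32_t materialFlags) {
        uint64_t key = (uint64_t(vertexFormatId) << 32) | materialFlags;
        auto it = programs.find(key);
        if (it != programs.end())
            return it->second;    // permutations that collapse to the same pair are shared
        ProgramHandle p = compileProgram(vertexFormatId, materialFlags); // compiled lazily, on first use
        programs.emplace(key, p);
        return p;
    }
};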
e158c1 No.18915
Is it even worth using OpenGL 3 over 2 when the latter doesn't require players to have a DX10+ GPU? I don't intend to write the next Cryengine 2 or anything, just something that I can use to render Deus Ex-style levels without looking like complete shit or something a hipster artfag crapped out in Unity.
c02cff No.18917
>>18915
On PC? You should always try to learn the latest OpenGL. So I recommend you go with OpenGL 4.0.
Don't worry about making cooler shit with the newer versions. You can do the exact same on lower versions just with different methods.
2625bb No.19202
>>18917
>buffers
>bindless textures
Granted, this applies to compute shaders, although most of the shit the new GL versions implement is more efficient.
e2face No.19207
>>18915
Considering it has god damn awful vertex buffer update handling as standard, I would avoid using OpenGL 2. Look to use at least the version 3.1 API if you want to retain your sanity.
e158c1 No.21940
Would it be feasible to shade the actors in a scene using different techniques than the rest of the scene? Characters using cell shading while the environment is rendered more traditionally, for example.
df2b1f No.22183
>>21940
Yes, use different pipelines
2625bb No.25356
Anyone into Vulkan? Been doing mostly logic/AI shit as of late.
48f7d4 No.25661
Currently my engine uses a BSP tree, so I don't even need a Z-Buffer to render the world. Feels pretty nice.
2625bb No.25740
>>25661
What are the advantages and caveats of a BSP tree?
fb65bb No.25760
I want to make something like pic related (PSP graphics), but I have a toaster laptop that I've had since the beginning of high school, and I'm in my 4th year of college.
I can't emulate Wii/GC games anymore either.
What limitations do I have?
48f7d4 No.25765
>>25740
Really easy and fast collision detection, and no Z-buffer (although this is optional, and sometimes it's faster to use a Z-buffer).
Collision detection is the main reason to use it; it's way better at that than, for example, going through every solid object in your level. Rendering is faster, but only if your scenes are simple.
The disadvantage is that it's hard to code and understand at first, so you'll be spending some time (a month?) learning how BSP trees work, when really you could just use a Z-buffer and go through every solid object in your level, which is much easier to program. Even though it's slower and horribly inefficient to do that, computers have high enough performance to handle it if your level is simple.
Here's the tutorial I used for this engine: https://www.cs.utah.edu/~jsnider/SeniorProj/BSP1/default.html
It's pretty long and complicated, and took me a while to implement.
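The core traversal from that family of tutorials ends up looking something like this (simplified sketch; drawPolygons is left as a comment):
struct Plane { float a, b, c, d; };
struct Vec3  { float x, y, z; };

struct BspNode {
    Plane    splitter;
    BspNode* front = nullptr;
    BspNode* back  = nullptr;
    // polygons lying on the splitter plane live here
};

float side(const Plane& p, const Vec3& v) {
    return p.a * v.x + p.b * v.y + p.c * v.z + p.d;
}

// Painter's algorithm: draw the subtree on the far side of each splitter first,
// then the node's own polygons, then the near side; no Z-buffer needed.
void renderBackToFront(const BspNode* n, const Vec3& camera) {
    if (!n) return;
    if (side(n->splitter, camera) >= 0.f) {   // camera in front of the splitter
        renderBackToFront(n->back, camera);
        // drawPolygons(n);
        renderBackToFront(n->front, camera);
    } else {
        renderBackToFront(n->front, camera);
        // drawPolygons(n);
        renderBackToFront(n->back, camera);
    }
}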