Special Aircraft Service


Author Topic: Ambient Occlusion Mapping  (Read 7322 times)


Stainless

  • Modder
  • member
  • Offline
  • Posts: 1534
Re: Ambient Occlusion Mapping
« Reply #12 on: July 23, 2014, 08:43:28 AM »

When you render a triangle with vertex colours, the colour of any pixel in the triangle is interpolated based on its 'distance' from the vertices.

So if you had two white vertices and one black one, the triangle would be a gradient from white to grey.

This "should" be enough.
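
Roughly, that interpolation is just standard barycentric weighting - something like this sketch (not the tool's actual code; the triangle points and colours are made up for illustration):

Code:
# Barycentric interpolation of per-vertex colours across a triangle.
def barycentric_weights(p, a, b, c):
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return wa, wb, 1.0 - wa - wb

def interpolate_colour(p, a, b, c, col_a, col_b, col_c):
    wa, wb, wc = barycentric_weights(p, a, b, c)
    return wa * col_a + wb * col_b + wc * col_c

# Two white vertices and one black one: the centroid comes out light grey (~0.67).
print(interpolate_colour((1, 1), (0, 0), (3, 0), (0, 3), 1.0, 1.0, 0.0))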

Doing it per pixel in the texture is far more difficult, because I have to work out which triangle in the mesh a particular pixel belongs to (and of course it could be more than one - it's common to use the same part of the texture for multiple triangles), then do the ray tracing.

I'll give my way a try first and we can see what it looks like.


Herra Tohtori

  • Modder
  • member
  • Offline
  • Posts: 671
Re: Ambient Occlusion Mapping
« Reply #13 on: July 23, 2014, 10:35:22 AM »

Quote
When you render a triangle with vertex colours, the colour of any pixel in the triangle is interpolated based on its 'distance' from the vertices.

So if you had two white vertices and one black one, the triangle would be a gradient from white to grey.

This "should" be enough.

I don't think it's an ideal solution. The problem is that the distances between vertices are not equal, and they are sometimes quite long - this applies especially to wings. For example, think about the Spitfire Mk.IX, which has those radiators poking out of the underside of the wing at an almost 90-degree angle: the vertices separating the wing and the radiator sit in the "corner" and thus end up being a bit more occluded by the geometry than the rest of the wing.

Since the vertex colouring relies on averages, it can cause weird stuff like an occluded vertex's dark values spreading outward along the wing further than they should.

Here's what I mean. Imagine this is the mesh of the underside of an aircraft's wing, with a radiator protruding from it:



Now let's assign each vertex a colour based on how occluded it is by the model:



Note that the points on the corners of the protrusion are a different colour from the single vertex "on" the edge - the points on the corners are more "open" and less occluded, but that shouldn't affect the occlusion of points along the edges...

The problem becomes apparent when you apply the vertex colours to the faces - this is just an approximation made in GIMP, but it displays the problem well:



See how the darkness spreads to places where it will look odd?

With per-pixel ambient occlusion, the result should appear more like this:





Besides, vertices and edges are discontinuities - mathematically speaking, you can't define a tangent plane or a normal vector for them, and you need one of those to define the cone of rays you're using for occlusion checking. That makes it difficult to define proper ambient occlusion for them. On the other hand, you CAN define it for pixels on the faces of the mesh.


Quote
Doing it per pixel in the texture is far more difficult, because I have to work out which triangle in the mesh a particular pixel belongs to (and of course it could be more than one - it's common to use the same part of the texture for multiple triangles), then do the ray tracing.

But if you do it by applying vertex colours to the texture, you still need to somehow define which colour should represent which pixel.

If you do it per-pixel to begin with, it's the same problem - just in reverse. The UV coordinates should, by definition, tell you where on the model's surface a particular pixel lies. Unfortunately my knowledge of this matter is limited to theory rather than practical experience...


Quote
I'll give my way a try first and we can see what it looks like.

It's a good idea to experiment - after all, I could be totally wrong about this.

Stainless

  • Modder
  • member
  • Offline
  • Posts: 1534
Re: Ambient Occlusion Mapping
« Reply #14 on: July 23, 2014, 11:25:26 AM »

A mesh is basically a big pool of triangles.

A triangle is defined as three vertices.

These vertices have a position (x,y,z), a normal (nx,ny,nz), and a UV coordinate (u,v, obviously).

So from that it is easy for me to start the process: I have a triangle, I know where it is, and I can use the normal as the basis for my calculations.

Once I have an ambient value for a vertex, it's easy for me to use it to generate a texture.
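
In other words, the data looks roughly like this (illustrative Python, not the tool's actual classes; occlusion_of and texture.put_pixel are stand-ins for whatever does the ray testing and the texture writing):

Code:
from dataclasses import dataclass

@dataclass(frozen=True)
class Vertex:
    position: tuple   # (x, y, z)
    normal: tuple     # (nx, ny, nz)
    uv: tuple         # (u, v)

@dataclass(frozen=True)
class Triangle:
    a: Vertex
    b: Vertex
    c: Vertex

def bake_vertex_ao(mesh, texture, occlusion_of):
    # For every vertex, work out an ambient value from its position and
    # normal, then write it into the texture at the vertex's UV coordinate.
    for tri in mesh:
        for v in (tri.a, tri.b, tri.c):
            ambient = 1.0 - occlusion_of(v.position, v.normal)   # 1.0 = fully open
            u, vv = v.uv
            texture.put_pixel(u, vv, ambient)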

Now let's look at it the other way around.

I take a pixel. I then have to go through all the triangles and find any that contain this pixel, so I have to project the 2D UV coordinates onto a plane and do a point-in-triangle test for each of them.
Any that contain this pixel have to be put in a list.

I then have to loop through those triangles calculating an ambient value - but which one do I take? Say I have three triangles that are textured by that pixel. Two return dark values and one returns a light one. It's perfectly possible for that to happen, and actually quite likely.

That's my big issue. How do I resolve that in a way that works?
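
For illustration, the per-pixel lookup would be something like this (using the same illustrative Triangle/Vertex layout as above), and it can quite happily return several triangles for one texel:

Code:
def uv_weights(p, a, b, c):
    # Barycentric weights of UV point p in the UV-space triangle (a, b, c).
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    if den == 0:
        return None                          # degenerate UV triangle
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return wa, wb, 1.0 - wa - wb

def triangles_containing_texel(mesh, texel_uv):
    hits = []
    for tri in mesh:
        w = uv_weights(texel_uv, tri.a.uv, tri.b.uv, tri.c.uv)
        if w and all(x >= 0.0 for x in w):
            hits.append((tri, w))            # possibly more than one triangle
    return hits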

If you had a very linear relationship between the UV map and the triangles, then yes, it might be the best way to go.


Herra Tohtori

  • Modder
  • member
  • Offline
  • Posts: 671
Re: Ambient Occlusion Mapping
« Reply #15 on: July 23, 2014, 12:58:14 PM »

Ah. Right. That could be a problem.

Models that use mirroring, or otherwise replicate one location of the UV map on multiple locations of the model, will not work correctly with static ambient occlusion. It's just not going to work regardless of what method you use for generating the AO map - whether it's vertex colouring or per-pixel occlusion testing, it's just a matter of where you bump into the problem.

If you do the vertex colour based thing, you can define the occlusion for every vertex correctly, and you can even colour all the faces correctly - but when you try to translate that to a texture, you can end up with one part of the texture trying to take its pixel values from two different locations on the model.
 
If you try to do per-pixel based AO, a single pixel that is shared by two or more locations on the model will produce two or more different occlusion values. Technically, if those values happen to be the same it doesn't matter, but to do it right you would have to add an exclusion term that just assigns those problem pixels a null value (white).

Mathematically speaking, it's an example of a non-invertible function: a function where two different inputs produce the same output does not have a valid, defined inverse, because the inverse would have to return two outputs from a single input...

I'm not sure if this affects many models in IL-2 though. If you have a model where one pixel represents one point on the surface of the model, there's no problem.
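
A crude way to express that exclusion term, purely as a sketch (ao_map and texel_owner_count are assumed structures: the AO values per texel, and a pre-computed count of how many UV-mapped faces claim each texel):

Code:
def exclude_shared_texels(ao_map, texel_owner_count, width, height):
    # Texels claimed by more than one face get a neutral (white) value,
    # because no single static AO value can be right for several
    # locations on the model at once.
    for y in range(height):
        for x in range(width):
            if texel_owner_count.get((x, y), 0) > 1:
                ao_map[(x, y)] = 1.0        # 1.0 = unoccluded / white
    return ao_map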


As for the problem of knowing where a pixel on the texture sits on the model - isn't that just the same with vertex colouring?

If you do vertex colouring, you start working from the model towards the texture, and the last part of the process is translating values from the model's surface into the texture based on UV coordinates.

If you do per-pixel AO mapping, you start from the texture's pixels and their assigned UV coordinates, define their location on the model, do your occlusion checking based on the position of that point - and assign the resulting occlusion value directly to the pixel on the texture.

It's just a case of reversing the problem, but of course if you have a system that doesn't have that functionality already, I can understand that adding it might be problematic.
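
As a sketch of that reversal (illustrative names only): given a triangle that covers the texel and the texel's barycentric weights inside its UV footprint, the surface point and normal fall straight out of the vertex data:

Code:
def surface_point_for_texel(tri, weights):
    # tri.a/b/c each carry .position and .normal; weights are the texel's
    # barycentric coordinates inside the triangle's UV footprint.
    wa, wb, wc = weights
    pos = tuple(wa * pa + wb * pb + wc * pc
                for pa, pb, pc in zip(tri.a.position, tri.b.position, tri.c.position))
    nrm = tuple(wa * na + wb * nb + wc * nc
                for na, nb, nc in zip(tri.a.normal, tri.b.normal, tri.c.normal))
    length = sum(n * n for n in nrm) ** 0.5 or 1.0
    nrm = tuple(n / length for n in nrm)
    return pos, nrm   # cast the occlusion rays from pos, around nrm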


In fact, it might be best to do occlusion mapping with utilities that already have that feature. What you could do is make a way to export a model from IL-2 into a format that can be read directly by a program like 3DS Max, Maya or Blender - then people can just generate an AO map there at whatever resolution they wish.

Stainless

  • Modder
  • member
  • Offline
  • Posts: 1534
Re: Ambient Occlusion Mapping
« Reply #16 on: July 24, 2014, 12:22:12 AM »

 ;D ;D    "aye, but there's the rub"

Most modders start off with a model in Blender or Max or Maya or something.

They could have baked the AO map then, but they haven't. Asking them to go back to the original mesh and bake the AO now will probably go nowhere.

I'll keep going with this; in fact, my circuits are already committed to the problem and processing is underway.

But I'll save the baked texture just as the AO map, so people can edit it by hand before merging it with the skin.

Herra Tohtori

  • Modder
  • member
  • Offline
  • Posts: 671
Re: Ambient Occlusion Mapping
« Reply #17 on: July 24, 2014, 08:33:25 AM »

Well, I was thinking of something like a utility that can export an object from IL-2 into a single model file (Collada would probably be the most compatible), which could then be conveniently opened in a 3D editor of your choice to generate the AO map there.

Either way, I'll be very interested to see what you come up with, and I hope it works well. :)

Stainless

  • Modder
  • member
  • Offline
  • Posts: 1534
Re: Ambient Occlusion Mapping
« Reply #18 on: July 26, 2014, 03:52:01 AM »

You can already do that - my mod tool exports to 3ds.

This is the problem I am trying to solve.
This mesh is lovely, but the skin... well, it's flat and boring.





In fact, it's only when you rotate it that you can see all the detail:





On the way home last night I had a brilliant idea. What if I put a camera at each vertex and look along the vertex normal, then render the aircraft? The number of drawn pixels is a measure of how occluded the vertex is! Brilliant. AND my own idea!!

(Well, I found out later that it's a well-known technique.  ???  Bugger.)
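
In rough pseudocode the idea is this (render_aircraft_from is a stand-in for the tool's renderer, not a real call: it draws only the aircraft from a point, looking along the given normal, and returns how many pixels were covered; the vertices are assumed to carry .position and .normal):

Code:
def vertex_ambient_by_rendering(vertices, render_aircraft_from, viewport_pixels):
    ambient = {}
    for i, v in enumerate(vertices):
        covered = render_aircraft_from(v.position, v.normal)
        ambient[i] = 1.0 - covered / viewport_pixels   # more coverage = more occluded
    return ambient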

So I threw the code in and got this






Interesting, but wrong... I spent most of the rest of the journey trying to work out what was wrong. It should work.

Eventually I had another idea and threw in some more code. My mod tool can now display normals.






The normals are wrong. Arse. Back to square one.



Herra Tohtori

  • Modder
  • member
  • Offline
  • Posts: 671
Re: Ambient Occlusion Mapping
« Reply #19 on: July 26, 2014, 05:37:43 AM »

Independent discovery of a previously used technique does not diminish its value.

A vertex normal vector is an average of the surrounding faces' normal vectors. Is that model a continuous mesh, or an amalgamation of "chunks" melded together?

If the mesh is broken into pieces, it's possible that the vertex normals end up pointing in strange directions (i.e. not away from the mesh surface).
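
For reference, the usual way to rebuild them - a minimal sketch, assuming a plain list of position triples, so split "chunks" that share positions get merged normals:

Code:
def rebuild_vertex_normals(triangles):
    # triangles: list of ((ax,ay,az), (bx,by,bz), (cx,cy,cz)) position tuples.
    # Each vertex normal becomes the normalised sum of adjacent face normals.
    def sub(p, q):   return (p[0]-q[0], p[1]-q[1], p[2]-q[2])
    def add(p, q):   return (p[0]+q[0], p[1]+q[1], p[2]+q[2])
    def cross(u, v): return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])

    sums = {}
    for a, b, c in triangles:
        face_n = cross(sub(b, a), sub(c, a))            # unnormalised face normal
        for p in (a, b, c):
            sums[p] = add(sums.get(p, (0.0, 0.0, 0.0)), face_n)

    normals = {}
    for p, n in sums.items():
        length = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5 or 1.0
        normals[p] = (n[0]/length, n[1]/length, n[2]/length)
    return normals                                      # position -> averaged normal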


A bigger problem, I think, is that there are simply not enough vertices to produce acceptably accurate results, at least by my standards. I still suggest translating pixels to locations on the model's surface based on UV coordinates, and doing the occlusion rendering in the direction of the surface normal as defined at that point. Or just doing the AO bake in a dedicated program, using the 3DS export as the basis.

Stainless

  • Modder
  • member
  • Offline
  • Posts: 1534
Re: Ambient Occlusion Mapping
« Reply #20 on: July 26, 2014, 07:32:45 AM »

I'm putting several methods into the tool, you get to pick the technique you want to try.

Quote
A bigger problem, I think, is that there are simply not enough vertices to produce acceptably accurate results, at least by my standards. I still suggest translating pixels to locations on the model's surface based on UV coordinates, and doing the occlusion rendering in the direction of the surface normal as defined at that point. Or just doing the AO bake in a dedicated program, using the 3DS export as the basis.

That won't make anything better if the normals are wrong. I'm trying to come up with a way of fixing the normals, but it's not trivial.

I've tried calculating a face normal and swapping the winding order if the angle between the calculated and stored normals is greater than 45 degrees, but that doesn't seem to do anything.
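
Roughly, the check is this (a sketch, not the tool's actual code):

Code:
import math

def maybe_flip_winding(a, b, c, stored_normal):
    # a, b, c: vertex positions. If the geometric face normal disagrees with
    # the stored normal by more than 45 degrees, swap the winding order.
    def sub(p, q):   return (p[0]-q[0], p[1]-q[1], p[2]-q[2])
    def dot(u, v):   return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]
    def cross(u, v): return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])
    def norm(v):
        l = math.sqrt(dot(v, v)) or 1.0
        return (v[0]/l, v[1]/l, v[2]/l)

    face_n = norm(cross(sub(b, a), sub(c, a)))
    cos_angle = max(-1.0, min(1.0, dot(face_n, norm(stored_normal))))
    if math.degrees(math.acos(cos_angle)) > 45.0:
        return (a, c, b)     # flipped winding
    return (a, b, c)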

Stainless

  • Modder
  • member
  • Offline
  • Posts: 1534
Re: Ambient Occlusion Mapping
« Reply #21 on: July 29, 2014, 03:03:53 PM »

Still working on it - at the moment I'm concentrating on vertex-based AO, just because it's quicker.

The results are getting better.




Mixing this with the skin changes this



To this



I have also come up with a way of getting around the broken normals.

If I use rays generated in all directions, regardless of normals, then a point on a flat plane would have 50% of the rays blocked by geometry.

Anything above 50% means the point is occluded.

So simple.

I'm working on that next.
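
A minimal sketch of that full-sphere sampling (ray_hits_mesh stands in for whatever ray/triangle test the tool ends up using):

Code:
import math, random

def uniform_sphere_direction(rng):
    # Uniformly distributed direction on the unit sphere.
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def blocked_fraction(point, ray_hits_mesh, samples=1000, seed=1):
    # Fire rays in all directions and count how many hit the mesh.
    # A point on an open flat plate comes out around 0.5, so anything
    # above 0.5 means real occlusion by the surrounding geometry.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if ray_hits_mesh(point, uniform_sphere_direction(rng)))
    return hits / samples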

Herra Tohtori

  • Modder
  • member
  • Offline
  • Posts: 671
Re: Ambient Occlusion Mapping
« Reply #22 on: July 29, 2014, 04:39:23 PM »

Well, vertices that are on a "peak" or a "corner" might end up producing brighter-than-50% values, but if you then clip the 0-50% range and expand it to the 0-100% range, that should take care of THAT issue nicely.
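
Something like this, for example (blocked being the full-sphere hit fraction from your previous post):

Code:
def remap_ambient(blocked):
    # blocked: fraction of full-sphere rays that hit geometry (0.5 for a flat plate).
    brightness = 1.0 - blocked          # raw ambient value; 0.5 = flat plate
    return min(brightness, 0.5) * 2.0   # clip the 0-50% range, stretch to 0-100%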

Stainless

  • Modder
  • member
  • Offline
  • Posts: 1534
Re: Ambient Occlusion Mapping
« Reply #23 on: July 29, 2014, 05:59:00 PM »

This is the equivalent of about 100,000 rays per vertex




Now onto per pixel