Z Buffer
I was thinking about using layers so that if I put an image on top of another image, the colors wouldn't combine. So I was thinking of using a Z-buffer for layers. I found a tutorial on Z-buffers but I couldn't quite understand it. I'm not sure if that's the only way to make layers, but if anyone can tell me how to make a Z-buffer or how to make layers, that'd be great. :D
-
- Posts: 680
- Joined: May 28, 2005 1:11
- Contact:
-
- Posts: 341
- Joined: May 27, 2005 7:01
- Location: Canada
- Contact:
A Z-buffer is mostly associated with OpenGL, where it is used to tell whether an object should be hidden from view by another object.
For 2D programming you simply want off-screen buffers that you can then blit with transparency onto a final off-screen buffer to form a composite of the image you want.
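A minimal sketch of that compositing idea, simulating "surfaces" as 2-D lists of color values (the colorkey value and the tile/NPC colors here are made up for illustration):

```python
# Sketch of compositing layers onto an off-screen buffer, where a
# colorkey color marks the transparent pixels that are skipped on blit.

MAGENTA = 0xFF00FF  # hypothetical colorkey value

def blit(src, dest, colorkey=None):
    """Copy src onto dest, skipping any pixel equal to colorkey."""
    for y, row in enumerate(src):
        for x, color in enumerate(row):
            if color != colorkey:
                dest[y][x] = color

G, N = 1, 2  # stand-ins for a grass tile color and an NPC color
tiles   = [[G, G], [G, G]]                    # main tile surface
objects = [[MAGENTA, N], [MAGENTA, MAGENTA]]  # NPC surface, mostly transparent

backbuffer = [[0, 0], [0, 0]]
blit(tiles, backbuffer)                       # base layer first
blit(objects, backbuffer, colorkey=MAGENTA)   # then objects on top

# backbuffer is now [[G, N], [G, G]]: the NPC covers one tile, and the
# grass shows through every colorkeyed (transparent) pixel.
```

The draw order matters: whatever is blitted last ends up on top, which is exactly the layering effect asked about.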
You can easily use the depth buffer in a 2D engine. It's a bit different, though: you have to write the Z value of the tile (or pixel) where you want it. For instance...
Since OpenGL natively uses a right-handed coordinate system...
Tile(1).Z = -1 '1st layer
Tile(2).Z = 0  '2nd layer
Tile(3).Z = 1  '3rd layer
Just make your 2D tileset 3D instead. ;)
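A minimal sketch of that layer-to-Z idea, as a software simulation of the depth test rather than actual OpenGL (layer names and buffer size are made up). In a right-handed view the camera looks down -Z, so a larger Z is closer and wins the test:

```python
# Per-pixel depth-tested write: keep a fragment only if its Z is
# nearer (larger, in this convention) than what is already stored.

def draw(color, z, framebuffer, zbuffer, x, y):
    if z > zbuffer[y][x]:        # nearer than the stored depth
        zbuffer[y][x] = z
        framebuffer[y][x] = color

W = H = 1
framebuffer = [[None] * W for _ in range(H)]
zbuffer = [[float("-inf")] * W for _ in range(H)]

# three layers drawn deliberately out of order; the Z values from the
# post (-1, 0, 1) decide the stacking, not the draw order
draw("layer3", 1, framebuffer, zbuffer, 0, 0)   # top layer
draw("layer1", -1, framebuffer, zbuffer, 0, 0)  # bottom layer
draw("layer2", 0, framebuffer, zbuffer, 0, 0)   # middle layer

# framebuffer[0][0] ends up "layer3" regardless of draw order
```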
Shadowwolf wrote:A Z-buffer is mostly associated with OpenGL, where it is used to tell whether an object should be hidden from view by another object.
For 2D programming you simply want off-screen buffers that you can then blit with transparency onto a final off-screen buffer to form a composite of the image you want.
Soooo... are you saying to make multiple off-screen buffers and make them transparent or see-through?
Here is an example of what I mean.
In SDL you can set a surface to transparent by setting its colorkey, meaning any pixel of the colorkey color won't be blitted to the destination surface.
So you can make a composite image by having one surface as your main tile surface and another as your NPC/object surface. Render the images to those surfaces, then blit them to your off-screen buffer: the tile surface is blitted first, then the NPC/object surface, effectively combining the two images. Then you can flip the off-screen buffer into view.
You can do the same thing with the GFX lib as well, with PUT and its TRANS mode.
Shadowwolf wrote:Here is an example of what I mean.
In SDL you can set a surface to transparent by setting its colorkey, meaning any pixel of the colorkey color won't be blitted to the destination surface.
So you can make a composite image by having one surface as your main tile surface and another as your NPC/object surface. Render the images to those surfaces, then blit them to your off-screen buffer: the tile surface is blitted first, then the NPC/object surface, effectively combining the two images. Then you can flip the off-screen buffer into view.
You can do the same thing with the GFX lib as well, with PUT and its TRANS mode.
Uhhh, I still didn't understand. Sorry...
Also, can anyone tell me about the different buffers? I've heard of frame and depth buffers but I don't know what they are. XP
In 3D there are some common kinds of buffers which are used by today's hardware, for example, but you can create any buffer you want, storing any info you want. Any of these buffers can be simulated in software as a tradeoff with speed. These buffers usually store information per pixel, not per object.
There is usually a color buffer or draw buffer (sometimes called the frame buffer), the one you draw in with PAINT, PSET, PUT etc. Even the oldest VGAs have this color buffer. Then there is the most common Z-buffer, and in today's hardware there is usually a stencil buffer too, which has no special purpose and is free for the programmer to use. In hardware specs, sometimes all these buffers combined are called the frame buffer.
Imagine the Z-buffer as an array with the same x and y dimensions as your game screen/draw buffer. Where the draw buffer stores information about a pixel's color, the Z-buffer stores information about a pixel's Z, or better said, its distance from the screen.
So if you want to visualize it, imagine the same rendered image as in the draw buffer but with a different color scheme. A good one is visualizing the furthest pixels as black and the closest pixels as white, with the pixels in between taking any shade between the two extremes (so the color varies through different shades of grey based on distance).
What is a Z-buffer good for? Imagine you draw a rectangular plane at some distance from the screen, parallel to it. Then imagine you are drawing some complex object like a torus partly "sunk" into (intersecting) that plane, one pixel at a time. As calculating a pixel's color for Gouraud shading is expensive, you first calculate the Z (distance from screen) of that pixel and compare it to the value stored in the Z-buffer. If the Z value of the drawn pixel is bigger than the stored one (it is "closer" to the screen), then the pixel is not obscured by the plane and must be drawn using the expensive Gouraud calculations. Otherwise the pixel is farther than the plane and obscured, so no Gouraud calculations are needed and you move on to the next pixel of the torus.
The advantage of a Z-buffer is that you can draw anything to the screen in any order and it will sort out correctly (this is not true for translucent polys), with even pixel-perfect intersections between objects. The disadvantage is that for every drawn pixel you must do at least one Z-buffer test. So the golden rule of CG is still valid: "the fastest drawing is no drawing at all", because every Z-buffer test is a special kind of drawing with "depth".
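The plane/torus idea above can be sketched in a few lines of Python (the `shade()` function just stands in for an expensive Gouraud computation, and the Z values are made up; bigger Z means closer, matching the text):

```python
# Early-reject depth test: compare a fragment's Z against the Z-buffer
# BEFORE doing any expensive shading. A counter shows how much work
# the obscured fragment saved.

shaded = 0

def shade():
    global shaded
    shaded += 1                  # pretend this is costly Gouraud math
    return "expensive_color"

def draw_fragment(z, stored):
    if z > stored["z"]:          # nearer: passes the depth test
        stored["color"] = shade()
        stored["z"] = z
    # else: obscured by what is already there, skip the shading entirely

pixel = {"color": "plane_color", "z": 0.5}  # the Plane, already drawn
draw_fragment(0.2, pixel)        # torus pixel behind the Plane: rejected
draw_fragment(0.8, pixel)        # torus pixel in front: shaded and stored

# only the second fragment paid for shading, so shaded == 1
```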
However, you can use this Z-buffer to your advantage in a 2D tiling engine.
You can set up OpenGL in orthogonal mode (not perspective) and draw the base "layer" of tiles (quads) parallel to the screen with a Z position equal to, for example, -1000. You are effectively drawing the tiles very far away from the screen/camera, but perspective deformation won't apply since you are in orthogonal mode. Then you can draw another set of tiles for the "upper layer" with Z set to -999. They must be keyed for transparency (meaning the transparent parts of the tiles must be marked, and won't be drawn), otherwise they will completely overdraw the underlying "layer". If you continue drawing "layers" this way, incrementing the tiles' Z position for each layer, you can have an almost unlimited number of them, as today's hardware Z-buffers support 32-bit values. You can even draw randomly to different layers, as the Z-buffer will sort things out. And if you draw from the farthest to the closest layer, you can even have working translucency, everything hardware accelerated, with scaling and filtering.
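That last point, that translucency only works when layers are drawn farthest to closest, comes from how alpha blending composites each layer over what is already in the framebuffer. A minimal sketch of one pixel (the colors and 50% alpha values are made up):

```python
# Back-to-front ("painter's order") alpha blending of one pixel.
# Each translucent layer is blended OVER the framebuffer contents,
# so the draw order must run from the farthest layer to the closest.

def blend_over(src_rgb, src_alpha, dst_rgb):
    """Standard 'over' operator: src composited on top of dst."""
    return tuple(src_alpha * s + (1.0 - src_alpha) * d
                 for s, d in zip(src_rgb, dst_rgb))

pixel = (0.0, 0.0, 0.0)                          # cleared background

# farthest layer first: 50% translucent red, then 50% translucent blue
pixel = blend_over((1.0, 0.0, 0.0), 0.5, pixel)  # far layer
pixel = blend_over((0.0, 0.0, 1.0), 0.5, pixel)  # near layer

# the blue (closest) layer dominates; swapping the two calls would
# give a different color, which is why draw order matters here while
# opaque, depth-tested tiles can be drawn in any order.
```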