With all due respect to you, honestly you are all talking nonsense.
Each one of you philosophizes and refuses to acknowledge the other's opinion.
A shader is the following:
Polygons are covered with a material that defines what the object is, whether rock, water, skin, flesh, bone, or anything else, by varying the bumpiness and its shape, the degree of shininess, and so on. It contains the following properties (some of them; a rough GLSL sketch follows this list):
1- diffuse map: the color or texture, which everyone knows
2- spec map: the shininess
3- bump map: the surface bumps
4- normal bump mapping: the live bumps tied to the polygons
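To make the list above concrete, here is a minimal legacy-GLSL fragment shader sketch; the uniform and varying names are invented for the example, and it assumes a matching vertex shader that supplies the texture coordinates and the light/view directions in tangent space. Item 3 (the height-based bump map) is left out for brevity, since item 4 covers the per-pixel surface detail.

    // Minimal sketch only: samples a diffuse, specular and normal map
    // and lights the surface with a single light. The varying names
    // (v_uv, v_lightDirTS, v_viewDirTS) are assumed to be written by a
    // matching vertex shader in tangent space.
    uniform sampler2D diffuseMap;   // 1- base color / texture
    uniform sampler2D specMap;      // 2- per-pixel shininess mask
    uniform sampler2D normalMap;    // 4- per-pixel normals (normal bump mapping)

    varying vec2 v_uv;
    varying vec3 v_lightDirTS;      // light direction, tangent space
    varying vec3 v_viewDirTS;       // view direction, tangent space

    void main()
    {
        // Unpack the stored normal from [0,1] to [-1,1].
        vec3 N = normalize(texture2D(normalMap, v_uv).rgb * 2.0 - 1.0);
        vec3 L = normalize(v_lightDirTS);
        vec3 V = normalize(v_viewDirTS);

        float diff = max(dot(N, L), 0.0);
        vec3  H    = normalize(L + V);
        float spec = pow(max(dot(N, H), 0.0), 32.0) * texture2D(specMap, v_uv).r;

        vec3 base = texture2D(diffuseMap, v_uv).rgb;
        gl_FragColor = vec4(base * diff + vec3(spec), 1.0);
    }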
And please, guys, no philosophizing, with all due respect to you.
You have made me somewhat dislike the forum.
At the very least,
search en.wikipedia.com
Shader
From Wikipedia, the free encyclopedia
Introduction
In the 3D computer graphics creation process, shaders are used to define "materials" and are attached to the different surfaces of objects. For example, for a simple ceramic mug model, a single ceramic shader will be attached to the entire mug; for a hammer model, a rubber or plastic shader will be attached to the handle and a metal shader to the striking surface. During the
rendering process, the rendering program will run the various surface shaders as it attempts to draw the various surfaces, passing to the shader programs all of the needed parameters, such as the specific 3D location of the part of the surface being drawn, the location, directions, and colors of all the lights, any texture or bump maps needed by the shader, and environment or shadow maps. The shader program will return as its output the final color of a given pixel in a scene. In layman's terms, a shader answers the question (that the rendering program asks),
"Given the locations of all the objects, all the lights, and the camera in a scene, what color should I draw this particular object at this particular location?"
In non-realtime applications for which rendering speed is less important than final image quality, such as special effects for film and television, shaders are typically run in software, and can be arbitrarily complex. The
RenderMan Shading Language is a common non-realtime shading language used extensively by visual effects studios for film and television.
In realtime applications where rendering speed is of utmost importance, such as video games, the shading portion of the rendering pipeline runs on specialized hardware on modern
video cards. When specialized 3D-accelerated video cards first appeared on the market, the shader on any object was essentially hard coded as a simple
Phong surface shader, for which a few basic lighting parameters (color, specular, diffuse and ambient) could be tweaked, with a few additional features (such as texture maps and hardware fog). Recently, new video cards have been released which allow arbitrary (but length-limited) shader programs to be run on hardware, resulting in vast improvements in video game graphics.
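For illustration, that fixed Phong-style model roughly corresponds to something like the following legacy-GLSL fragment shader sketch, where only a handful of material parameters can be tweaked. The uniform and varying names are invented for the example, and the real early hardware implemented this in fixed-function circuitry rather than in a programmable shader.

    // Sketch of the old "hard-coded" lighting model: only a handful of
    // material parameters (ambient, diffuse, specular) can be tweaked.
    uniform vec3  ambientColor;
    uniform vec3  diffuseColor;
    uniform vec3  specularColor;
    uniform float shininess;
    uniform vec3  lightDir;        // directional light, eye space

    varying vec3 v_normal;         // interpolated from the vertex shader
    varying vec3 v_viewDir;

    void main()
    {
        vec3 N = normalize(v_normal);
        vec3 L = normalize(-lightDir);
        vec3 V = normalize(v_viewDir);
        vec3 R = reflect(-L, N);

        float diff = max(dot(N, L), 0.0);
        float spec = pow(max(dot(R, V), 0.0), shininess);

        gl_FragColor = vec4(ambientColor
                          + diffuseColor  * diff
                          + specularColor * spec, 1.0);
    }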
Shaders are usually written using a
shading language, a specifically designed
programming language.
By design, hardware shaders are ideal candidates for
parallel execution by multiple graphic processors, which are usually located on a video card, allowing for scalable
multiprocessing and lessening the burden on the
CPU for rendering scenes.
The increasing performance and programmability of shader-based architectures attracted researchers interested in exploiting the new parallel model for
General Purpose computation on GPUs. This demonstrated that shaders could be used to process a large variety of information, and not just rendering-specific tasks. This new programming model, which resembles
stream processing, allows high computational rates at extremely low cost that will operate on a wide installed base (e.g. the common home PC).
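As a toy illustration of this GPGPU idea, the fragment shader sketch below treats two textures as input arrays and computes their element-wise sum into the framebuffer, one output element per pixel. The names are illustrative, and the host-side setup (rendering a full-screen quad and reading the result back) is omitted.

    // GPGPU-style sketch: each fragment adds one element of "arrayA" to
    // the matching element of "arrayB". The data is stored in textures
    // and the result is read back from the framebuffer by the host.
    uniform sampler2D arrayA;
    uniform sampler2D arrayB;

    varying vec2 v_uv;   // one texel = one data element

    void main()
    {
        vec4 a = texture2D(arrayA, v_uv);
        vec4 b = texture2D(arrayB, v_uv);
        gl_FragColor = a + b;    // element-wise sum, not a color at all
    }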
Real-time shader structure
There are different approaches to shading, mainly because of the various applications of the targeted technology. Production shading languages are usually at a higher abstraction level, avoiding the need to write specific code to handle lighting or shadowing. In contrast, real-time shaders integrate light and shadowing computations. In those languages, the lights are passed to the shader itself as
parameters.
There are actually two different applications of shaders in real-time shading languages: vertex shaders and pixel shaders. Although their feature sets converged (meaning that it is now possible to write a vertex shader using the same
functions as a pixel shader), the different purposes of computation impose limitations that must be acknowledged.
Vertex shaders
Vertex shaders are applied for each vertex and run on a programmable vertex processor. Vertex shaders define a method to compute vector space transformations and other linearizable computations.
A vertex shader expects various inputs:
Uniform variables are constant values for each shader invocation. Changing the value of a uniform variable between different shader invocation batches is permissible. This kind of variable is usually a 3-component array, but it does not need to be. Usually, only basic datatypes are allowed to be loaded from external APIs, so complex structures must be broken down [1]. Uniform variables can be used to drive simple conditional execution on a per-batch basis. Support for this kind of branching at the vertex level was introduced in Shader Model 2.0.
Vertex attributes are a special case of variant variables: essentially per-vertex data such as vertex positions. Most of the time, each shader invocation performs computation on a different data set. The external application usually does not access these variables directly but manages them as large arrays. Besides this, applications are usually capable of changing a single vertex attribute with ease. Branching on vertex attributes requires a finer degree of control, which is supported by extended Shader Model 2.
Vertex shader computations are meant to provide the following stages of the graphics pipeline with
interpolatable fragment attributes. Because of this, a vertex shader must output
at least the transformed homogeneous vertex position (in
GLSL).
Outputs from different vertex shader invocations within the same batch will be linearly interpolated across the primitive being rendered. The result of this linear interpolation is then fed to the next pipeline stage.
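A minimal legacy-GLSL vertex shader sketch ties these pieces together: one uniform (constant across the batch), per-vertex attributes, an interpolatable output, and the required homogeneous position. The user-defined names are invented for the example.

    // Minimal vertex shader sketch: one uniform, two attributes, one
    // varying, and the required homogeneous position output.
    uniform mat4 u_modelViewProjection;   // constant for the whole batch

    attribute vec3 a_position;            // per-vertex data
    attribute vec2 a_texCoord;

    varying vec2 v_uv;                    // will be interpolated per fragment

    void main()
    {
        v_uv = a_texCoord;
        // A vertex shader must at least output the transformed
        // homogeneous vertex position.
        gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
    }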
Some examples of a vertex shader's functionality include arbitrary mesh deformation (possibly faking lens effects such as fish-eye) and vertex displacements in general, as well as computing linearizable attributes for later pixel shaders, such as texture coordinate transformations. Vertex shaders cannot create vertices.
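As a concrete instance of such a deformation, this sketch (with invented uniform names) displaces each vertex along its normal with a sine wave before projecting it.

    // Sketch of a vertex displacement: wobble the mesh along its normals.
    uniform mat4  u_modelViewProjection;
    uniform float u_time;        // animation time, set by the application
    uniform float u_amplitude;

    attribute vec3 a_position;
    attribute vec3 a_normal;

    void main()
    {
        float wave = sin(a_position.x * 4.0 + u_time) * u_amplitude;
        vec3 displaced = a_position + a_normal * wave;
        gl_Position = u_modelViewProjection * vec4(displaced, 1.0);
    }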
Pixel shaders
Pixel shaders — or "fragment shaders" in OpenGL nomenclature — are used to compute properties which, most of the time, are recognized as pixel colors.
Pixel shaders are applied
for each pixel. They are run on a pixel processor, which usually features much more processing power than its vertex-oriented counterpart. As of
October 2005, some architectures are merging the two processors into a single one to increase transistor usage and provide some kind of
load balancing.
As previously stated, a pixel shader expects its input in the form of interpolated vertex values. This means there are two sources of information:
Uniform variables can still be used and provide interesting opportunities. A typical example is passing an integer providing a number of lights to be processed and an array of light parameters. Textures are special cases of uniform values and can be applied to vertices as well, although vertex texturing is often more expensive.
Varying attributes are a special name for a fragment's variant variables, which are the interpolated vertex shader outputs. Because of their origin, the application has no direct control over the actual values of those variables.
Branching on the pixel processor was also introduced with an extended Pixel Shader 2 model, but hardware supporting it efficiently is only now becoming commonplace, usually alongside full Pixel Shader Model 3 support.
A pixel shader is allowed to
discard the results of its computation, meaning that the corresponding
framebuffer position must retain its previous value.
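In GLSL this is done with the discard keyword; the sketch below (illustrative names) throws away fragments whose texel is nearly transparent, leaving the framebuffer untouched at those positions.

    // Sketch of fragment rejection: transparent texels are discarded, so
    // the existing framebuffer color at that position is kept.
    uniform sampler2D diffuseMap;
    varying vec2 v_uv;

    void main()
    {
        vec4 texel = texture2D(diffuseMap, v_uv);
        if (texel.a < 0.1)
            discard;             // no color is written for this fragment
        gl_FragColor = texel;
    }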
Pixel shaders also do not need to write specific color information, because this is not always wanted. Not producing color output when it is expected, however, gives undefined results in GLSL.
Pixel shaders have been employed to apply accurate
lighting models, simulate multi-layer surface properties, simulate natural phenomena such as turbulence (vector field simulations in general), and apply depth of field to a scene, as well as other color-space transformations.
Geometry shaders
Geometry shaders are a new type of shader arriving in next-generation graphics hardware such as the
GeForce 8 Series and
Radeon R600. Geometry shaders perform per-primitive operations on vertices grouped into primitives (triangles, lines, strips, and points) output by vertex shaders. Geometry shaders can make copies of the input primitives, so unlike vertex shaders they can actually create new vertices. Examples of use include
shadow volumes on the GPU, render-to-cubemap and
procedural generation.
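As a sketch, a minimal pass-through geometry shader in the extension syntax of that hardware generation (GL_EXT_geometry_shader4) simply re-emits each input vertex, but the same loop could call EmitVertex() more times than it received vertices and thus create new geometry. The input and output primitive types are assumed to be configured by the application.

    #version 120
    #extension GL_EXT_geometry_shader4 : enable
    // Pass-through geometry shader sketch: copies the incoming primitive.
    // Unlike a vertex shader, it could also emit more vertices than it
    // received, i.e. create new geometry.
    void main()
    {
        for (int i = 0; i < gl_VerticesIn; ++i)
        {
            gl_Position = gl_PositionIn[i];
            EmitVertex();
        }
        EndPrimitive();
    }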
Lighting & shadowing
Considering the lighting equation, there has been a clear trend toward moving evaluations to fragment
granularity. Initially, the lighting computations were performed at vertex level (
Gouraud shading using the
Phong reflection model), but improvements in fragment processor designs made it possible to evaluate much more complex lighting equations such as
Phong shading and
bump mapping.
It is well acknowledged that lighting really needs hardware support for dynamic loops (this is often referred to as
DirectX Pixel Shader Model 3.0), because this allows many lights of the same type to be processed with a single shader. By contrast, previous shading models would have required the application to use
multi-pass rendering (an expensive operation) because of the fixed loops. This approach would also have required more complicated machinery.
For example, after finding there are 13 "visible" lights, the application would need to use one shader to process 8 lights (supposing this is the upper hardware limit) and another shader to process the remaining 5. If there were 7 lights, the application would have needed a special 7-light shader.
By contrast, with dynamic loops the application can iterate over dynamic variables, defining a uniform array to be 13 (or 7) "lights long" and getting correct results, provided this actually fits within hardware capabilities
[2]. As of October
2005, there are enough resources to evaluate over 50 lights per pass when resources are managed carefully.
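With dynamic loops, the 13-light case above reduces to a single shader and a single uniform integer. A rough legacy-GLSL sketch (all names invented, diffuse lighting only for brevity) might look like this:

    // Sketch of a single-pass multi-light loop driven by a uniform.
    // MAX_LIGHTS is a compile-time hardware budget; u_numLights is the
    // actual count (e.g. 13 or 7) uploaded by the application.
    const int MAX_LIGHTS = 16;

    uniform int  u_numLights;
    uniform vec3 u_lightPos[MAX_LIGHTS];
    uniform vec3 u_lightColor[MAX_LIGHTS];

    varying vec3 v_position;   // surface position, eye space
    varying vec3 v_normal;

    void main()
    {
        vec3 N = normalize(v_normal);
        vec3 result = vec3(0.0);

        for (int i = 0; i < u_numLights; ++i)   // dynamic loop bound
        {
            vec3 L = normalize(u_lightPos[i] - v_position);
            result += u_lightColor[i] * max(dot(N, L), 0.0);
        }
        gl_FragColor = vec4(result, 1.0);
    }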
Computing accurate shadows makes this much more complicated, depending on the algorithm used. Compare
stencil shadow volumes and
shadow mapping. In the first case, the algorithm requires at least some care to be applied to multiple lights at once, and there is no actual proof of a multi-light shadow-volume-based version. Shadow mapping, by contrast, seems to be much better suited to future hardware improvements and to the new shading model, which also evaluates computations at fragment level.
Shadow maps, however, need to be passed as samplers, which are limited resources: current hardware (as of 27 October 2005) supports up to 16 samplers, so this is a hard limit unless some tricks are used. It is speculated that future hardware improvements and packing multiple shadow maps into a single 3D texture will rapidly raise this resource availability.
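Each shadow map consumes one of those sampler slots. This legacy-GLSL sketch (illustrative names) assumes the vertex shader has already projected the fragment into the light's clip space and performs a single shadow-map test:

    // Sketch of a shadow-map test: one sampler slot per shadow map.
    uniform sampler2DShadow u_shadowMap;   // depth texture with compare mode
    uniform vec3            u_lightColor;

    varying vec4 v_shadowCoord;   // fragment position in light clip space
    varying vec3 v_normal;
    varying vec3 v_lightDir;

    void main()
    {
        // shadow2DProj returns 1.0 when the fragment is lit, 0.0 in shadow.
        float lit  = shadow2DProj(u_shadowMap, v_shadowCoord).r;
        float diff = max(dot(normalize(v_normal), normalize(v_lightDir)), 0.0);
        gl_FragColor = vec4(u_lightColor * diff * lit, 1.0);
    }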