Graduation Project (Thesis): Research and Application of Mapping Techniques -- Foreign Literature Translation
Undergraduate Graduation Project (Thesis): Foreign Literature Translation

Mapping
From Wikipedia

Bump mapping is a technique in computer graphics to make a rendered surface look more realistic by modeling the interaction of a bumpy surface texture with lights in the environment.
Bump mapping does this by changing the brightness of the pixels on the surface in response to a heightmap that is specified for each surface. When rendering a 3D scene, the brightness and color of the pixels are determined by the interaction of a 3D model with the lights in the scene. After it is determined that an object is visible, trigonometry is used to calculate the "geometric" surface normal of the object, defined as a vector at each pixel position on the object. The geometric surface normal then defines how strongly the object interacts with light coming from a given direction, using Phong shading or a similar lighting algorithm: light traveling perpendicular to a surface interacts more strongly than light that is more parallel to the surface. After the initial geometry calculations, a colored texture is often applied to the model to make the object appear more realistic.

After texturing, a calculation is performed for each pixel on the object's surface:

1. Look up the position on the heightmap that corresponds to the position on the surface.
2. Calculate the surface normal of the heightmap at that position.
3. Add the surface normal from step two to the geometric surface normal, so that the combined normal points in a new direction.
4. Calculate the interaction of the new "bumpy" surface with the lights in the scene using, for example, Phong shading.

(A code sketch of these steps appears at the end of this passage.) The result is a surface that appears to have real depth. The algorithm also ensures that the surface appearance changes as the lights in the scene are moved around. Normal mapping is the most commonly used bump mapping technique, but there are other alternatives, such as parallax mapping.

A limitation of bump mapping is that it perturbs only the surface normals, without changing the underlying surface itself. Silhouettes and shadows therefore remain unaffected. This limitation can be overcome by techniques such as displacement mapping, where bumps are actually applied to the surface, or by using an isosurface. For the purposes of real-time rendering, bump mapping is often referred to as a "pass", as in multi-pass rendering, and it can be implemented as multiple passes (often three or four) to reduce the number of trigonometric calculations required.

Texture mapping is a method for adding detail, surface texture (a bitmap or raster image), or color to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Dr. Edwin Catmull in his Ph.D. thesis of 1974. A texture map is applied (mapped) to the surface of a shape or polygon. This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2D case is also known as a UV coordinate), either via explicit assignment or by procedural definition. Image sampling locations are then interpolated across the face of a polygon to produce a visual result that seems to have more richness than could otherwise be achieved with a limited number of polygons.

Multitexturing is the use of more than one texture at a time on a polygon. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface, such as tree bark or rough concrete, that takes on lighting detail in addition to the usual detailed coloring.
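To make the numbered bump-mapping steps above concrete, here is a minimal Python sketch of the per-pixel calculation. It assumes a grayscale heightmap stored as a 2D list of floats in [0, 1]; the names (bumped_brightness, strength, light_dir) are illustrative rather than from the source, and a simple Lambertian (diffuse) term stands in for the full Phong model.

    import math

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def bumped_brightness(heightmap, x, y, geom_normal, light_dir, strength=1.0):
        # Steps 1-2: look up the heightmap around the surface position and
        # take finite differences to estimate its surface normal (gradient).
        dhdx = heightmap[y][x + 1] - heightmap[y][x - 1]
        dhdy = heightmap[y + 1][x] - heightmap[y - 1][x]
        # Step 3: add the heightmap normal to the geometric surface normal
        # so the combined normal points in a new, perturbed direction.
        n = normalize((geom_normal[0] - strength * dhdx,
                       geom_normal[1] - strength * dhdy,
                       geom_normal[2]))
        # Step 4: light the perturbed surface; a Lambertian dot product
        # stands in here for the Phong shading named in the text.
        l = normalize(light_dir)
        return max(0.0, n[0] * l[0] + n[1] * l[1] + n[2] * l[2])

    # A 3x3 bump on an otherwise flat, upward-facing surface
    hm = [[0.0, 0.1, 0.0], [0.1, 0.5, 0.1], [0.0, 0.1, 0.0]]
    print(bumped_brightness(hm, 1, 1, (0.0, 0.0, 1.0), (0.3, 0.3, 1.0)))

Subtracting the gradient tilts the normal away from rising height; the sign convention is a modeling choice, not something fixed by the text.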
Bump mapping has become popular in recent video games, as graphics hardware has become powerful enough to accommodate it in real time.

The way the resulting pixels on the screen are calculated from the texels (texture pixels) is governed by texture filtering. The fastest method is nearest-neighbour interpolation, but bilinear interpolation and trilinear interpolation between mipmaps are two commonly used alternatives that reduce aliasing or jaggies. In the event of a texture coordinate lying outside the texture, it is either clamped or wrapped.

Perspective correctness

Texture coordinates are specified at each vertex of a given triangle, and these coordinates are interpolated using an extended Bresenham's line algorithm. If these texture coordinates are linearly interpolated across the screen, the result is affine texture mapping. This is a fast calculation, but there can be a noticeable discontinuity between adjacent triangles when these triangles are at an angle to the plane of the screen.

Perspective-correct texturing accounts for the vertices' positions in 3D space, rather than simply interpolating a 2D triangle. This achieves the correct visual effect, but it is slower to calculate. Instead of interpolating the texture coordinates directly, the coordinates are divided by their depth (relative to the viewer), and the reciprocal of the depth value is also interpolated and used to recover the perspective-correct coordinate. This correction makes it so that in parts of the polygon that are closer to the viewer, the difference from pixel to pixel between texture coordinates is smaller (stretching the texture wider), and in parts that are farther away this difference is larger (compressing the texture).

Affine texture mapping directly interpolates a texture coordinate $u$ between two endpoints $u_0$ and $u_1$:

$u_\alpha = (1 - \alpha)\,u_0 + \alpha\,u_1$, where $0 \le \alpha \le 1$.

Perspective-correct mapping interpolates after dividing by the depth $z$, then uses the interpolated reciprocal of the depth to recover the correct coordinate:

$u_\alpha = \dfrac{(1 - \alpha)\,\frac{u_0}{z_0} + \alpha\,\frac{u_1}{z_1}}{(1 - \alpha)\,\frac{1}{z_0} + \alpha\,\frac{1}{z_1}}$

All modern 3D graphics hardware implements perspective-correct texturing. (A numeric comparison of the two formulas appears at the end of this passage.)

[Figure: Doom renders vertical spans (walls) with affine texture mapping.]
[Figure: screen-space subdivision techniques. Top left: Quake-like; top right: bilinear; bottom left: constant-z.]

Classic texture mappers generally did only simple mapping, with at most one lighting effect, and perspective correctness was about 16 times more expensive. To achieve two goals (faster arithmetic results, and keeping the arithmetic mill busy at all times), every triangle is further subdivided into groups of about 16 pixels. For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering, which improves details in non-architectural applications. Software renderers generally preferred screen subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2D affine interpolation) and thus again reduce the overhead (affine texture mapping also does not fit into the low number of registers of the x86 CPU; the 68000 or any RISC is much better suited). For instance, Doom restricted the world to vertical walls and horizontal floors and ceilings. This meant that the walls were at a constant distance along a vertical line and the floors and ceilings at a constant distance along a horizontal line, so a fast affine mapping could be used along those lines, because along them it is correct.
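As a small numeric illustration of the two formulas above (the endpoint values and depths are invented for the example), the following Python sketch compares the affine and perspective-correct results at the screen-space midpoint of a span whose far endpoint is four times deeper than its near endpoint.

    def affine_u(u0, u1, a):
        # Linear (affine) interpolation in screen space: fast, but distorted
        # when the triangle is at an angle to the plane of the screen.
        return (1 - a) * u0 + a * u1

    def perspective_u(u0, z0, u1, z1, a):
        # Interpolate u/z and 1/z linearly in screen space, then divide
        # to recover the perspective-correct coordinate.
        uz = (1 - a) * (u0 / z0) + a * (u1 / z1)
        iz = (1 - a) * (1 / z0) + a * (1 / z1)
        return uz / iz

    print(affine_u(0.0, 1.0, 0.5))                 # 0.5
    print(perspective_u(0.0, 1.0, 1.0, 4.0, 0.5))  # 0.2

The perspective-correct midpoint value (0.2 rather than 0.5) is biased toward the nearer endpoint: texture coordinates change more slowly near the viewer, which is exactly the "stretching the texture wider" behavior described above.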
A different approach was taken for Quake, which calculates perspective-correct coordinates only once every 16 pixels of a scanline and linearly interpolates between them, effectively running at the speed of linear interpolation because the perspective-correct calculation runs in parallel on the co-processor (see the sketch after this passage). The polygons are rendered independently; hence it may be possible to switch between spans and columns or diagonal directions, depending on the orientation of the polygon normal, to achieve a more constant z, but the effort seems not to be worth it.

Another technique was subdividing the polygons into smaller polygons, such as triangles in 3D space or squares in screen space, and using an affine mapping on them; the distortion of affine mapping becomes much less noticeable on smaller polygons. Yet another technique was approximating the perspective with a faster calculation, such as a polynomial. Still another technique uses the 1/z values of the last two drawn pixels to linearly extrapolate the next value; the division is then done starting from those values, so that only a small remainder has to be divided, but the amount of bookkeeping makes this method too slow on most systems. Finally, some programmers extended the constant-distance trick used in Doom by finding the line of constant distance for arbitrary polygons and rendering along it.
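A minimal Python sketch of the every-16-pixels trick just described; SPAN, scanline_u, and the endpoint values are illustrative, and the real engine overlapped the expensive divisions with the co-processor rather than computing them inline as here.

    SPAN = 16  # pixels between exact perspective-correct evaluations

    def scanline_u(u0, z0, u1, z1, width):
        """Yield an approximate texture coordinate for each pixel of a span."""
        def correct(a):
            # exact perspective-correct u at parameter a in [0, 1]
            return (((1 - a) * u0 / z0 + a * u1 / z1) /
                    ((1 - a) / z0 + a / z1))
        x = 0
        while x < width:
            step = min(SPAN, width - x)
            ua = correct(x / width)            # exact at the run start
            ub = correct((x + step) / width)   # exact at the run end
            for i in range(step):
                yield ua + (ub - ua) * i / step  # cheap lerp in between
            x += step

    # A 32-pixel span needs only a handful of exact evaluations while
    # every pixel still gets a near-correct coordinate.
    us = list(scanline_u(0.0, 1.0, 1.0, 4.0, 32))
    print(us[0], us[16], us[31])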
In computer graphics, environment mapping, or reflection mapping, is an efficient image-based lighting technique for approximating the appearance of a reflective surface by means of a precomputed texture image. The texture is used to store the image of the distant environment surrounding the rendered object.

Several ways of storing the surrounding environment are employed. The first technique was sphere mapping, in which a single texture contains the image of the surroundings as reflected on a mirror ball. It has been almost entirely surpassed by cube mapping, in which the environment is projected onto the six faces of a cube and stored as six square textures, or unfolded into six square regions of a single texture. Other projections that have some superior mathematical or computational properties include paraboloid mapping, pyramid mapping, octahedron mapping, and HEALPix mapping.

The reflection mapping approach is more efficient than the classical ray tracing approach of computing the exact reflection by tracing a ray and following its optical path. The reflection color used in the shading computation at a pixel is determined by calculating the reflection vector at the point on the object and mapping it to the texel in the environment map. This technique often produces results that are superficially similar to those generated by ray tracing, but it is less computationally expensive: the radiance value of the reflection comes from calculating the angles of incidence and reflection followed by a texture lookup, rather than from tracing a ray against the scene geometry and computing its radiance, which simplifies the GPU workload.

However, in most circumstances a mapped reflection is only an approximation of the real reflection. Environment mapping relies on two assumptions that are seldom satisfied:

1) All radiance incident upon the object being shaded comes from an infinite distance. When this is not the case, the reflection of nearby geometry appears in the wrong place on the reflected object; when it is the case, no parallax is seen in the reflection.

2) The object being shaded is convex, such that it contains no self-interreflections. When this is not the case, the object does not appear in the reflection; only the environment does.

Reflection mapping is also a traditional image-based lighting technique for creating reflections of real-world backgrounds on synthetic objects.

Environment mapping is generally the fastest method of rendering a reflective surface. To further increase the speed of rendering, the renderer may calculate the position of the reflected ray at each vertex; the position is then interpolated across the polygons to which the vertex is attached. This eliminates the need to recalculate every pixel's reflection direction.

If normal mapping is used, each polygon has many face normals (the direction a given point on the polygon is facing), which can be used in tandem with an environment map to produce a more realistic reflection. In this case, the angle of reflection at a given point on the polygon takes the normal map into consideration. This technique is used to make an otherwise flat surface appear textured, for example corrugated metal or brushed aluminium.

Types of reflection mapping

Sphere mapping represents the sphere of incident illumination as though it were seen in the reflection of a reflective sphere through an orthographic camera. The texture image can be created by approximating this ideal setup, by using a fisheye lens, or by prerendering a scene with a spherical mapping.

Sphere mapping suffers from limitations that detract from the realism of the resulting renderings. Because spherical maps are stored as azimuthal projections of the environments they represent, an abrupt point of singularity (a "black hole" effect) is visible in the reflection on the object where texel colors at or near the edge of the map are distorted due to inadequate resolution to represent the points accurately. Sphere mapping also wastes pixels that are in the square but not in the sphere. The artifacts of sphere mapping are so severe that it is effective only for viewpoints near that of the virtual orthographic camera.

[Figure: an apparent reflection provided by cube-mapped reflection. The map is actually projected onto the surface from the point of view of the observer. Highlights, which in ray tracing would be determined by tracing the ray and finding the angle it makes with the normal, can be "faked" if they are manually painted into the texture field (or if they already appear there, depending on how the texture map was obtained); from there they are projected onto the mapped object along with the rest of the texture detail.]

Cube mapping and the other polyhedron mappings address the severe distortion of sphere maps. If cube maps are made and filtered correctly, they have no visible seams and can be used independently of the viewpoint of the (often virtual) camera acquiring the map. Cube and other polyhedron maps have since superseded sphere maps in most computer graphics applications, with the exception of acquiring image-based lighting. Generally, cube mapping uses the same skybox that is used in outdoor renderings.

Cube-mapped reflection is done by determining the vector along which the object is being viewed. This camera ray is reflected about the surface normal at the point where it intersects the object. The resulting reflected ray is then passed to the cube map to obtain the texel that provides the radiance value used in the lighting calculation. This creates the effect that the object appears reflective.
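To close the environment-mapping discussion, here is a minimal Python sketch of the cube-mapped reflection lookup just described: reflect the view vector about the surface normal, choose the cube face from the dominant axis of the reflected ray, and remap the remaining two components to texture coordinates. The face-orientation conventions below are one common choice (roughly OpenGL's) and are assumed rather than taken from the text.

    def reflect(d, n):
        # r = d - 2 (d . n) n, with the surface normal n of unit length
        k = 2.0 * (d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
        return (d[0] - k * n[0], d[1] - k * n[1], d[2] - k * n[2])

    def cube_face_uv(r):
        # Pick the face whose axis has the largest magnitude, project the
        # other two components onto it, and remap from [-1, 1] to [0, 1].
        x, y, z = r
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax >= ay and ax >= az:
            face, u, v, m = ('+x', -z, -y, ax) if x > 0 else ('-x', z, -y, ax)
        elif ay >= az:
            face, u, v, m = ('+y', x, z, ay) if y > 0 else ('-y', x, -z, ay)
        else:
            face, u, v, m = ('+z', x, -y, az) if z > 0 else ('-z', -x, -y, az)
        return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)

    # A camera ray hitting a surface that faces the viewer head-on:
    r = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))
    print(r, cube_face_uv(r))  # (0, 0, 1) -> centre of the '+z' face

In a renderer, the returned face and (u, v) pair would index into the six stored textures to fetch the texel whose radiance feeds the lighting calculation.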
