WebGL: Draw a 3D Tetrahedron
The world of 3D graphics can be very intimidating to get into. Whether you just want to create an interactive 3D logo or design a fully fledged game, if you don't know the principles of 3D rendering, you're stuck using a library that abstracts away a lot of things.
Using a library can be just the right tool, and JavaScript has an amazing open-source one in the form of three.js. There are some disadvantages to using pre-made solutions, though:
- They can have many features that you don't plan to use. The size of the minified base three.js features is around 500kB, and any extra features (loading actual model files is one of them) make the payload even larger. Transferring that much data just to show a spinning logo on your website would be a waste.
- An extra layer of abstraction can make otherwise easy modifications hard to do. Your creative way of shading an object on the screen can either be straightforward to implement or require tens of hours of work to incorporate into the library's abstractions.
- While the library is optimized very well in most scenarios, a lot of bells and whistles can be cut out for your use case. The renderer can cause certain procedures to run millions of times on the graphics card. Every instruction removed from such a procedure means that a weaker graphics card can handle your content without problems.
Even if you decide to use a high-level graphics library, having basic knowledge of the things under the hood allows you to use it more effectively. Libraries can also have advanced features, like ShaderMaterial in three.js. Knowing the principles of graphics rendering allows you to use such features.
Our goal is to give a short introduction to all the key concepts behind rendering 3D graphics, and to use WebGL to implement them. You will see the most common thing that is done, which is showing and moving 3D objects in an empty space.
The final code is available for you to fork and play around with.
Representing 3D Models
The first thing you need to understand is how 3D models are represented. A model is made of a mesh of triangles. Each triangle is represented by three vertices, one for each corner of the triangle. There are three properties most commonly attached to vertices.
Vertex Position
Position is the most intuitive property of a vertex. It is the position in 3D space, represented by a 3D vector of coordinates. If you know the exact coordinates of three points in space, you have all the information you need to draw a simple triangle between them. To make models look really good when rendered, there are a couple more things that need to be provided to the renderer.
Vertex Normal
Consider the two models above. They consist of the same vertex positions, yet look totally different when rendered. How is that possible?
Besides telling the renderer where we want a vertex to be located, we can also give it a hint on how the surface is slanted in that exact position. The hint is in the form of the normal of the surface at that specific point on the model, represented with a 3D vector. The following image should give you a more descriptive look at how that is handled.
The left and right surfaces correspond to the left and right ball in the previous image, respectively. The red arrows represent normals that are specified for a vertex, while the blue arrows represent the renderer's calculations of how the normal should look for all the points between the vertices. The image shows a demonstration for 2D space, but the same principle applies in 3D.
The normal is a hint for how lights will illuminate the surface. The closer a light ray's direction is to the normal, the brighter the point is. Having gradual changes in the normal direction causes light gradients, while having abrupt changes with no changes in-between causes surfaces with constant illumination across them, and sudden changes in illumination between them.
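The article doesn't spell out the math at this point, but a common way to express this relationship is the Lambertian term: the brightness contribution is proportional to the dot product of the unit normal and the unit direction toward the light, clamped at zero. Here is a minimal illustrative sketch in JavaScript, using the same {x, y, z} vector shape as the code later in the article:

// Illustrative sketch (an assumption, not code from the article): diffuse
// brightness from a unit normal and a unit direction pointing toward the light.
function diffuseBrightness (normal, toLight) {
  var dot = normal.x * toLight.x + normal.y * toLight.y + normal.z * toLight.z
  // 1 when the light shines straight along the normal,
  // 0 when it is perpendicular to the surface or comes from behind
  return Math.max(0, dot)
}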
Texture Coordinates
The last significant property is texture coordinates, commonly referred to as UV mapping. You have a model, and a texture that you want to apply to it. The texture has various areas on it, representing images that we want to apply to different parts of the model. There has to be a way to mark which triangle should be represented with which part of the texture. That's where texture mapping comes in.
For each vertex, we mark two coordinates, U and V. These coordinates represent a position on the texture, with U representing the horizontal axis, and V the vertical axis. The values aren't in pixels, but a percentage position within the image. The bottom-left corner of the image is represented with two zeros, while the top-right is represented with two ones. For example, the UV pair (0.5, 0.5) points at the exact center of the texture, regardless of its pixel dimensions.
A triangle is painted simply by taking the UV coordinates of each vertex in the triangle, and applying the part of the image that is captured between those coordinates on the texture.
You can see a demonstration of UV mapping on the image above. The spherical model was taken, and cut into parts that are small enough to be flattened onto a 2D surface. The seams where the cuts were made are marked with thicker lines. One of the patches has been highlighted, so you can nicely see how things match. You can also see how a seam through the middle of the smile places parts of the mouth into two different patches.
The wireframes aren't part of the texture, but are just overlaid over the image so you can see how things map together.
Loading an OBJ Model
Believe it or not, this is all you need to know to create your own simple model loader. The OBJ file format is simple enough to implement a parser in a few lines of code.
The file lists vertex positions in a v <float> <float> <float> format, with an optional fourth float, which we will ignore to keep things simple. Vertex normals are represented similarly with vn <float> <float> <float>. Finally, texture coordinates are represented with vt <float> <float>, with an optional third float which we shall ignore. In all three cases, the floats represent the respective coordinates. These three properties are accumulated in three arrays.
Faces are represented with groups of vertices. Each vertex is represented with the index of each of its properties, where indices start at 1. There are various ways this can be represented, but we will stick to the f v1/vt1/vn1 v2/vt2/vn2 v3/vt3/vn3 format, requiring all three properties to be provided, and limiting the number of vertices per face to three. All of these limitations are made to keep the loader as simple as possible, since all other options require some extra trivial processing before they are in a format that WebGL likes.
We've put in a lot of requirements for our file loader. That may sound limiting, but 3D modeling applications tend to give you the ability to set those limitations when exporting a model as an OBJ file.
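As an illustration (a hypothetical hand-written sample, not one of the article's assets), a single triangle in this restricted format could look like the following, with each f entry referencing position/texture/normal indices into the lists above it:

v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vn 0.0 0.0 1.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
f 1/1/1 2/2/1 3/3/1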
The following code parses a string representing an OBJ file, and creates a model in the form of an array of faces.
function Geometry (faces) {
  this.faces = faces || []
}

// Parses an OBJ file, passed as a string
Geometry.parseOBJ = function (src) {
  var POSITION = /^v\s+([\d\.\+\-eE]+)\s+([\d\.\+\-eE]+)\s+([\d\.\+\-eE]+)/
  var NORMAL = /^vn\s+([\d\.\+\-eE]+)\s+([\d\.\+\-eE]+)\s+([\d\.\+\-eE]+)/
  var UV = /^vt\s+([\d\.\+\-eE]+)\s+([\d\.\+\-eE]+)/
  var FACE = /^f\s+(-?\d+)\/(-?\d+)\/(-?\d+)\s+(-?\d+)\/(-?\d+)\/(-?\d+)\s+(-?\d+)\/(-?\d+)\/(-?\d+)(?:\s+(-?\d+)\/(-?\d+)\/(-?\d+))?/

  var lines = src.split('\n')
  var positions = []
  var uvs = []
  var normals = []
  var faces = []
  lines.forEach(function (line) {
    // Match each line of the file against various RegEx-es
    var result
    if ((result = POSITION.exec(line)) != null) {
      // Add new vertex position
      positions.push(new Vector3(parseFloat(result[1]), parseFloat(result[2]), parseFloat(result[3])))
    } else if ((result = NORMAL.exec(line)) != null) {
      // Add new vertex normal
      normals.push(new Vector3(parseFloat(result[1]), parseFloat(result[2]), parseFloat(result[3])))
    } else if ((result = UV.exec(line)) != null) {
      // Add new texture mapping point
      uvs.push(new Vector2(parseFloat(result[1]), 1 - parseFloat(result[2])))
    } else if ((result = FACE.exec(line)) != null) {
      // Add new face
      var vertices = []
      // Create three vertices from the passed one-indexed indices
      for (var i = 1; i < 10; i += 3) {
        var part = result.slice(i, i + 3)
        var position = positions[parseInt(part[0]) - 1]
        var uv = uvs[parseInt(part[1]) - 1]
        var normal = normals[parseInt(part[2]) - 1]
        vertices.push(new Vertex(position, normal, uv))
      }
      faces.push(new Face(vertices))
    }
  })

  return new Geometry(faces)
}

// Loads an OBJ file from the given URL, and returns it as a promise
Geometry.loadOBJ = function (url) {
  return new Promise(function (resolve) {
    var xhr = new XMLHttpRequest()
    xhr.onreadystatechange = function () {
      if (xhr.readyState == XMLHttpRequest.DONE) {
        resolve(Geometry.parseOBJ(xhr.responseText))
      }
    }
    xhr.open('GET', url, true)
    xhr.send(null)
  })
}

function Face (vertices) {
  this.vertices = vertices || []
}

function Vertex (position, normal, uv) {
  this.position = position || new Vector3()
  this.normal = normal || new Vector3()
  this.uv = uv || new Vector2()
}

function Vector3 (x, y, z) {
  this.x = Number(x) || 0
  this.y = Number(y) || 0
  this.z = Number(z) || 0
}

function Vector2 (x, y) {
  this.x = Number(x) || 0
  this.y = Number(y) || 0
}
The Geometry structure holds the exact data needed to send a model to the graphics card to process. Before you do that though, you'd probably want the ability to move the model around on the screen.
Performing Spatial Transformations
All the points in the model we loaded are relative to its coordinate system. If we want to translate, rotate, and scale the model, all we need to do is perform that operation on its coordinate system. Coordinate system A, relative to coordinate system B, is defined by the position of its center as a vector p_ab, and the vector for each of its axes, x_ab, y_ab, and z_ab, representing the direction of that axis. So if a point moves by 10 on the x axis of coordinate system A, then, in coordinate system B, it will move in the direction of x_ab, multiplied by 10.
All of this information is stored in the following matrix:
x_ab.x  y_ab.x  z_ab.x  p_ab.x
x_ab.y  y_ab.y  z_ab.y  p_ab.y
x_ab.z  y_ab.z  z_ab.z  p_ab.z
     0       0       0       1
If we want to transform the 3D vector q, we just have to multiply the transformation matrix with the vector:
q.x
q.y
q.z
  1
This causes the point to move by q.x along the new x axis, by q.y along the new y axis, and by q.z along the new z axis. Finally, it causes the point to move additionally by the p vector, which is the reason why we use a one as the last element of the multiplication.
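As a quick check with arbitrary numbers, transforming the point q = (1, 1, 1) by a pure translation with p = (2, 3, 4) works out like this; the trailing one is what lets the p column contribute to the result:

1 0 0 2     1     1+0+0+2     3
0 1 0 3  x  1  =  0+1+0+3  =  4
0 0 1 4     1     0+0+1+4     5
0 0 0 1     1     0+0+0+1     1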
The big advantage of using these matrices is the fact that if we have multiple transformations to perform on the vertex, we can merge them into one transformation by multiplying their matrices, prior to transforming the vertex itself.
There are various transformations that can be performed, and we'll take a look at the key ones.
No Transformation
If no transformations happen, then the p vector is a zero vector, the x vector is [1, 0, 0], y is [0, 1, 0], and z is [0, 0, 1]. From now on we'll refer to these values as the default values for these vectors. Applying these values gives us an identity matrix:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
This is a good starting point for chaining transformations.
Translation
When we perform translation, all the vectors except for the p vector have their default values. This results in the following matrix:
1 0 0 p.x
0 1 0 p.y
0 0 1 p.z
0 0 0   1
Scaling
Scaling a model means changing the amount that each coordinate contributes to the position of a point. There is no uniform offset caused by scaling, so the p vector keeps its default value. The default axis vectors should be multiplied by their respective scaling factors, which results in the following matrix:
s_x   0   0   0
  0 s_y   0   0
  0   0 s_z   0
  0   0   0   1
Here s_x, s_y, and s_z represent the scaling applied to each axis.
Rotation
The image above shows what happens when we rotate the coordinate frame around the Z axis.
Rotation results in no uniform offset, so the p vector keeps its default value. Now things get a bit trickier. Rotations cause movement along a certain axis in the original coordinate system to become movement in a different direction. So if we rotate a coordinate system by 45 degrees around the Z axis, moving along the x axis of the original coordinate system causes movement in a diagonal direction between the x and y axis in the new coordinate system.
To keep things simple, we'll just show you how the transformation matrices look for rotations around the main axes.
Around X:
1        0         0 0
0 cos(phi) -sin(phi) 0
0 sin(phi)  cos(phi) 0
0        0         0 1

Around Y:
 cos(phi) 0 sin(phi) 0
        0 1        0 0
-sin(phi) 0 cos(phi) 0
        0 0        0 1

Around Z:
cos(phi) -sin(phi) 0 0
sin(phi)  cos(phi) 0 0
       0         0 1 0
       0         0 0 1
Implementation
All of this can be implemented as a class that stores 16 numbers, storing matrices in column-major order.
function Transformation () {
  // Create an identity transformation
  this.fields = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
}

// Multiply matrices, to chain transformations
Transformation.prototype.mult = function (t) {
  var output = new Transformation()
  for (var row = 0; row < 4; ++row) {
    for (var col = 0; col < 4; ++col) {
      var sum = 0
      for (var k = 0; k < 4; ++k) {
        sum += this.fields[k * 4 + row] * t.fields[col * 4 + k]
      }
      output.fields[col * 4 + row] = sum
    }
  }
  return output
}

// Multiply by translation matrix
Transformation.prototype.translate = function (x, y, z) {
  var mat = new Transformation()
  mat.fields[12] = Number(x) || 0
  mat.fields[13] = Number(y) || 0
  mat.fields[14] = Number(z) || 0
  return this.mult(mat)
}

// Multiply by scaling matrix
Transformation.prototype.scale = function (x, y, z) {
  var mat = new Transformation()
  mat.fields[0] = Number(x) || 0
  mat.fields[5] = Number(y) || 0
  mat.fields[10] = Number(z) || 0
  return this.mult(mat)
}

// Multiply by rotation matrix around X axis
Transformation.prototype.rotateX = function (angle) {
  angle = Number(angle) || 0
  var c = Math.cos(angle)
  var s = Math.sin(angle)
  var mat = new Transformation()
  mat.fields[5] = c
  mat.fields[10] = c
  mat.fields[9] = -s
  mat.fields[6] = s
  return this.mult(mat)
}

// Multiply by rotation matrix around Y axis
Transformation.prototype.rotateY = function (angle) {
  angle = Number(angle) || 0
  var c = Math.cos(angle)
  var s = Math.sin(angle)
  var mat = new Transformation()
  mat.fields[0] = c
  mat.fields[10] = c
  mat.fields[2] = -s
  mat.fields[8] = s
  return this.mult(mat)
}

// Multiply by rotation matrix around Z axis
Transformation.prototype.rotateZ = function (angle) {
  angle = Number(angle) || 0
  var c = Math.cos(angle)
  var s = Math.sin(angle)
  var mat = new Transformation()
  mat.fields[0] = c
  mat.fields[5] = c
  mat.fields[4] = -s
  mat.fields[1] = s
  return this.mult(mat)
}
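Since each method returns a new Transformation, calls can be chained to merge several operations into a single matrix, as discussed above. A short usage sketch (the specific values are arbitrary, not from the article):

// Combined model matrix: the operation written last in the chain
// (scale) is the one applied to a vertex first
var modelMatrix = new Transformation()
  .translate(0, 0, -5)
  .rotateY(Math.PI / 4)
  .scale(2, 2, 2)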
Looking through a Camera
Here comes the key part of presenting objects on the screen: the camera. There are two key components to a camera; namely, its position, and how it projects observed objects onto the screen.
Camera position is handled with one simple trick. There is no visual difference between moving the camera a meter forward and moving the whole world a meter backward. So naturally, we do the latter, by applying the inverse of the camera's matrix as a transformation.
The second key component is the way observed objects are projected onto the lens. In WebGL, everything visible on the screen is located in a box. The box spans between -1 and 1 on each axis. Everything visible is within that box. We can use the same approach of transformation matrices to create a projection matrix.
Orthographic Projection
The simplest projection is orthographic projection. You take a box in space, denoting its width, height, and depth, with the assumption that its center is at the zero position. Then the projection resizes the box to fit it into the previously described box within which WebGL observes objects. Since we want to resize each dimension to two, we scale each axis by 2/size, where size is the dimension of the corresponding axis. A small caveat is the fact that we're multiplying the Z axis by a negative. This is done because we want to flip the direction of that dimension. The final matrix has this form:
2/width        0         0  0
      0 2/height         0  0
      0        0  -2/depth  0
      0        0         0  1
Perspective Projection
We won't go through the details of how this projection is designed, but will just use the final formula, which is pretty much standard by now. We can simplify it by placing the projection at the zero position on the x and y axis, making the right/left and top/bottom limits equal to width/2 and height/2 respectively. The parameters n and f represent the near and far clipping planes, which are the smallest and largest distances a point can be at and still be captured by the camera. They are represented by the parallel sides of the frustum in the above image.
A perspective projection is usually represented with a field of view (we'll use the vertical one), an aspect ratio, and the near and far plane distances. That information can be used to calculate width and height, after which the matrix can be created from the following template:
2*n/width          0            0            0
        0 2*n/height            0            0
        0          0  (f+n)/(n-f)  2*f*n/(n-f)
        0          0           -1            0
To calculate the width and height, the following formulas can be used:
height = 2 * near * Math.tan(fov * Math.PI / 360)
width = aspectRatio * height
The FOV (field of view) represents the vertical angle that the camera captures with its lens. The aspect ratio represents the ratio between image width and height, and is based on the dimensions of the screen we're rendering to.
Implementation
Now we can represent a camera as a class that stores the camera position and projection matrix. We also need to know how to calculate inverse transformations. Solving general matrix inversions can be problematic, but there is a simplified approach for our special case.
function Camera () {
  this.position = new Transformation()
  this.projection = new Transformation()
}

Camera.prototype.setOrthographic = function (width, height, depth) {
  this.projection = new Transformation()
  this.projection.fields[0] = 2 / width
  this.projection.fields[5] = 2 / height
  this.projection.fields[10] = -2 / depth
}

Camera.prototype.setPerspective = function (verticalFov, aspectRatio, near, far) {
  var height_div_2n = Math.tan(verticalFov * Math.PI / 360)
  var width_div_2n = aspectRatio * height_div_2n
  this.projection = new Transformation()
  this.projection.fields[0] = 1 / width_div_2n
  this.projection.fields[5] = 1 / height_div_2n
  this.projection.fields[10] = (far + near) / (near - far)
  this.projection.fields[11] = -1
  this.projection.fields[14] = 2 * far * near / (near - far)
  this.projection.fields[15] = 0
}

Camera.prototype.getInversePosition = function () {
  var orig = this.position.fields
  var dest = new Transformation()
  var x = orig[12]
  var y = orig[13]
  var z = orig[14]
  // Transpose the rotation matrix
  for (var i = 0; i < 3; ++i) {
    for (var j = 0; j < 3; ++j) {
      dest.fields[i * 4 + j] = orig[i + j * 4]
    }
  }
  // Translation by -p will apply R^T, which is equal to R^-1
  return dest.translate(-x, -y, -z)
}
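A small usage sketch (the field of view, canvas size, and plane distances are arbitrary assumptions, not values from the article):

// 45 degree vertical FOV, an 800x500 canvas, near and far planes at 0.1 and 100
var camera = new Camera()
camera.setPerspective(45, 800 / 500, 0.1, 100)
// Move the camera back; getInversePosition() is what a renderer
// would later apply as the view matrix
camera.position = camera.position.translate(0, 0, 10)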
This is the final piece we need before we can start drawing things on the screen.
Drawing an Object with the WebGL Graphics Pipeline
The simplest surface you can draw is a triangle. In fact, the majority of things that you draw in 3D space consist of a great number of triangles.
The first thing that you need to understand is how the screen is represented in WebGL. It is a 3D space, spanning between -1 and 1 on the x, y, and z axes. By default this z axis is not used, but you are interested in 3D graphics, so you'll want to enable it right away.
Having that in mind, what follows are three steps required to draw a triangle onto this surface.
You can define three vertices, which represent the triangle you want to draw. You serialize that data and send it over to the GPU (graphics processing unit). With a whole model available, you can do that for all the triangles in the model. The vertex positions you give are in the local coordinate space of the model you've loaded. Put simply, the positions you provide are the exact ones from the file, and not the ones you get after performing matrix transformations.
Now that you've given the vertices to the GPU, you tell the GPU what logic to use when placing the vertices onto the screen. This step will be used to apply our matrix transformations. The GPU is very good at multiplying a lot of 4x4 matrices, so we'll put that ability to good use.
In the last step, the GPU will rasterize that triangle. Rasterization is the process of taking vector graphics and determining which pixels of the screen need to be painted for that vector graphics object to be displayed. In our case, the GPU is trying to determine which pixels are located within each triangle. For each pixel, the GPU will ask you what color you want it to be painted.
These are the four elements needed to draw anything you want, and they are the simplest example of a graphics pipeline. What follows is a look at each of them, and a simple implementation.
The Default Framebuffer
The most important element for a WebGL application is the WebGL context. You can access it with gl = canvas.getContext('webgl'), or use 'experimental-webgl' as a fallback, in case the currently used browser doesn't support all WebGL features yet. The canvas we referred to is the DOM element of the canvas we want to draw on. The context contains many things, among which is the default framebuffer.
You could loosely describe a framebuffer as any buffer (object) that you can draw on. By default, the default framebuffer stores the color for each pixel of the canvas that the WebGL context is bound to. As described in the previous section, when we draw on the framebuffer, each pixel is located between -1 and 1 on the x and y axis. Something we also mentioned is the fact that, by default, WebGL doesn't use the z axis. That functionality can be enabled by running gl.enable(gl.DEPTH_TEST). Great, but what is a depth test?
Enabling the depth test allows a pixel to store both color and depth. The depth is the z coordinate of that pixel. After you draw to a pixel at a certain depth z, to update the color of that pixel, you need to draw at a z position that is closer to the camera. Otherwise, the draw attempt will be ignored. This allows for the illusion of 3D, since drawing objects that are behind other objects will cause those objects to be occluded by objects in front of them.
Any draws you perform stay on the screen until you tell them to get cleared. To do so, you have to call gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT). This clears both the color and depth buffer. To pick the color that the cleared pixels are set to, use gl.clearColor(red, green, blue, alpha).
Let's create a renderer that uses a canvas and clears it upon request:
function Renderer (canvas) {
  var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl')
  gl.enable(gl.DEPTH_TEST)
  this.gl = gl
}

Renderer.prototype.setClearColor = function (red, green, blue) {
  this.gl.clearColor(red / 255, green / 255, blue / 255, 1)
}

Renderer.prototype.getContext = function () {
  return this.gl
}

Renderer.prototype.render = function () {
  this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT)
}

var renderer = new Renderer(document.getElementById('webgl-canvas'))
renderer.setClearColor(100, 149, 237)

loop()

function loop () {
  renderer.render()
  requestAnimationFrame(loop)
}
Attaching this script to the following HTML will give you a bright blue rectangle on the screen:
<!DOCTYPE html>
<html>
<head>
</head>
<body>
  <canvas id="webgl-canvas" width="800" height="500"></canvas>
  <script src="script.js"></script>
</body>
</html>
The requestAnimationFrame call causes the loop to be called again as soon as the previous frame is done rendering and all event handling is finished.
Vertex Buffer Objects
The first thing you need to do is define the vertices that you want to draw. You can do that by describing them via vectors in 3D space. After that, you want to move that data into GPU RAM, by creating a new Vertex Buffer Object (VBO).
A Buffer Object in general is an object that stores an array of memory chunks on the GPU. It being a VBO just denotes what the GPU can use the memory for. Most of the time, the Buffer Objects you create will be VBOs.
You can fill the VBO by taking all N vertices that we have and creating an array of floats with 3N elements for the vertex position and vertex normal VBOs, and 2N elements for the texture coordinates VBO. Each group of three floats, or two floats for UV coordinates, represents the individual coordinates of a vertex. Then we pass these arrays to the GPU, and our vertices are ready for the rest of the pipeline.
Since the data is now in GPU RAM, you can delete it from the general-purpose RAM. That is, unless you want to change it later and upload it again. Each modification needs to be followed by an upload, since modifications in our JS arrays don't apply to VBOs in the actual GPU RAM.
Below is a code example that provides all of the described functionality. An important note to make is the fact that variables stored on the GPU are not garbage collected. That means that we have to manually delete them once we don't want to use them any more. We will just give you an example of how that is done here, and will not focus on that concept further on. Deleting variables from the GPU is necessary only if you plan to stop using certain geometry throughout the program.
We also added serialization to our Geometry class and the elements within it.
Geometry.prototype.vertexCount = function () {
  return this.faces.length * 3
}

Geometry.prototype.positions = function () {
  var answer = []
  this.faces.forEach(function (face) {
    face.vertices.forEach(function (vertex) {
      var v = vertex.position
      answer.push(v.x, v.y, v.z)
    })
  })
  return answer
}

Geometry.prototype.normals = function () {
  var answer = []
  this.faces.forEach(function (face) {
    face.vertices.forEach(function (vertex) {
      var v = vertex.normal
      answer.push(v.x, v.y, v.z)
    })
  })
  return answer
}

Geometry.prototype.uvs = function () {
  var answer = []
  this.faces.forEach(function (face) {
    face.vertices.forEach(function (vertex) {
      var v = vertex.uv
      answer.push(v.x, v.y)
    })
  })
  return answer
}

////////////////////////////////

function VBO (gl, data, count) {
  // Creates a buffer object in GPU RAM where we can store anything
  var bufferObject = gl.createBuffer()
  // Tell which buffer object we want to operate on as a VBO
  gl.bindBuffer(gl.ARRAY_BUFFER, bufferObject)
  // Write the data, and set the flag to optimize
  // for rare changes to the data we're writing
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW)
  this.gl = gl
  this.size = data.length / count
  this.count = count
  this.data = bufferObject
}

VBO.prototype.destroy = function () {
  // Free memory that is occupied by our buffer object
  this.gl.deleteBuffer(this.data)
}
The VBO data type generates the VBO in the passed WebGL context, based on the array passed as the second parameter.
You can see three calls to the gl context. The createBuffer() call creates the buffer. The bindBuffer() call tells the WebGL state machine to use this specific memory as the current VBO (ARRAY_BUFFER) for all future operations, until told otherwise. After that, we set the value of the current VBO to the provided data, with bufferData().
We also provide a destroy method that deletes our buffer object from GPU RAM, by using deleteBuffer().
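If you do need to change vertex data later, as mentioned earlier, the same bindBuffer()/bufferData() pair can be reused to overwrite the buffer's contents. The following is an illustrative helper, not a method from the article's VBO class:

// Hypothetical helper: re-upload modified vertex data to an existing VBO.
// Binding makes it the current ARRAY_BUFFER target, and bufferData
// replaces its contents with the new array.
function updateVBO (gl, vbo, newData) {
  gl.bindBuffer(gl.ARRAY_BUFFER, vbo.data)
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(newData), gl.STATIC_DRAW)
}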
You can use three VBOs and a transformation to describe all the properties of a mesh, together with its position.
function Mesh (gl, geometry) {
  var vertexCount = geometry.vertexCount()
  this.positions = new VBO(gl, geometry.positions(), vertexCount)
  this.normals = new VBO(gl, geometry.normals(), vertexCount)
  this.uvs = new VBO(gl, geometry.uvs(), vertexCount)
  this.vertexCount = vertexCount
  this.position = new Transformation()
  this.gl = gl
}

Mesh.prototype.destroy = function () {
  this.positions.destroy()
  this.normals.destroy()
  this.uvs.destroy()
}
As an example, here is how we can load a model, store its properties in the mesh, and then destroy it:
Geometry.loadOBJ('/assets/model.obj').then(function (geometry) {
  var mesh = new Mesh(gl, geometry)
  console.log(mesh)
  mesh.destroy()
})
Shaders
What follows is the previously described two-step process of moving points into desired positions and painting all individual pixels. To do this, we write a program that is run on the graphics card many times. This program typically consists of at least two parts. The first part is a Vertex Shader, which is run for each vertex, and outputs where we should place the vertex on the screen, among other things. The second part is the Fragment Shader, which is run for each pixel that a triangle covers on the screen, and outputs the color that pixel should be painted.
Vertex Shaders
Let's say you want to have a model that moves around left and right on the screen. In a naive approach, you could update the position of each vertex and resend it to the GPU. That process is expensive and slow. Alternatively, you would give the GPU a program to run for each vertex, and do all those operations in parallel with a processor that is built for doing exactly that job. That is the role of a vertex shader.
A vertex shader is the part of the rendering pipeline that processes individual vertices. A call to the vertex shader receives a single vertex and outputs a single vertex after all possible transformations to the vertex are applied.
Shaders are written in GLSL. There are a lot of unique elements to this language, but most of the syntax is very C-like, so it should be understandable to most people.
There are three types of variables that go in and out of a vertex shader, and all of them serve a specific use:
- attribute — These are inputs that hold specific properties of a vertex. Previously, we described the position of a vertex as an attribute, in the form of a three-element vector. You can look at attributes as values that describe one vertex.
- uniform — These are inputs that are the same for every vertex within the same rendering call. Let's say that we want to be able to move our model around, by defining a transformation matrix. You can use a uniform variable to describe that. You can point to resources on the GPU as well, like textures. You can look at uniforms as values that describe a model, or a part of a model.
- varying — These are outputs that we pass to the fragment shader. Since there are potentially thousands of pixels for a triangle of vertices, each pixel will receive an interpolated value for this variable, depending on the position. So if one vertex sends 500 as an output, and another one 100, a pixel that is in the middle between them will receive 300 as an input for that variable. You can look at varyings as values that describe surfaces between vertices.
So, let's say you want to create a vertex shader that receives a position, normal, and uv coordinates for each vertex, and a position, view (inverse camera position), and projection matrix for each rendered object. Let's say you also want to paint individual pixels based on their uv coordinates and their normals. "How would that code look?" you might ask.
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
varying vec3 vNormal;
varying vec2 vUv;

void main() {
  vUv = uv;
  vNormal = (model * vec4(normal, 0.)).xyz;
  gl_Position = projection * view * model * vec4(position, 1.);
}
Most of the elements here should be self-explanatory. The key thing to notice is the fact that there are no return values in the main function. All values that we would want to return are assigned, either to varying variables, or to special variables. Here we assign to gl_Position, which is a four-dimensional vector, whereby the last dimension should always be set to one. Another strange thing you might notice is the way we construct a vec4 out of the position vector. You can construct a vec4 by using four floats, two vec2s, or any other combination that results in four elements. For example, vec4(position, 1.) is the same as vec4(position.x, position.y, position.z, 1.). There are a lot of seemingly strange type castings which make perfect sense once you're familiar with transformation matrices.
You can also see that here we can perform matrix transformations extremely easily. GLSL is specifically made for this kind of work. The output position is calculated by multiplying the projection, view, and model matrices and applying them to the position. The output normal is only transformed to world space. We'll explain later why we've stopped there with the normal transformations.
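Although this part of the article stays on the GLSL side, it may help to see how such inputs are typically wired up from JavaScript once a shader program has been compiled and linked. This is an illustrative sketch using standard WebGL calls; the program and mesh variables are assumed to exist and are not defined in this section:

// Feed the 'position' attribute from the mesh's position VBO
var positionLoc = gl.getAttribLocation(program, 'position')
gl.enableVertexAttribArray(positionLoc)
gl.bindBuffer(gl.ARRAY_BUFFER, mesh.positions.data)
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0)

// Feed the 'model' uniform from the mesh's transformation matrix
var modelLoc = gl.getUniformLocation(program, 'model')
gl.uniformMatrix4fv(modelLoc, false, new Float32Array(mesh.position.fields))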
For now, we will keep it simple, and move on to painting individual pixels.
Fragment Shaders
A fragment shader is the step after rasterization in the graphics pipeline. It generates color, depth, and other data for every pixel of the object that is being painted.
The principles behind implementing fragment shaders are very similar to vertex shaders. There are three major differences, though:
- There are no more varying outputs, and attribute inputs have been replaced with varying inputs. We have just moved on in our pipeline, and things that are the output in the vertex shader are now inputs in the fragment shader.
- Our only output now is gl_FragColor, which is a vec4. The elements represent red, green, blue, and alpha (RGBA), respectively, with values in the 0 to 1 range. You should keep alpha at 1, unless you're doing transparency. Transparency is a fairly advanced concept though, so we'll stick to opaque objects.
- At the beginning of the fragment shader, you need to set the float precision, which is important for interpolations. In almost all cases, just stick to the lines from the following shader.
With that in mind, you can easily write a shader that paints the red channel based on the U position, the green channel based on the V position, and sets the blue channel to maximum.
#ifdef GL_ES
precision highp float;
#endif

varying vec3 vNormal;
varying vec2 vUv;

void main() {
  // Paint based on the interpolated UV coordinates; blue is fixed at maximum
  gl_FragColor = vec4(vUv.x, vUv.y, 1., 1.);
}