Technologies of Realism in Three-Dimensional Images. 3D Art

3D art encompasses a variety of forms: graffiti, three-dimensional computer graphics, and realistic drawings that create the illusion of a three-dimensional scene.

Artists have always strived for a believable representation of nature and the things around them. In the modern age this is easily achieved with advanced devices, yet there is something charming and especially appealing about 3D images created by the human hand. After all, the 3D drawing technique requires great skill and patience, not to mention talent.

We invite you to admire the creations of various masters whose works are made in the realistic 3D genre.

1. Dots

Simple, elegant and whimsical 3D drawing that looks realistic.

2. Hall of the Giants, Palazzo Te, Mantua, Italy

Giulio Romano's illusionistic frescoes from the 16th century are credited as one of the origins of 3D art.

3. 3D pencil drawing by Nagai Hideyuki

The artist creates a three-dimensional illusion using only a sketchbook and colored pencils.

4. Museum of 3D-paintings in the city of Chiang Mai, Thailand

There is a whole museum dedicated to 3D art in Thailand. Its halls are filled with large frescoes that look completely real.

5. Coca-Cola illusion

Inspiration for 3D art often comes from popular objects in our daily lives. A classic example is a bottle of Coca-Cola.

6. CGI: Girl

Who would have thought that this girl does not exist?

7. Columns of the Corinthian order

Beautiful 3D pencil drawing of two Corinthian columns.

8. Realistic waterfall in Dvur Kralove, Czech Republic

Part of a city park in the Czech Republic has been turned into the illusion of a beautiful waterfall.

9. Globe

Often 3D art is used in marketing. This picture of the globe encourages people to fight poverty.

10. Igor Taritas

The young artist creates paintings grounded in hyperrealism. This canvas radiates the depth of the real world, as if we could step onto the stage if we wished.

11. Davy Jones by Jerry Groschke

A classic character from Pirates of the Caribbean, created by a 3D CG artist.

12. Kazuhiko Nakamura

A Japanese 3D artist who creates imaginative steampunk imagery using 3D software.

13. Kurt Wenner: Wild Rodeo in Calgary, Canada

One of the most famous contemporary 3D artists, Kurt Wenner, has depicted a fictional rodeo in a Canadian town.

14. Leon Kier, Ruben Poncia, Remco van Schaik and Peter Westering

Four artists teamed up to create this incredible Lego army illusion.

15. Lodz, Poland

Swimming pool near a busy shopping mall in Lodz, Poland. I hope no one jumped into it.

16. Market

A beautiful 3D still life painted on the asphalt near a vegetable market. It perfectly completes the atmosphere of the place.

17. MTO, Rennes, France

Street artist MTO has created a series of large-scale 3D murals in Rennes, France. His wall paintings feature giants trying to break into people's homes. The pictures are both shocking and terrifying.

To increase the realism of textures mapped onto polygons, several technologies are used:

  • anti-aliasing;
  • MIP mapping;
  • texture filtering.

Anti-aliasing technology

Anti-aliasing is an image-processing technology that eliminates the "stepped" edges (aliasing) of objects. With raster image formation, the picture is composed of pixels. Because pixels have a finite size, so-called staircases, or stepped edges, become visible along the edges of three-dimensional objects. The simplest way to minimize the staircase effect is to increase the screen resolution, thereby reducing the pixel size, but this path is not always available. If the stepping cannot be removed by increasing the monitor resolution, anti-aliasing technology can be used to smooth it out visually. The most common technique is to create a smooth transition from the line or edge color to the background color: the color of a point lying on the boundary between objects is computed as the average of the colors of the two boundary points.

There are several basic anti-aliasing technologies. The first to give a high-quality result was full-screen anti-aliasing, FSAA (Full Screen Anti-Aliasing), also called SSAA in some sources. The essence of this technology is that the processor computes the frame at a much higher resolution than the screen resolution and then, when outputting to the screen, averages the values of a group of pixels down to one; the number of averaged pixels corresponds to the monitor's screen resolution. For example, if an 800x600 frame is anti-aliased with FSAA, the image is computed at 1600x1200; on conversion to the monitor resolution, the colors of the four computed points corresponding to each monitor pixel are averaged. As a result, all lines have smooth color transitions, which visually eliminates the staircase effect.
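As a rough sketch of this resolve step, here is a minimal example (assuming the rendered frame is stored as a NumPy array) that averages each 2x2 block of subpixels into one screen pixel:

```python
import numpy as np

def downsample_2x(supersampled: np.ndarray) -> np.ndarray:
    """Resolve a 2x-supersampled frame: average each 2x2 subpixel block into one pixel."""
    h, w, c = supersampled.shape                        # e.g. 1200 x 1600 x 3
    blocks = supersampled.reshape(h // 2, 2, w // 2, 2, c)
    return blocks.mean(axis=(1, 3))                     # one averaged color per block

# A frame computed at 1600x1200 resolves to the 800x600 screen resolution:
frame = downsample_2x(np.random.rand(1200, 1600, 3))
print(frame.shape)  # (600, 800, 3)
```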

FSAA does a great deal of unnecessary work and loads the GPU by smoothing not just the edges but the entire image, which is its main drawback. To eliminate this shortcoming, a more economical technology, MSAA (Multi-Sample Anti-Aliasing), was developed.

The essence of MSAA is similar to FSAA, but no extra calculations are performed on pixels inside polygons. For pixels on the boundaries of objects, depending on the smoothing level, 4 or more additional samples are computed, from which the final color of the pixel is determined. This technology is currently the most common.

Individual developments by video adapter manufacturers are also known. For example, NVIDIA developed Coverage Sampling Anti-Aliasing (CSAA), supported only by GeForce video adapters from the 8th series onward (8600 - 8800, 9600 - 9800), while ATI introduced Adaptive Anti-Aliasing (AAA) in the R520 graphics processor and all subsequent GPUs.

MIP mapping technology

This technology is used to improve the quality of texturing of 3D objects. To add realism to a three-dimensional image, the depth of the scene must be taken into account: as a surface recedes from the viewpoint, the applied texture should look progressively blurrier. Therefore, even when texturing a homogeneous surface, not one but several textures are most often used, which makes it possible to correctly account for the perspective distortions of a three-dimensional object.

For example, suppose a cobbled pavement must be depicted receding into the scene. If a single texture is used along its entire length, ripples or a single solid color may appear in the distance. The reason is that several texture pixels (texels) fall into one monitor pixel at once. The question arises: which one texel should be chosen when displaying that pixel?

This task is solved by MIP mapping technology, which uses a set of textures with different levels of detail. From each texture, a set of textures with lower levels of detail is generated; the textures of such a set are called MIP maps.

In the simplest case of texture mapping, the appropriate MIP map is determined for each image pixel according to the LOD (Level of Detail) table. Then a single texel is selected from that MIP map, and its color is assigned to the pixel.
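A minimal sketch of both steps (assuming a square, power-of-two RGB texture stored as a NumPy array, and a deliberately simple LOD rule) might look like this:

```python
import numpy as np

def build_mip_chain(texture):
    """Build MIP maps by repeatedly averaging 2x2 texel blocks down to 1x1."""
    mips = [texture]
    while mips[-1].shape[0] > 1:
        h, w, c = mips[-1].shape
        mips.append(mips[-1].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3)))
    return mips

def select_mip_level(texels_per_pixel):
    """Pick the level whose texel density best matches one screen pixel."""
    return max(0, int(np.log2(max(texels_per_pixel, 1.0))))

mips = build_mip_chain(np.random.rand(256, 256, 3))
print(len(mips))            # 9 levels: 256, 128, ..., 1
print(select_mip_level(8))  # 8 texels per pixel -> level 3 (the 32x32 map)
```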

Filtering technologies

As a rule, MIP mapping is used in combination with filtering technologies designed to correct MIP texturing artifacts. For example, as an object recedes from the viewpoint, a transition occurs from a lower MIP map level to a higher one. When an object is in a transitional state between two MIP map levels, a special type of visualization error appears: clearly visible boundaries between the two levels.

The idea of filtering is that the color of an object's pixels is calculated from adjacent texture points (texels).

The first texture filtering method was so-called point sampling, which is no longer used in modern 3D graphics. The next to be developed was bilinear filtering. Bilinear filtering takes the weighted average of the four adjacent texels to determine the color of a surface point. With this filtering, the quality of slowly rotating or slowly moving objects with sharp edges (such as a cube) is low: the edges look blurred.
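The weighted average itself is easy to show in code. A hedged illustration (assuming a NumPy texture indexed as [row, column] and coordinates u, v in [0, 1]):

```python
import numpy as np

def bilinear_sample(tex, u, v):
    """Weighted average of the four texels nearest to texture coordinate (u, v)."""
    h, w, _ = tex.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx      # blend along x, upper row
    bottom = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx   # blend along x, lower row
    return top * (1 - fy) + bottom * fy                  # blend the two rows along y

tex = np.random.rand(64, 64, 3)
print(bilinear_sample(tex, 0.5, 0.25))
```

Trilinear filtering performs this same computation on two adjacent MIP levels and then blends the two results, which accounts for the eight texels and seven blending operations mentioned next.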

Trilinear filtering gives higher quality. To determine a pixel's color, it takes the average color of eight texels, four from each of two adjacent MIP levels, and the pixel's color is determined as the result of seven blending operations.

As GPU performance grew, anisotropic filtering was developed, and it is still successfully applied today. When determining the color of a point, it uses a large number of texels and takes into account the position of the polygons. The level of anisotropic filtering is defined by the number of texels processed when calculating the pixel color: 2x (16 texels), 4x (32 texels), 8x (64 texels), 16x (128 texels). This filtering ensures high quality of the displayed moving image.

All these algorithms are implemented by the graphics processor of the video card.

Application Programming Interface (API)

To speed up the stages of the 3D pipeline, a 3D graphics accelerator must support a certain set of functions, i.e., perform the operations needed to build a 3D image in hardware, without involving the central processor. The set of these functions is the most important characteristic of a 3D accelerator.

Since a 3D accelerator has its own command set, it can be used effectively only if the application program issues those commands. But because there are many different models of 3D accelerators, as well as many application programs generating three-dimensional images, a compatibility problem arises: it is impossible to write a program that would use the low-level commands of different accelerators equally well. Obviously, both application software developers and 3D accelerator manufacturers need a special software layer that performs the following functions:

  • efficient conversion of application program requests into an optimized sequence of low-level 3D accelerator commands, taking into account the peculiarities of its hardware design;
  • software emulation of the requested functions if the accelerator in use has no hardware support for them.

A special software package that performs these functions is called an application programming interface (API).

The API occupies an intermediate position between high-level application programs and low-level accelerator commands that are generated by its driver. Using the API relieves the application developer of the need to work with low-level accelerator commands, facilitating the process of creating programs.

Currently there are several 3D APIs, each with a fairly clearly delineated scope:

  • DirectX, developed by Microsoft and used in gaming applications running on Windows 9X and later operating systems;
  • OpenGL, used mainly in professional applications (computer-aided design systems, 3D modeling systems, simulators, etc.) running under the Windows NT operating system;
  • proprietary (native) APIs created by 3D accelerator manufacturers exclusively for their own chipsets in order to use their capabilities in the most efficient way.

DirectX is a strictly regulated, closed standard that does not allow changes until its next version is released. On the one hand, this limits software developers and especially accelerator manufacturers; on the other, it greatly simplifies the user's configuration of software and hardware for 3D.

Unlike DirectX, the OpenGL API is built on the concept of an open standard, with a small base set of functions and many extensions implementing more complex features. The manufacturer of a 3D accelerator chipset is required to create a BIOS and drivers that perform the basic OpenGL functions, but is not required to support all extensions. This gives rise to problems with the drivers that manufacturers write for their products, which are supplied in both full and truncated forms.

The full version of an OpenGL-compatible driver is called an ICD (Installable Client Driver). It provides maximum performance because it contains low-level code supporting not only the basic set of functions but also its extensions. Given the OpenGL concept, creating such a driver is an extremely complex and time-consuming process; this is one of the reasons professional 3D accelerators cost more than gaming ones.

No matter how big and rich the virtual 3D world is, a computer can display it in only one way: by putting pixels on a 2D screen. In this part of the article, you will learn how the image on the screen is made realistic and how scenes become similar to those you see in the real world. First we will look at how realism is given to a single object, then move on to the whole scene, and finally see how the computer implements motion: realistic objects moving at realistic speeds.

Before an image becomes realistic, objects go through several stages of processing. The most important stages are shape creation, texturing, lighting, perspective, depth of field, and anti-aliasing.

Shape creation

If we look out the window, we see that all objects have a shape and are formed from straight and curved lines of different sizes and positions. Likewise, looking at a three-dimensional graphic image on a computer monitor, we observe an image created from various shapes, although most of them consist of straight lines. We see squares, rectangles, parallelograms, circles, and rhombuses, but most of all we see triangles. To build a believable picture with curved lines, as in the world around us, a shape has to be composed of many small shapes; a human body, for example, may require thousands of them. Together they form a structure called a wireframe. A wireframe closely resembles a sketch of an object: you can easily identify the object from it. The next step after creating the shape is equally important: the wireframe must receive a surface.

The illustration shows a hand wireframe made from a small number of polygons, 862 in total.
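As an illustration, a wireframe can be represented as nothing more than a list of vertex coordinates plus triangles that index into it; this toy mesh is a simplified sketch, not any particular package's format:

```python
# Vertices are points in 3D space; triangles reference them by index.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]
triangles = [(0, 1, 2), (0, 2, 3)]  # two triangles forming one square face

for tri in triangles:
    print([vertices[i] for i in tri])  # the corner coordinates of each triangle
```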

Surface textures

When we encounter a surface in the real world, we can get information about it in two ways: we can look at it from different angles, and we can touch it to determine whether it is soft or hard. In 3D graphics we can only look at the surface, and all the available information comes from that alone. This information is made up of three components:

  • Color: What color is the surface? Is it uniformly colored?
  • Texture: Is the surface flat, or does it have dents, bumps, roughness, or something similar?
  • Reflectivity: Does the surface reflect light? Are the reflections sharp or blurry?

One way to give "reality" to an object is to vary the combination of these three components across different parts of the image. Look around you: your computer keyboard has a different color/texture/reflectivity from your desk, which in turn has a different color/texture/reflectivity from your hand. For the image's colors to look real, it is important that the computer can choose the color of a pixel from a palette of millions of colors. The variety of textures depends both on the mathematical model of the surface (from frog skin to a jelly-like material) and on the texture maps applied to the surfaces. Objects must also be given qualities that cannot be seen, such as softness and hardness or warmth and coldness, through combinations of color, texture, and reflectivity. If even one of these parameters is wrong, the feeling of reality instantly dissipates.


Adding a surface to the wireframe begins to change the image from something mathematical into a picture in which we can easily recognize a hand.

Lighting

When you enter a dark room, you turn on the light; you do not think about how the light from the bulb spreads through the room. But when developing 3D graphics, this must be considered constantly, because all the surfaces surrounding the wireframe must be lit from somewhere. One method, called ray tracing, plots the path an imaginary ray takes after leaving the lamp, reflecting off mirrored surfaces, and finally landing on an object, which it illuminates with different intensity from different angles. The method seems complicated even for the rays of a single lamp, yet most rooms contain many light sources: several lamps, windows, candles, and so on.

Lighting plays a key role in two effects that give objects a sense of weight and solidity: shading and shadows. The first effect, shading, is the variation of light intensity across an object from one side to the other. Thanks to shading, a ball looks round, high cheekbones stand out on a face, and a blanket appears voluminous and soft. These differences in light intensity, together with shape, reinforce the illusion that an object has depth in addition to height and width. The illusion of weight is created by the second effect: the shadow.


Lighting the image not only adds depth to the object through shading but also "binds" the object to the ground through its shadow.

Optically dense bodies cast shadows when illuminated. You can see a shadow on a sundial or watch the shadow of a tree on the sidewalk. In the real world, objects and people cast shadows; if shadows are present in the 3D world as well, it will seem even more as though you are looking through a window at the real world rather than at a screen full of mathematical models.
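The shading effect described above can be sketched with the classic Lambert diffuse term, a simplified model in which brightness falls off with the angle between the surface normal and the light direction (real renderers combine several such terms):

```python
import numpy as np

def lambert_shade(normal, light_dir, albedo=0.8):
    """Diffuse intensity: bright where the surface faces the light, dark where it turns away."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(np.dot(n, l), 0.0)  # surfaces facing away receive no light

light = np.array([1.0, 1.0, 0.0])                        # light from the upper right
print(lambert_shade(np.array([0.0, 1.0, 0.0]), light))   # top of a ball: ~0.57
print(lambert_shade(np.array([-1.0, 0.0, 0.0]), light))  # far side of the ball: 0.0
```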

Perspective

The word perspective sounds like a technical term, but it describes the simplest of effects, one we all observe. If you stand on the side of a long straight road and look into the distance, the right and left edges of the road appear to converge to a point on the horizon. If trees are planted along the roadside, the farther a tree is from the observer, the smaller it appears, and the trees converge toward the same point on the horizon as the road. When all objects in a scene converge to a single point, this is called one-point perspective. There are, of course, other variants, but three-dimensional graphics mostly uses the one-point perspective described above.
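One-point perspective reduces to a single division by depth. A minimal sketch (with an assumed viewer distance):

```python
def project(point, viewer_distance=2.0):
    """One-point perspective: screen position shrinks as depth z grows."""
    x, y, z = point
    scale = viewer_distance / (viewer_distance + z)
    return x * scale, y * scale

# Two trees of equal height at different depths: the far one projects smaller,
# drifting toward the vanishing point at the origin.
print(project((1.0, 3.0, 1.0)))    # (0.667, 2.0)   - nearby tree
print(project((1.0, 3.0, 20.0)))   # (0.091, 0.273) - distant tree
```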

In the illustration above, the hands appear separate, but in most scenes some objects are in front of others and partially block the view of them. For such scenes, the software must not only calculate the relative sizes of objects but also keep track of which objects cover others and by how much. The Z-buffer is most commonly used for this. It takes its name from the Z-axis, an imaginary line running from the screen through the scene to the horizon. (The other two axes are the X-axis, which measures the width of the scene, and the Y-axis, which measures its height.)

The Z-buffer assigns each polygon a number based on how close the object containing that polygon is to the front edge of the scene. Typically, lower numbers are assigned to the polygons closest to the screen, and higher numbers to the polygons near the horizon. For example, a 16-bit Z-buffer would assign -32,768 to the plane closest to the screen and 32,767 to the farthest.

In the real world our eyes cannot see objects covered by others, so we have no problem identifying visible objects. But the computer faces this problem constantly and must solve it explicitly. As each object is created, its Z-value is compared with that of other objects occupying the same X and Y coordinates. The object with the smallest Z-value is drawn in full, while objects with higher Z-values are drawn only partially, so background objects do not show through characters. Since the Z-buffer is consulted before objects are fully drawn, parts of the scene hidden behind a character are not drawn at all, which speeds up rendering.
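The depth test itself fits in a few lines. A simplified software sketch (real GPUs do this in hardware):

```python
import numpy as np

WIDTH, HEIGHT = 640, 480
z_buffer = np.full((HEIGHT, WIDTH), np.inf)   # every pixel starts "infinitely far away"
frame = np.zeros((HEIGHT, WIDTH, 3))

def plot(x, y, z, color):
    """Draw a fragment only if it is closer than whatever is already at this pixel."""
    if z < z_buffer[y, x]:
        z_buffer[y, x] = z
        frame[y, x] = color

plot(100, 100, 5.0, (1.0, 0.0, 0.0))  # red object at depth 5
plot(100, 100, 9.0, (0.0, 0.0, 1.0))  # blue object behind it: rejected by the depth test
print(frame[100, 100])                # still red
```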

Depth of field

Another optical effect successfully used in 3D graphics is depth of field. Consider again the trees planted along the road. As the trees recede from the observer, another interesting effect occurs: if you look at the trees closest to you, the distant trees are out of focus. This is especially evident in a photo or video of the same trees. Directors and computer animators use this effect for two purposes. The first is to enhance the illusion of depth in the scene. A computer could, of course, draw every object in the scene in perfect focus regardless of distance, but since depth of field is always present in the real world, rendering everything in focus would break the illusion of reality.

The second reason for using this effect is to draw your attention to the right subjects or actors. For example, to focus your attention on a movie character, the director uses a shallow depth of field in which only that actor is sharp. Scenes meant to impress you with the majesty of nature, on the other hand, use a deep depth of field to bring as many objects as possible into focus.
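For illustration, the amount of blur at each depth can be estimated with the thin-lens circle-of-confusion formula; the camera parameters below are assumptions chosen for the example:

```python
def circle_of_confusion(z, focus_dist, focal_len, aperture):
    """Blur radius: zero at the focal plane, growing with distance from it."""
    return abs(aperture * focal_len * (z - focus_dist) / (z * (focus_dist - focal_len)))

# Trees along the road, camera focused at 10 m (50 mm lens, 25 mm aperture):
for z in (5.0, 10.0, 50.0):
    print(z, round(circle_of_confusion(z, 10.0, 0.05, 0.025), 5))
# The tree at 10 m is perfectly sharp; nearer and farther trees blur increasingly.
```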

Anti-aliasing

Anti-aliasing is another technology designed to trick the eye. Digital graphics systems are very good at creating vertical and horizontal lines, but when diagonals and curves appear (and they appear very often in the real world), the computer draws lines with characteristic "staircases" instead of smooth edges. To convince your eyes that they are seeing a smooth line or curve, the computer adds pixels of intermediate shades around the line. These "gray" pixels create the illusion that there are no steps. This process of adding pixels to deceive the eye is called anti-aliasing, and it is one of the techniques that distinguishes 3D computer graphics from hand-drawn graphics. Tracking the lines and adding just the right amount of smoothing color is another tricky job for the computer when creating 3D animation on your display.
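The blending step itself is simple; computing the coverage fraction (as in Wu's classic anti-aliased line algorithm) is the hard part. A minimal sketch:

```python
def blend(background, line_color, coverage):
    """Mix line and background color by the fraction of the pixel the line covers."""
    return tuple(b + (l - b) * coverage for b, l in zip(background, line_color))

# A black line crossing a white pixel with 30% coverage yields light gray:
print(blend((1.0, 1.0, 1.0), (0.0, 0.0, 0.0), 0.3))  # (0.7, 0.7, 0.7)
```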

Three-dimensional graphics have entered our lives so firmly that we sometimes do not even notice their manifestations.

Looking at a billboard depicting a room interior or an ice cream commercial, or watching the frames of an action-packed film, we do not even realize that behind all of this lies the painstaking work of a 3D graphics master.

What is 3D graphics?

3D graphics (three-dimensional graphics) is a special kind of computer graphics: a set of methods and tools used to create images of three-dimensional objects.

A 3D image is not difficult to distinguish from a two-dimensional one, since it involves building a geometric projection of a 3D scene model onto a plane using specialized software. The model can represent an object from reality, such as a house, a car, or a comet, or it can be entirely abstract. The process of building such a three-dimensional model is called 3D modeling, and its primary aim is to create a visual three-dimensional image of the modeled object.

Today, three-dimensional graphics make it possible to create a high-precision copy of a real object, to create something entirely new, and to bring to life the most improbable design ideas.

3D graphics and 3D printing technologies have penetrated many areas of human activity and bring huge profits.

3D images bombard us daily: on television, in movies, on computers and in 3D games, and from billboards, illustrating the full power and achievements of 3D graphics.

The achievements of modern 3D graphics are used in the following industries:

  1. Cinematography and animation: creation of three-dimensional characters and realistic special effects.
  2. Computer games: development of 3D characters, virtual-reality environments, and 3D objects for games.
  3. Advertising: the possibilities of 3D graphics allow a product to be presented to the market in the best light; with three-dimensional graphics you can create the illusion of a crystal-white shirt or a delicious popsicle with chocolate chips, etc. The real advertised product may have many shortcomings that are easily hidden behind beautiful, high-quality images.
  4. Interior design: the design and development of interiors also cannot do without three-dimensional graphics today. 3D technologies make it possible to create realistic models of furniture (a sofa, an armchair, a chair, a chest of drawers, etc.) that exactly reproduce the object's geometry and imitate its material. With three-dimensional graphics you can also create a video showing all the floors of a designed building, construction of which may not even have begun.

Stages of creating a three-dimensional image


To get a 3D image of an object, you must perform the following steps:

  1. Modeling: building a mathematical 3D model of the overall scene and its objects.
  2. Texturing: applying textures to the created models, adjusting materials, and making the models look realistic.
  3. Lighting: setting up the light sources.
  4. Animation (setting objects in motion).
  5. Rendering: creating an image of the object from the previously built model.
  6. Compositing (layout): post-processing of the resulting image.

Modeling is the creation of the virtual space and the objects within it, and includes creating geometry, materials, light sources, virtual cameras, and additional special effects.

The most common 3D modeling software products are Autodesk 3ds Max, Pixologic ZBrush, and Blender.

Texturing is the application of a raster or vector image to the surface of the created three-dimensional model in order to convey the object's properties and material.


Lighting is the creation, aiming, and configuration of light sources in the scene. 3D graphics editors typically use the following types of light sources: spot light (diverging rays), omni light (omnidirectional light), and directional light (parallel rays). Some editors also allow the creation of a volumetric glow source (sphere light).

Imagine how a designed object will fit into its existing surroundings. Reviewing different versions of a project is very convenient in a three-dimensional model: in particular, you can change the materials and coatings (textures) of project elements, check the illumination of individual areas (depending on the time of day), place various interior elements, and so on.

Unlike a number of CAD systems that use additional modules or third-party programs for visualization and animation, MicroStation has built-in tools for creating photorealistic images (BMP, JPG, TIFF, PCX, etc.), as well as for recording animation clips in standard formats (FLI, AVI) and sets of frame-by-frame pictures (BMP, JPG, TIFF, etc.).

Creating realistic images

Creating photorealistic images begins with assigning materials (textures) to the various elements of the project. Each texture is applied to all elements of the same color lying in the same layer. Given that the maximum number of layers is 65 thousand and the number of colors is 256, an individual material can in practice be assigned to any element of the project.

The program makes it possible to edit any texture and to create a new one from a bitmap image (BMP, JPG, TIFF, etc.). Two images can be used for a texture: one responsible for the relief and the other for the pattern of the material. Both relief and pattern have their own per-element placement parameters, such as scale, rotation angle, offset, and the way uneven surfaces are filled. In addition, the bump has a "height" parameter (adjustable from 0 to 20), and the pattern has a weight (adjustable from 0 to 1).

In addition to the pattern, a material has the following adjustable parameters: scattering, diffusion, gloss, polish, transparency, reflection, refraction, base color, highlight color, and the ability of the material to cast shadows.

Texture mapping can be previewed on standard 3D solids or on any project element, and several types of element shading can be used. Simple tools for creating and editing textures allow you to get almost any material.

An equally important aspect of creating realistic images is the rendering method. MicroStation supports the following well-known shading methods: hidden line removal, hidden line shading, constant shading, smooth shading, Phong shading, ray tracing, radiosity, and particle tracing. During rendering, the image can be anti-aliased, and a stereo image can be created for viewing with glasses fitted with special light filters.

For the ray tracing, radiosity, and particle tracing methods, there are a number of display quality settings (trading image quality against processing speed). For faster processing of graphic information, MicroStation supports graphics acceleration through QuickVision technology. Built-in modification tools are also available for viewing and editing the created images; they support the standard functions (which, of course, cannot compete with those of specialized programs): gamma correction, tone adjustment, negative, wash, color mode, crop, resize, rotate, mirror, and conversion to other data formats.

When creating realistic images, a significant share of the time is spent placing and managing light sources. Light sources are divided into global and local. Global illumination, in turn, consists of ambient light, flashbulb light, sunlight, and skylight. For the sun, along with brightness and color, the azimuth angle and the angle above the horizon are set; these angles can be calculated automatically from the specified geographic location of the object (any point on the globe indicated on the world map) and from the date and time of viewing. Skylight depends on cloudiness, the quality (opacity) of the air, and even reflection from the ground.

Local light sources can be of five types: distant, point, cone, area, and sky opening. Each source can have the following properties: color, luminous intensity, brightness, resolution, shadow casting, attenuation over a given distance, cone angle, and so on.

Light sources can help identify unlit areas of an object where additional lighting is needed.

Cameras are used to view project elements from a specific angle and to move the view freely throughout the file. Using the keyboard and mouse control keys, you can set nine types of camera movement: flying, turning, descending, sliding, avoiding, rotating, swimming, dollying, and tilting. Four different types of movement can be bound to the keyboard and mouse at once (the modes are switched by holding the Shift, Ctrl, or Shift+Ctrl keys).

Cameras allow you to view the object from different angles and look inside. By varying the camera parameters (focal length, lens angle), you can change the perspective of the view.

To create more realistic images, it is possible to connect a background image, such as a photograph of an existing landscape.
