Nvidia and AMD both offer tools that select the optimal graphics settings for the games you own, and both do a fine job of balancing quality and performance. They really work pretty well, but I just like doing things myself. It's the PC gamer way, right? We tinker on our own terms. If you're new to graphics tuning, this guide will explain the major settings you need to know about and, without getting too technical, what they're doing. Understanding how it all works can help with troubleshooting, setting up the most gorgeous screenshots possible, or playing with tools like Durante's GeDoSaTo. And I think a basic knowledge of the technology in our games makes us better at appreciating and critiquing them.

We start with the fundamental concepts on this page. For the sections on anti-aliasing, anisotropic filtering, and post-processing that follow, I consulted with Nicholas Vining, Gaslamp Games' technical director and lead programmer, as well as Cryptic Sea designer/programmer Alex Austin. I also received input from Nvidia regarding my explanation of texture filtering. Keep in mind that graphics rendering is much more complex than presented here: I'm a technology enthusiast translating these systems into simple analogies, not an engineer writing a technical paper, so I'm leaving out major details of actual implementation.

THE BASICS

- Resolution

A pixel is the most basic unit of a digital image—a tiny dot of color—and resolution is the number of pixel columns and pixel rows in an image or on your display. The most common display resolutions today are 1280x720 (720p), 1920x1080 (1080p), 2560x1440 (1440p), and 3840x2160 (4K or 'ultra-HD'). Those are 16:9 resolutions; if you have a display with a 16:10 aspect ratio, they'll be slightly different: 1920x1200, 2560x1600, and so on.

- Frames per second (FPS)

If you think of a game as a series of animation cels—still images representing single moments in time—the FPS is the number of images generated each second. It's not the same as the refresh rate, which is the number of times your display updates per second and is measured in hertz (Hz). 1 Hz is one cycle per second, so the two measurements are easy to compare: a 60 Hz monitor updates 60 times per second, and a game running at 60 FPS should feed it new frames at the same rate.

The more work you make your graphics card do to render bigger, prettier frames, the lower your FPS will be. If the framerate is too low, frames will be repeated and the game becomes uncomfortable to view: an ugly, stuttering world. Competitive players seek out high framerates in an effort to reduce input lag, even at the expense of screen tearing (more on that below), while high-resolution early adopters may be satisfied with merely playable framerates at 1440p or 4K. The most common goal today is 1080p at 60 FPS. Because most games don't have a built-in benchmarking tool, the most important tool in your tweaking box is software that displays the current framerate. ShadowPlay or FRAPS work fine.

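To make the comparison concrete, each framerate implies a per-frame time budget you can weigh against the display's refresh interval. A quick back-of-the-envelope sketch in Python (plain arithmetic, not tied to any particular game or tool):

    # Frame time is the budget each frame gets; compare it to the
    # display's refresh interval to see whether the GPU keeps up.
    def frame_time_ms(fps):
        return 1000.0 / fps

    for fps in (30, 60, 144):
        print(f"{fps} FPS -> {frame_time_ms(fps):.2f} ms per frame")

    refresh_hz = 60
    print(f"A {refresh_hz} Hz display refreshes every "
          f"{1000.0 / refresh_hz:.2f} ms")
    # 30 FPS -> 33.33 ms, 60 FPS -> 16.67 ms, 144 FPS -> 6.94 ms

A game that can't finish its work inside 16.67 ms can't sustain 60 FPS on a 60 Hz display.
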
- Upscaling & downsampling

Lately, I'm seeing a few games offer rendering resolution settings—namely Ryse: Son of Rome and Shadow of Mordor. This setting lets you keep the display resolution the same (your display's native 1080p or 1440p, for instance) while adjusting the resolution the game is rendered at (but not the UI). If the rendering resolution is lower than your display resolution, the image will be upscaled to fit, and, as you'd expect, it will look like garbage. If you render the game at a higher resolution than your display resolution, which is an option in Shadow of Mordor, the image will be downsampled (or downscaled) back to your display's resolution and will look much better—see supersampling below for more on that—at a high cost to performance.

- Performance

Because it determines the number of pixels your GPU needs to render, resolution has the greatest effect on performance. Doubling both dimensions quadruples the work: a 3840x2160 frame contains four times as many pixels as a 1920x1080 frame. This is why console games which run at 1080p often upscale from a lower rendering resolution—that way, they can handle fancy graphics effects while maintaining a smooth framerate.

- Vertical sync & screen tearing

When a display's refresh cycle is out of sync with the game's rendering cycle, the screen can refresh just as the game has finished supplying one frame and started on another. The result is a 'break' called screen tearing, where we see portions of two or more frames at the same time. After low framerate, it's our number one enemy.

One solution to screen tearing is vertical sync (vsync). It's usually an option in the graphics settings, and it prevents the game from delivering a new frame until the display has completed its current refresh cycle. Unfortunately, vsync causes its own problems, one being that it contributes to input lag when the game is running at a higher framerate than the display's refresh rate (this AnandTech article explains it in technical terms).

- Adaptive Vertical Synchronization

The other big problem with vsync happens when the framerate drops below the refresh rate. If the framerate exceeds the refresh rate, vsync locks it to the refresh rate: 60 FPS on a 60 Hz display. That's fine, but if the framerate drops below the refresh rate, vsync forces it down to the next synchronized value: 30 FPS, for instance. If the framerate fluctuates above and below the refresh rate often, the result is stuttering. We'd much rather let the framerate sit at 59 than punch it down even further every time it dips.

To solve this, Nvidia's Adaptive Vertical Synchronization disables vsync any time your framerate dips below the refresh rate. It can be enabled in the Nvidia Control Panel, and I recommend it if you're using vsync.

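The per-frame decision Adaptive VSync makes is simple enough to sketch. Here's a minimal illustration of the logic in Python; the Renderer class and its vsync flag are hypothetical stand-ins, not Nvidia's actual driver code:

    # Hypothetical renderer state, for illustration only.
    class Renderer:
        def __init__(self):
            self.vsync = True

    def update_vsync(renderer, measured_fps, refresh_hz=60):
        # At or above the refresh rate: sync to it and avoid tearing.
        # Below it: tolerate some tearing rather than lock to 30 FPS.
        renderer.vsync = measured_fps >= refresh_hz

    r = Renderer()
    update_vsync(r, measured_fps=72)  # stays synced; output caps at 60 FPS
    update_vsync(r, measured_fps=54)  # vsync off; 54 FPS, possible tearing
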
- G-Sync & FreeSync

New technology is starting to solve this big mess. The problem all stems from one thing: displays have a fixed refresh rate. If the display's refresh rate could instead change with the framerate, we could eliminate screen tearing and, at the same time, eliminate the stuttering and input lag problems of vsync. You need a compatible video card and display for this to work, and there are two technologies now coming to market: Nvidia has branded its technology G-Sync, while AMD's effort is called Project FreeSync.

ANTI-ALIASING

If you draw a diagonal line with square pixels, their hard edges create a jagged 'staircase' effect. This ugliness (among other artifacts) is called aliasing. If resolutions were much higher, it wouldn't be a problem, but until display technology advances, we have to compensate with anti-aliasing.

There are many techniques for anti-aliasing, but supersampling (SSAA) is a useful one for explaining the process. It works by rendering frames at a higher resolution than the display resolution, then squeezing them back down to size; downsampling Shadow of Mordor from 5120x2880 to 1440p, as mentioned above, produces exactly this anti-aliasing effect.

Consider a pixel on a tile roof. It's orange, and next to it is a pixel representing a cloudy sky, which is light and blueish. Next to each other, they create a hard, jagged transition from roof to sky. But if you render the scene these pixels live in at four times the resolution, that one orange roof pixel becomes four pixels. Some of those pixels will be sky-colored and some will be roof-colored. If we take the average of all four values, we get something in between. Do that to the whole scene, and the transitions become softer. That's the gist, at least, and while it looks very good, supersampling is extremely computationally expensive. You're rendering each frame at a resolution two or more times higher than the one you're playing at—even with our four GTX Titans, trying to run supersampling at a display resolution of 2560x1440 isn't practical.

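The averaging step is the whole trick, and it fits in a few lines. Here's a sketch with NumPy that box-filters a frame rendered at double the target resolution; the array layout is my assumption, and real pipelines do this on the GPU:

    import numpy as np

    def downsample_2x(frame):
        """Average each 2x2 block of a (2H, 2W, 3) frame down to one
        pixel, producing an (H, W, 3) frame. Four rendered samples
        per displayed pixel is the core idea of supersampling."""
        h, w = frame.shape[0] // 2, frame.shape[1] // 2
        blocks = frame.reshape(h, 2, w, 2, 3)
        return blocks.mean(axis=(1, 3))

Feed it a (2880, 5120, 3) render and you get a (1440, 2560, 3) frame, with the roof-and-sky averaging described above applied everywhere at once.
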
All that expense is why there are so many more efficient alternatives:

- Multisampling (MSAA): Achieves good results while being much more efficient than SSAA. This is typically the standard, baseline anti-aliasing option in games.
- Coverage Sampling (CSAA): Nvidia's more efficient version of MSAA.
- Custom-filter (CFAA): AMD's more efficient version of MSAA.
- Fast Approximate (FXAA): Rather than analyzing the 3D models (as MSAA does, looking at pixels on the edges of polygons), FXAA is a post-processing filter, meaning it applies to the whole scene after it has been rendered, and it's very efficient. It also catches edges inside textures, which MSAA misses.
- Morphological (MLAA): Available with AMD cards, MLAA also skips the rendering stage and processes the finished frame, seeking out aliasing and smoothing it. As Nicholas Vining explains: "Morphological anti-aliasing looks at the morphology (read: the patterns) of the jaggies on the edges; for each set of jaggies, it computes a way of removing the aliasing which is pleasing to the eye. It does this by breaking down edges and jaggies into little sets of morphological operators, like Tetris blocks, and then uses a special type of blending for each Tetris block." MLAA can be enabled in the Catalyst Control Center.
- Enhanced Subpixel Morphological (SMAA): Another post-processing method, described as combining MLAA with MSAA and SSAA strategies. You can apply it with SweetFX.
- Temporal (TXAA): Supported on Nvidia's Kepler GPUs, TXAA combines MSAA with other filters and can help reduce the crawling motion on edges, which looks a bit like marching ants. It cannot, however, remove actual ants from inside your display. You should probably just throw that display out. As Vining explains: "The notion here is that we expect frames to look a lot like each other from frame to frame; the user doesn't move that much. Therefore, where things haven't moved that much, we can get extra data from the previous frame and use this to augment the information we have available to anti-alias with."
- Multi-Frame (MFAA): Nvidia's latest, exclusive to Maxwell GPUs. Whereas MSAA samples in set patterns, MFAA allows for programmable sample patterns.

ANISOTROPIC FILTERING

- Bilinear and trilinear filtering

Texture filtering deals with how a texture—a 2D image (and other data)—is displayed on a 3D model. A pixel on a 3D model won't necessarily correspond directly to one pixel on its texture (called a 'texel' for clarity), because you can view the model at any distance and angle. So, when we want to know the color of a pixel, we find the point on the texture it corresponds to, take a few samples from nearby texels, and average them. The simplest method of texture filtering is bilinear filtering, and that's all it does: when a pixel falls between texels, it samples the four nearest texels to find its color.

Introduce mipmapping, and you have a new problem. Say the ground you're standing on is made of cracked concrete. If you look straight down, you're seeing a big, detailed concrete texture. But when you look way off into the distance, where that road recedes toward the horizon, it wouldn't make sense to sample from a high-resolution texture when we're only seeing a few pixels of road. To improve performance (and prevent aliasing, Austin notes) without losing much or any quality, the game uses a lower-resolution version of the texture—called a mipmap—for distant objects.

When looking down this concrete road, we don't want to see where one mipmap ends and another begins, because there would be a clear jump in quality. Bilinear filtering doesn't interpolate between mipmaps, so the jump is visible. This is solved with trilinear filtering, which smooths the transition between mipmaps by taking samples from both.

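Both techniques are short in code. Below is a minimal sketch, assuming textures are NumPy arrays and each mipmap is half the size of the previous one; real hardware does this in dedicated texture units:

    import numpy as np

    def bilinear_sample(texture, u, v):
        """Blend the four texels nearest to the fractional texel
        coordinate (u, v), weighted by proximity. Assumes
        0 <= u <= W-1 and 0 <= v <= H-1."""
        x0, y0 = int(np.floor(u)), int(np.floor(v))
        x1 = min(x0 + 1, texture.shape[1] - 1)
        y1 = min(y0 + 1, texture.shape[0] - 1)
        fx, fy = u - x0, v - y0
        top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
        bottom = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
        return (1 - fy) * top + fy * bottom

    def trilinear_sample(mipmaps, u, v, level):
        """Blend bilinear samples from the two nearest mipmap levels.
        `level` is a fractional mip level picked from viewing
        distance; coordinates are halved at each level because each
        mip is half the size of the last."""
        lo = int(np.floor(level))
        hi = min(lo + 1, len(mipmaps) - 1)
        f = level - lo
        a = bilinear_sample(mipmaps[lo], u / 2 ** lo, v / 2 ** lo)
        b = bilinear_sample(mipmaps[hi], u / 2 ** hi, v / 2 ** hi)
        return (1 - f) * a + f * b

The fractional blend in trilinear_sample is exactly what hides the seam between one mipmap and the next.
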
- Anisotropic filtering

Trilinear filtering helps, but the ground still looks all blurry at shallow angles. This is why we use anisotropic filtering, which significantly improves texture quality at oblique angles.

To understand why, visualize a square window—a pixel of a 3D model—with a brick wall directly behind it as our texture. Light shining through the window creates a square shape on the wall. That's our sample area for this pixel, and it's equal in all directions. With bilinear and trilinear filtering, this is how textures are always sampled. If the model is directly in front of us, perpendicular to our view, that's fine, but what if it's tilted away from us? If we're still sampling a square, we're doing it wrong, and everything looks blurry.

Imagine now that the brick wall has tilted away from the window. The beam of light transforms into a long, skinny trapezoid covering much more vertical space on the texture than horizontal. That's the area we should be sampling for this pixel, and, as a rough analogy, this is what anisotropic filtering takes into account. It scales the mipmaps in one direction (like how we tilted our wall) according to the angle at which we're viewing the 3D object. This is a difficult concept to grasp, and I have to admit that my analogy does little to explain the actual implementation.

QUALITY SETTINGS & POST-PROCESSING

- Quality settings

What quality settings actually do varies between games. In general, they raise and lower the complexity of game assets and effects, but going from low to high can change a whole bunch of variables. Increasing the shadow quality, for instance, might increase the shadow resolution, enable soft shadows as well as hard shadows, increase the distance at which shadows are visible, and so on. Together, those changes can have a significant effect on performance. Texture quality, which raises and lowers the resolution of textures, tends to affect performance and visual quality a lot.

- Ambient occlusion

Ambient lighting exposes every object in a scene to a uniform light—think of a sunny day, where even in the shadows a certain amount of light is scattered about. It's paired with directional light to create depth, but on its own it's flat. Ambient occlusion attempts to improve the effect by determining which parts of the scene shouldn't be exposed to as much ambient lighting as others. It doesn't cast hard shadows like a directional light source; rather, it darkens interiors and crevices, adding soft, diffused shading.

Screen space ambient occlusion (SSAO) is an approximation of ambient occlusion used in real-time rendering, and it has become commonplace in games in the past few years—it was first used in Crysis. Sometimes it looks really dumb, like everything has a dark anti-glow around it. Other times it's effective at adding depth to a scene. All the major engines support it, and its success varies with the game and the implementation.

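To show the flavor of the idea (and only the flavor), here's a toy, depth-only version: darken a pixel when many of its on-screen neighbors are closer to the camera than it is. Real SSAO samples in 3D around each point and uses surface normals; everything below, including the parameters, is my simplification rather than any engine's actual code:

    import numpy as np

    def toy_ssao(depth, radius=3, strength=0.75, bias=0.02):
        """Toy screen-space ambient occlusion from a depth buffer.
        depth: 2D float array, 0.0 (near) to 1.0 (far). Returns a
        per-pixel factor to multiply the scene's ambient light by."""
        occlusion = np.zeros_like(depth)
        offsets = [(-radius, 0), (radius, 0), (0, -radius), (0, radius),
                   (-radius, -radius), (-radius, radius),
                   (radius, -radius), (radius, radius)]
        for dy, dx in offsets:
            # Note: np.roll wraps around the frame edges; a real
            # implementation would clamp instead.
            neighbor = np.roll(depth, shift=(dy, dx), axis=(0, 1))
            # A neighbor meaningfully closer to the camera occludes us.
            occlusion += (depth - neighbor > bias).astype(depth.dtype)
        occlusion /= len(offsets)          # fraction of occluding samples
        return 1.0 - strength * occlusion  # 1.0 = fully lit, lower = shaded
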
- High dynamic range rendering (HDRR)

High dynamic range was all the rage in photography a few years ago. The range it refers to is the range of luminosity in an image—that is, how dark and how bright it can be. The goal is for the darkest areas of a scene to be as detailed as the brightest areas. A low-dynamic-range image might show lots of detail in the bright part of a room but lose everything in the shadows, or vice versa. In the past, the range of dark to light in games was limited to 8 bits (only 256 values), but as of DirectX 10, 128-bit HDRR is possible. HDR is still limited by the contrast ratio of displays, though. There's no standard method for measuring this, but LED displays commonly advertise a contrast ratio of 1000:1.

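Because the display can't actually show that range, an HDR frame has to be tone mapped back into displayable values. Reinhard's operator is one classic, simple choice; this sketch is my simplification (a real pipeline works on luminance and applies gamma correction afterwards):

    import numpy as np

    def reinhard_tonemap(hdr, exposure=1.0):
        """Compress open-ended HDR values into the 0..1 range a
        display can show. Bright values roll off smoothly toward
        white instead of clipping."""
        scaled = np.asarray(hdr, dtype=float) * exposure
        return scaled / (1.0 + scaled)

    print(reinhard_tonemap([0.1, 1.0, 10.0, 100.0]))
    # -> [0.0909  0.5  0.9091  0.9901]: detail survives at both ends
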

- Bloom

The famously overused bloom effect attempts to simulate the way bright light can appear to spill over edges, a visual cue that makes light sources seem brighter than they are (your display can only get so bright). It can work, but too often it's applied with a thick brush, making distant oil lamps look like nuclear detonations. Thankfully, most games offer the option to turn it off.

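A basic bloom pass has three steps: keep only the pixels above a brightness threshold, blur them, and add the result back onto the frame. This sketch leans on SciPy's Gaussian filter for the blur, and the threshold and intensity values are arbitrary picks of mine (turn intensity up for the nuclear-detonation look):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simple_bloom(frame, threshold=0.8, blur_sigma=8.0, intensity=0.5):
        """frame: (H, W, 3) float array. Bright-pass, blur, then add
        the glow back so light appears to spill over edges."""
        bright = np.where(frame > threshold, frame, 0.0)
        # sigma of 0 on the last axis: don't blur across color channels.
        glow = gaussian_filter(bright, sigma=(blur_sigma, blur_sigma, 0))
        return frame + intensity * glow
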



- Motion blur

Motion blur is pretty self-explanatory: it's a post-processing filter that simulates the streaking seen on film when something moves while a frame is being captured. Many gamers I've seen in forums or spoken to prefer to turn it off, not only because it affects performance, but because it just isn't desirable. I've seen motion blur used effectively in some racing games, but I'm also in the camp that usually turns it off. It doesn't add enough for me to bother with any performance decrease.

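The crudest way to fake that film streaking is to blend each new frame with a decaying average of the frames before it. Modern games blur along per-pixel motion vectors instead, so treat this only as an illustration of the idea:

    import numpy as np

    class FrameBlur:
        """Blend each frame with an exponentially decaying history
        of previous frames; higher persistence means longer streaks."""
        def __init__(self, persistence=0.6):
            self.persistence = persistence
            self.history = None

        def apply(self, frame):
            if self.history is None:
                self.history = np.asarray(frame, dtype=float).copy()
            self.history = (self.persistence * self.history
                            + (1.0 - self.persistence) * frame)
            return self.history
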




- Depth of field (DOF)

In photography, depth of field refers to the distance between the closest and furthest points which appear in focus. If I'm taking a portrait with a shallow DOF, for instance, my subject's face might be sharp while the back of her hair begins to blur, and everything behind her is very blurred. In a deep DOF photo, on the other hand, her nose might be as sharp as the buildings behind her. In games, DOF generally just refers to the effect of blurring things in the background. Like motion blur, it pretends that our eyes in the game are cameras, and it creates a film-like quality. It can also affect performance significantly depending on how it's implemented. On the LPC, the difference was negligible, but on the more modest PC at my desk (Core i7 @ 3.47 GHz, 12GB RAM, Radeon HD 5970), my average framerate in BioShock Infinite dropped by 21 FPS when going from regular depth of field to Infinite's DX11 Diffusion Depth of Field.

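A toy version of the effect blurs each pixel in proportion to how far its depth is from a chosen focal plane. Here that's done by blending a sharp and a blurred copy of the frame; the depth-buffer layout and every parameter are assumptions for illustration, far simpler than something like Diffusion Depth of Field:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simple_dof(frame, depth, focal_depth=0.3, focus_range=0.1,
                   max_sigma=6.0):
        """frame: (H, W, 3) floats; depth: (H, W) floats, 0 near, 1 far.
        Pixels within focus_range of focal_depth stay sharp; pixels
        further away fade toward a fully blurred copy."""
        blurred = gaussian_filter(frame, sigma=(max_sigma, max_sigma, 0))
        out_of_focus = np.clip(
            np.abs(depth - focal_depth) / focus_range - 1.0, 0.0, 1.0)
        weight = out_of_focus[..., None]   # broadcast over color channels
        return (1.0 - weight) * frame + weight * blurred
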