Graphics Cards: Some knowledge about graphics cards

star89

Disciple
Hello Guys,


I'm going to buy an HD 6850 card in a week, but I have a little query about graphics cards (I just need some insight into their capabilities).
While deciding which card to buy, I saw that different games have different requirements for graphics processing, and by graphics processing I mean only video processing (maybe I'm wrong). So if a graphics card can play an HD movie (1080p), which I think almost every low-end card can, why can't it play a game at full resolution?
I know I'm missing something here, so it would be very nice of you if you could help.
 
I am not sure, but I guess PC games need to keep large frames in buffers, so a lot of GPU RAM is required. Apart from that, a lot of geometry and physics calculations are needed to generate the scenes in games. At higher resolutions the frame size increases, as does the data required to generate each scene, which low-end graphics cards cannot handle smoothly.
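To put rough numbers on the frame-size point, here is a quick back-of-the-envelope sketch in Python. The figures are purely illustrative; real games also keep depth buffers, double/triple buffering and (above all) textures in V-RAM, which is where most of the memory actually goes:

```python
# How big is one uncompressed 32-bit frame, and how much raw data
# must be pushed out per second just to show it 60 times a second?
BYTES_PER_PIXEL = 4  # 8 bits each for R, G, B, A

resolutions = {
    "1280x720":  (1280, 720),
    "1920x1080": (1920, 1080),
    "2560x1600": (2560, 1600),
}

for name, (w, h) in resolutions.items():
    frame_mb = w * h * BYTES_PER_PIXEL / 1024 / 1024
    bandwidth_gbs = frame_mb * 60 / 1024  # GB/s just to output 60 fps
    print(f"{name}: {frame_mb:5.1f} MB per frame, "
          f"~{bandwidth_gbs:.2f} GB/s at 60 fps")
```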

Correct me guys, if I am wrong.
 
You can play at full resolution, although depending on the game it might look like a slideshow.
Rendering games is much more taxing than displaying videos, simple as that.
 
Videos are basically encoded as a frame (image) followed by data describing how the next frames differ from it, so the GPU/CPU is doing relatively simple calculations.
For games, the GPU is given the geometry of various surfaces and the textures that have to be applied to them (plus lighting and shadows). All of this can be viewed from any angle, so the actual view to be displayed has to be calculated for every frame.
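Here is a toy sketch of the keyframe-plus-differences idea in Python. Real codecs like H.264 are far more sophisticated (motion compensation, transforms, entropy coding); this only illustrates why decoding is mostly cheap arithmetic rather than full scene construction:

```python
# Rebuild frames from one keyframe plus per-pixel difference data.
keyframe = [10, 10, 10, 10]          # a 4-pixel "image"
deltas = [
    [0, +5, 0, 0],                   # frame 2 = frame 1 + delta
    [0, 0, -3, +2],                  # frame 3 = frame 2 + delta
]

frame = list(keyframe)
print("frame 1:", frame)
for i, d in enumerate(deltas, start=2):
    frame = [p + dp for p, dp in zip(frame, d)]
    print(f"frame {i}:", frame)
```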
 
@star89 what @kreacher has said is the essence of the difference; videos and other pre-rendered clips are basically codecs which contain the relevant data in the form of a basic algorithm that the CPU or the GPU simply needs to decode and read.

So let me give you a more general, elaborate flow chart.

What happens when you start to play a video file?

You locate the video you want to play --> double-click on it [this launches the appropriate program / protocols] --> the CPU engages with the program in question --> the program helps the CPU / GPU decode the data blocks and reassemble them on screen as images / films / video clips.

In this process the CPU / GPU is just reading information and applying it; RAM usage is not high, because the RAM is just a transit point for data, and hard-drive usage is moderate, because data is only being read off it.

What happens when you play a game?

You initiate the game; initially a few cut-scenes and logos are displayed. These are examples of pre-rendered media, not unlike the videos discussed earlier.

Once you enter the game, the CPU needs to read, and in a few instances even write, data on the hard drive [save games, checkpoints]. Unlike videos, which are data chunks encoded in a certain format, a game is an application unto itself --> so the CPU starts loading the map or the immediate map items / textures / lights et al. This puts up a wall of calculation, something that was not happening while viewing a simple HD video --> after loading all this data onto a canvas [and doing the major physics algorithms], the CPU forwards it to the graphics card, which applies its own calculations and stores ready frames in its buffer V-RAM --> since you have a dynamic camera in a game, the CPU needs to take that into account and keeps making minor corrections for light and draw distance, storing all of this in RAM until the graphics card needs to access it [RAM usage heads north] --> as you interact with the world and vice versa, the CPU also has to continually update your character statistics [more work for the CPU] --> as you move through different lighting scenarios you make the graphics card work even harder; in fact, simulating textures like water [rather than looping pre-rendered ones] takes a lot out of the graphics card and, depending on the detail level, can bring even the most powerful cards to their knees.
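In code terms, that whole chain is a loop that runs over and over, once per frame. A minimal, purely illustrative sketch (the function names here are hypothetical, just to show the shape of the loop; a real engine splits this across CPU threads and the GPU driver):

```python
import time

def simulate_physics(state, dt):      # CPU: movement, collisions, AI
    state["t"] += dt
    return state

def build_draw_calls(state):          # CPU: decide what is visible
    return [f"mesh@t={state['t']:.2f}"]

def render(draw_calls):               # GPU: vertices -> pixels -> V-RAM
    return f"frame with {len(draw_calls)} draw call(s)"

state = {"t": 0.0}
TARGET_DT = 1.0 / 60                  # aim for 60 fps

for _ in range(3):                    # three frames, for illustration
    start = time.perf_counter()
    state = simulate_physics(state, TARGET_DT)
    frame = render(build_draw_calls(state))
    elapsed = time.perf_counter() - start
    print(frame, f"(CPU+GPU work took {elapsed * 1000:.2f} ms)")
```

None of this per-frame work exists when playing a video, which is why even a low-end card that decodes 1080p happily can choke on a game at the same resolution.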

A few factors that further compound the problem of playing games --
  • There used to be two distinct types of shaders, vertex shaders and pixel shaders, and depending on the graphics card these were fixed in number. Since the advent of DX10 this was revised in favour of a common pool of geometry shaders / stream processors, where the scene at hand decides how many act as vertex shaders and how many as pixel shaders. This has made the whole arrangement more balanced and more efficient.
  • The dedicated V-RAM of a graphics card offers a performance boost when a scene is very large and its textures must therefore be kept close at hand. This does not mean an entry-level graphics card with an inflated V-RAM cache will outperform a mid-range card with a smaller one; the latter manages its memory far more efficiently.
  • Screen resolution -- very important; a larger screen requires a graphics card with a faster pixel fill rate and voluminous V-RAM in which it can store completely uncompressed texture data, which drives up image quality.
  • Screen refresh rate -- another important parameter that is often ignored. Most monitors come with an optimum refresh rate [for LCD panels ~60 Hz is typical; 3D panels go up to ~120 Hz and CRT monitors to ~100 Hz]. If your frame rate falls too low relative to your screen's refresh rate, you will notice screen / frame tearing, stuttering et al. There is an option in games called V-Sync that can be turned on, but it extracts a huge toll from your graphics card, so it is advised to switch it off if your card cannot take the game's load (see the sketch after this list).
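On the refresh-rate point, here is a small sketch of how a fixed refresh rate quantizes the frame rate. It assumes simple double-buffered V-Sync on a 60 Hz panel, and the frame times are made up for illustration:

```python
import math

REFRESH_HZ = 60
refresh_interval_ms = 1000 / REFRESH_HZ   # ~16.7 ms per refresh

# With V-Sync on, a finished frame can only be shown on the next
# refresh boundary, so the effective rate snaps to 60, 30, 20 ... fps.
for frame_time_ms in (10, 17, 25, 40):
    ticks = math.ceil(frame_time_ms / refresh_interval_ms)
    effective_fps = REFRESH_HZ / ticks
    print(f"frame ready in {frame_time_ms} ms -> displayed after "
          f"{ticks} refresh tick(s), effective {effective_fps:.0f} fps")
```

Note how a frame that takes even slightly longer than 16.7 ms drops you straight from 60 fps to 30 fps, which is why V-Sync feels punishing on a card that can't keep up.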

Hope this answers your query, Cheerio!!
 
Sorry, but there's a lot of misinformation in the post above.
I would try explaining, but I'm no expert on GPU pipelines.
Just cautioning others who might mistake it for correct info.

"videos are basically codecs" .. no they are data comprised of keyframes and differences in subsequent frames.
"vertex shaders and pixel shaders.. were revised in favour of a common pool of geometry shaders" .. pixel/vertex/geometry shaders are all separate types.
Vsync isnt a toll on the GPU.. it generates frames to coincide with a screen refresh and this obviously limits your max FPS to say 60Hz in case of LCDs. Some say this actually makes things smoother without fluctuations from min to max FPS.

I wish we had left it at Kreacher's succinct post.
 

Okay, obviously you have something personal against my posts. I am an animation student, and although I am also no expert on this point, I highlighted a few points that you either did not read or have not fully comprehended --

For the shaders, I embedded a hyperlink to the wiki page.

For V-Sync, yes, I agree I was wrong. But V-Sync is not a frame-rate cap; it is synchronization of the individual frames with the refresh rate in question, to minimize screen / texture tearing issues. Again, the appropriate links were embedded.

 
OHK, so what I would tell you about how a GPU essentially works is this (I am not talking about displaying video, as it is fairly trivial and you shouldn't worry about it):

When you launch a game, the CPU loads various textures and files and then hands this information to the GPU, telling it how the images should look on your monitor. The GPU works its magic by decompressing textures (which are compressed so they take less space on your HDD) and so on. But what the GPU mainly does is construct each frame you see on your monitor as a "skeleton" first, without any colour or texture, built purely out of triangles and polygons; this is what is referred to as vertex calculations. After making this skeleton of polys and triangles, it fills it in with colour and textures as necessary, a process known as rasterization. Now that the frame is ready, it is stored in VRAM until the time comes to display it. The GPU has to do all of this, from beginning to end, about 60 times per second (in the recommended or best-case scenario, pick what you want) to give you a fulfilling experience, so you can see how much work it has to do.

NOW TO DEBUNK THE BIGGEST MYTH IN INDIAN MINDS: THAT GRAPHICS CARDS WITH THE BIGGEST VRAM ARE THE FASTEST. IT IS JUST FALSE!! From what I explained just now, the VRAM will only run out of space if the GPU is very fast (and I mean really fast, but fast GPUs tend to come with big VRAM anyway) and the game is being run at a very high resolution and very high quality. If you take a GTX 680, which is very fast, and pair it with 512 MB of VRAM, of course it will run out of memory; but if you pair a GT 520 with 2 GB of memory and people then consider it very fast because of its large memory, that is just wrong! (A friend once asked me if an Xbox had about 50 GB of VRAM, and he got a slap from me.) More often than not, people don't understand this logic and fall prey to marketing gimmicks like the GT 520 2 GB card on offer on Flipkart; I don't know what the manufacturer was thinking when making it (probably planning to rip people off).
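To make the skeleton-then-fill idea concrete, here is a toy software rasterizer in Python: one triangle's vertices stand in for the vertex stage, and the per-pixel inside test stands in for rasterization. A real GPU does this for millions of triangles in parallel, in dedicated hardware:

```python
# Rasterize a single triangle onto a 12x12 character grid.
def edge(ax, ay, bx, by, px, py):
    # signed area test: >= 0 on the interior side for this winding
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

# triangle vertices on the grid (the "skeleton" / vertex stage)
v0, v1, v2 = (1, 1), (10, 2), (5, 10)

# rasterization: test every pixel against the three edges and fill
for y in range(12):
    row = ""
    for x in range(12):
        inside = (edge(*v0, *v1, x, y) >= 0 and
                  edge(*v1, *v2, x, y) >= 0 and
                  edge(*v2, *v0, x, y) >= 0)
        row += "#" if inside else "."
    print(row)
```

In a real pipeline the "fill" step also samples textures and runs a pixel shader per covered pixel, which is exactly where high resolutions get expensive.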

OHK, so IN A NUTSHELL, THIS HAPPENS WHEN YOU LAUNCH A GAME:

CPU CALCULATES STUFF (some physics and such), loads maps and textures, and sends it all to the GPU, telling it what to display --> TEXTURES ARE DECOMPRESSED (sometimes the CPU does this too, as in id's RAGE) --> THE FRAME IS CONSTRUCTED LIKE A SKELETON, WITHOUT COLOUR OR TEXTURE, OUT OF POLYGONS AND TRIANGLES (GEOMETRY AND VERTEX CALCULATIONS) --> COLOUR AND TEXTURE ARE FILLED IN, A STEP KNOWN AS RASTERIZATION (pixel pipelines used) --> FRAMES ARE STORED IN VRAM --> THEN DISPLAYED IN THEIR EYE-POPPING GLORY (all of this is repeated many times a second, to provide the illusion of motion and maintain good FPS) --> THEN YOU HAVE FUN!!
 