The 4800 series is the latest offering from AMD/ATI. The 4850, along with the current flagship product the 4870, was a highly anticipated graphics solution; it was known it would be priced aggressively... and it is, in fact making it the best bang-for-buck enthusiast GPU solution available.
I have two cards for this review to compare: a 4850 and an 8800GTS 512, which is almost equal in performance to the 9800GTX. In fact, clock for clock, the 8800GTS 512 takes the lead, but that's not important here.
The 4850 now sits below the $200 price point. It was first rumored to have 480 shader processors, the parallel processing units that make up the core's processing power, but it is now known that this wasn't the case. The 480 figure may even have been floated by ATI themselves, to spring an unexpected surprise on their competitors later on. While Nvidia earned much from R&D products that worked out great, ATI wasn't having a great time, but they may have been like the duck that seems unruffled floating on the water's surface while paddling furiously all the time underneath, its little feet responsible for the thrust and turns it desires.
In my opinion, Nvidia made a mistake in chasing the 'fastest single-GPU solution around' without taking into account at least how fast it actually needed to be, never mind the record-breaking die size, and totally ignoring power consumption (apparently we welcome higher power consumption and prefer it over lower). Had the ATI offering been the rumored 480SP part instead of what it is now, the GTX 280 would have shone somewhat as a new card (somewhat, because enthusiasts were tired of G80 core shrinks and optimizations being released as new cards), even though the performance jump over the previous generation is not at all reasonable (compare the jump from 7900GTX to 8800GTX), and even though there are some major architectural differences between the G80 and the GT200/D10-U. Now the fastest solution is priced at twice the price of the something-near-the-fastest solution, the 4870. So they got into a situation: the approach they have followed for some time now is costing them.
The 4850 and 4870 use the same RV770 core; the difference lies in the clocks and in the memory they are equipped with.
[BREAK=4850 Specs]
RV770 Pro specs
o 800 stream processing units
o 956 million transistors on a 55nm fabrication process
o Core clock: 625MHz
o Memory clock: ~2000MHz effective
o Up to 24x Custom Filter Anti-Aliasing (CFAA) for improved quality
o 1 trillion floating point calculations/sec (1 teraflop)
o 16 ROPs
o 40 TMUs
o 256-bit GDDR3/GDDR4 memory interface
PCI Express 2.0 x16 bus interface
Microsoft® DirectX® 10.1 support
Unified Superscalar Shader Architecture
DXTC and 3Dc+ texture compression
High resolution texture support (up to 8192 x 8192)
Accelerated physics processing
Dynamic Geometry Acceleration
Programmable Tessellation unit
Accelerated geometry shader path for geometry amplification
Memory read/write cache for improved stream output performance
DisplayPort output support on 24- and 30-bit displays at all resolutions up to 2560x1600
ATI CrossFireX™ Multi-GPU Technology (2, 3, or 4 GPUs)
Super AA (ATI CrossFireX™ configurations only)
Adaptive super-sampling and multi-sampling (like TSAA on Nvidia)
Gamma-correct AA
All anti-aliasing features compatible with HDR rendering
ATI Avivo™ HD video and display technology, with Unified Video Decoder 2 (UVD)
ATI PowerPlay™ Technology: constantly monitors GPU activity, dynamically adjusting clocks and voltage based on the user scenario, triggering thermal actions as required
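The headline 1-teraflop and ~2000MHz-effective figures in the list above can be sanity-checked from the other specs. A quick sketch, assuming one multiply-add (2 FLOPs) per stream processor per clock and GDDR3's double data rate:

```python
# Theoretical RV770 Pro numbers, derived from the spec list above.
# Assumption: each stream processor retires one MAD (2 FLOPs) per clock.
stream_processors = 800
core_clock_hz = 625e6            # 625 MHz core clock
flops_per_sp_per_clock = 2       # a multiply-add counts as 2 ops

tflops = stream_processors * core_clock_hz * flops_per_sp_per_clock / 1e12
print(f"{tflops:.2f} TFLOPs")    # 1.00 TFLOPs, matching the spec

# Memory bandwidth: 256-bit bus, 993 MHz GDDR3, double data rate.
bus_bytes = 256 // 8             # bus width in bytes
effective_mhz = 993 * 2          # ~2000 MHz effective
bandwidth_gbs = bus_bytes * effective_mhz * 1e6 / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")  # ~63.6 GB/s
```

So the marketing teraflop is simply shader count times clock times two.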
Specs table I made
Sapphire HD4850 Specs:
Pretty much stock everything... see above for the specs; memory is clocked at 993MHz.
[BREAK=Pics]
accessories - HDMI dongle, DVI-to-analog converter, composite cable, S-Video to video converter, driver CD, CrossFire bridge, manual
naked
fan ratings
transparent plastic cover covering the HS underneath
The opponent
[BREAK=Hardware/Software used]
Test Configuration:
OSes used, with their latest service packs and required hotfixes:
Windows XP x86 Professional Edition SP3
Windows XP x64 Professional Edition SP2
Windows Vista Ultimate x64 SP1
Processor:
E2160 @ 3.6GHz (Conroe core, M0 stepping), 400MHz FSB
Motherboard:
Biostar p35d2a7 ver 5.x (bios p35ba919)
PSU:
Corsair HX620 / VX450 / PoV Black Diamond / Chieftec 550 (used for overclocking stability checks; PSUs borrowed from friends)
Display:
Dell 2408WFP
Drivers:
Catalyst 8.7 beta
ForceWare 175.80
GPUs:
Sapphire 4850 512MB
Core clock: 625MHz
Memory clock: 993MHz (x2)
MSI 8800GTS 512 OC
Specs:
Core clock: 678MHz
Memory clock: 974MHz (x2)
FRAPS is used for benchmarking real-time gameplay performance; for some titles their built-in benchmarks are used instead, whichever is better for gameplay performance testing. Using FRAPS for FPS results is straining and time consuming, but done carefully, it gives you the average FPS you want with fairly small deviations between runs.
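The acceptance rule I followed can be sketched roughly like this (the sample FPS numbers are hypothetical; only the idea of retrying until runs agree closely reflects my actual practice):

```python
# Rough sketch of the FRAPS run-averaging idea: keep re-running a scene
# until the per-run average FPS values agree within a small spread,
# then report their mean. Sample numbers below are made up.
def stable_average(run_fps, max_spread=1.5):
    """Mean of the runs if they agree within max_spread FPS, else None."""
    if max(run_fps) - min(run_fps) > max_spread:
        return None  # too much deviation between runs - bench again
    return sum(run_fps) / len(run_fps)

print(stable_average([41.2, 40.5, 41.0, 40.8]))  # consistent -> ~40.9
print(stable_average([41.2, 45.9, 40.8]))        # inconsistent -> None
```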
I have maxed out the settings in games during benchmarking. Various other combinations of settings that I deemed superior for both gameplay and reasonable performance have also been chosen, on top of the basic maximum settings used for each title. I wanted to show you every bell and whistle the graphics engines have available, as far as I could, and benchmark using those settings.
PS: as a rule, I have left a space on the graph between each consecutive resolution for easier reading. But where results for some resolutions are omitted from a graph, this can cause confusion, since more than one 'gap' is left. In such situations, just remember that one space separates each resolution and that the remaining spaces stand for the omitted results. Please check the relevant legend to figure out which is which; it shouldn't be difficult. I have even mentioned most of the settings in the pics.
ready now....
[BREAK=Test Drive Unlimited]
Test Drive Unlimited results
16xAF forced via the respective control panels
AA was forced at 8x using the 'enhance application settings' feature in the Nvidia control panel
For Nvidia, transparency AA was set to SSAA via the control panel
For ATI, Adaptive AA was set to SSAA with the slider moved to the performance side (left)
Windows XP x86 used
FRAPS with the 'American Duel' race was used for benching; it followed a test pattern like DiRT's (read ahead for DiRT)
2x/4x AA was set in-game, not forced.
8x AA could not be forced via ATI Catalyst Control Center (CCC), so those results are not included for the ATI card; blank spaces there.
TSAA/Adaptive AA helps in TDU and also lessens the blurriness of some fence textures, so I have enabled it.
ATI: "Adaptive anti-aliasing is a technique that applies a combination of multi-sampling (MSAA) and super-sampling (SSAA) on 3D objects to improve edge smoothness and fine detail. This feature renders 3D objects containing transparencies more realistically, providing exceptional levels of image quality while maintaining performance."
With TSAA the G92 GTS wins by a fair margin, but when AA is turned on (2x or 4x) and the resolution is raised above 1024x768, AA starts hurting its performance and it falls behind the 4850.
At all resolutions without AA but with TSAA, the 8800GTS 512 wins by a somewhat considerable margin.
Without TSAA and without AA at 1920x1200, the 8800GTS 512 wins by a large margin, and likewise at other resolutions. This game prefers the Nvidia G92.
[BREAK=Dirt]
Dirt results
FRAPS used in the fourth 4x4 vehicle race.
The strategy was to play on the Very Easy setting, so the vehicles raced slowly. Benching was done by keeping roughly a 5-foot distance behind the vehicles (staying at the back, looking at them) and following them as they traversed at a rather medium speed for one lap of the race. Their positions interchanged during the repeated tests, but surprisingly, the average FPS differences recorded didn't go above 1.5 FPS.
Windows XP x86.
The 4850 looks better here.
[BREAK=FarCry]
FarCry HDR+8xAA results
My all-time favourite. The gameplay rocks. Great AI for NPCs, a nice environment; the 64-bit patches along with 8x AA bring it to its climax. The 32-bit version benefits from the 64-bit patches, and IQ is the same on both the 64- and 32-bit versions. The experimental AA patch is available for the x86 version only.
The executable from the bin32 directory is used (x86 version benched).
64-bit upgrade + 64-bit exclusive content patches installed.
Also used is the 1.4 experimental HDR+AA patch.
XP x86 is used for testing.
In-game settings set to Very High; Ultra High for water.
16xAF enabled by setting r_Texture_Anisotropic_Level = "16"
e_vegetation_sprites_distance_ratio set to 10 (the default value is 1)
r_Hdrrendering = 2
(all three go in system.cfg in the main directory)
FRAPS; the Pier level; a fight with the mercs in the lower jungle. Their positions were fixed, hence FPS was almost the same through every repeated test.
The 4850 leaps well ahead of the G92 GTS with 8x AA at 1600x1200 and above. Only a little testing was done here though. The game is at its max, almost; some more cvars could be edited, e.g. to increase the grass view distance, but I have used only the settings generally known to give better IQ in FarCry.
One cannot neglect this title yet. Have a look at the IQ comparisons. I've had lots of PMs from people asking for FarCry settings, how to enable HDR, etc. Please install the patches I did and play the game with or without AA with the 1.4 pre-release patch. It's definitely a beauty in its own right.
[BREAK=IQ comparison: Far Cry]
FarCry IQ tests
settings same as in the previous benchmarking
GTS 512
The Very High water reflections are not rendered... is it due to some malfunction resulting from the 1.4 pre-release patch, or due to the viewing distance being lowered purposely on the G92?
A slightly better image, the water part aside.
[BREAK=Oblivion]
Elder Scrolls 4: Oblivion(HDR + AA) w/ Quarl's QTP3 results
A mesmerizing game... it keeps you glued for hours at a stretch. The background music is tempting, the dialogue almost unprecedented, not to mention the graphics with QTP3 and 16xAF. Nice.
patched to version 1.2.0416
16xAF via resp. ctrl panels
QTP3 textures mod
8x AA forced via CCC / Nvidia control panel along with HDR (HDR is enabled via the in-game settings)
All settings maxed out; all sliders moved to the max
Vista Ultimate x64 sp1 is used
FRAPS used for real-time benching of a fight in the jungle near the Imperial City (near the dungeon Nenalata, for the Umbacano mission).
Oblivion performs better in Vista than in XP, I think.
GTS 512 is a better performer here
[BREAK= IQ comparison: Oblivion 16xAF + QTP3]
Oblivion IQ tests
QTP3 is responsible for the excellent texturing and also for the parallax/bump maps that make it look great. QTP3 is a great mod for Oblivion; 4096x4096 textures, I believe. Some of the best textures used in games, and literally 20 times bigger than the stock ones: the mod is 2GB of textures, and the Oblivion folder grows from the basic 4GB installation to 7GB. Oblivion is classified as an RPG.
Settings are the same as in the previous Oblivion benchmark, but AA is not forced via the control panel; it's HDR only for the comparison.
Look at the granularity of the pillar texture in the first set; the 4850 pic is much sharper. For the second set, the difference lies in the shadows rendered in the circled part between her breasts and in the lace's shadow. The shadows are much richer on the 4850, and perhaps more deeply rendered, as can be seen.
This is with 16xAF forced via the control panel. All in-game settings are maxed out; bloom is accordingly off. No AA is forced via the control panel for the IQ comparisons. Windows XP x86 Professional Edition is used. Drivers are 175.80, one of the best for the 8800GTS 512, and 8.7 beta.
[BREAK= PT Boats DX10 benchmark]
PT Boats DX10 Benchmark results
I have included this benchmark as it is said to be a pure DX10 derivation. A DX9 mode is available too, but I didn't use it. Very realistic water.
Nvidia cards perform better, even with AA.
16x, 8xQ and 16xQ are not available on the 4850, so I left blank spaces for them; don't confuse these with the spaces left between two different resolutions. Also, 1920x1200 4x is not doable on the 4850, nor is 1280x960 8xAA; those spaces are blank too.
[BREAK=Call of Juarez DX10]
Call of Juarez DX10 results
The Call of Juarez Enhancement Pack changes the game substantially, introducing various changes including gameplay modifications:
- Improved Windows Vista support.
- DirectX 10 version with improved visuals added.
Next generation material shaders
Fully dynamic realtime shadowing
true HDR + AA
particles via geometry shaders
soft edged grass
- DirectX 10 benchmark added.
- Re-designed stealth levels for better gameplay.
- Improved animations in cut-scenes.
- Many minor fixes.
FRAPS was used during a gunfight in episode X (the 10th).
The fight lasted only 32-34 seconds: a position down the hill, at the beginning of the episode. Taking into account where, when, in what numbers and in what groups the NPCs attack (three groups coming consecutively), I timed the fight and made it a playground for FPS benchmarking. The player faced a side of the hill after stepping out of a room, reducing the FPS variation as needed. This was not difficult to do. Runs varied by 1 FPS, and once they settled to 0.5, I recorded the results. Four runs per setting of a resolution, more if unreasonable deviation was experienced. That makes 28 x 5 x 2 (I had another fight to bench/compare with, just in case) = roughly 280 runs [28 is for 7 resolutions and their 3 AA modes].
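The run count above tallies like this (figures taken from the paragraph; the 28 comes from 7 resolutions times 4 AA settings, counting no-AA):

```python
# Tallying the Call of Juarez DX10 FRAPS runs described above.
resolutions = 7
aa_settings = 4        # no-AA plus the three AA modes -> 28 combinations
runs_per_setting = 5   # four runs minimum, a fifth on deviation
scenes = 2             # a second gunfight kept as a cross-check

total_runs = resolutions * aa_settings * runs_per_setting * scenes
print(total_runs)      # 280
```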
An ATI game; ATI leads.
I have included benchmarks/IQ comparisons of the most graphically advanced titles around. The DX10 version of Call of Juarez is one of them, and thus one more reason to use the broken OS, Vista. This version was released long ago and has some great DX10 features. Also, unlike Crysis, you can't manually edit a config.ini or cvar file to make the DX10 effects appear in DX9; some really DX10-only shaders are used, so I no longer wonder whether it's a true DX10 game. A lot of effort over quite some time has been put into the DX10 version; that can't be denied.
[BREAK=Call of Juarez DX10 benchmark]
Call of Juarez DX10 Benchmark results
I included the DX10 benchmark too. If only it had some good AI etc., it would have saved gallons of my time above. I ran it anyway, since it took so little time, doing only 2 runs per resolution as there was no change in FPS at all whatsoever, and only at 5 resolution settings. I used SSAA (super-sampling anti-aliasing), a more strenuous AA mode, and also ran it with the E2160 at its default 1.8GHz.
A big lead for ATI.
[BREAK= IQ comparison :Call of Juarez DX10]
Call of Juarez DX10 Iq tests
4850 512: the whip is not blurred; the trees and their shadows seem better.
8800GTS 512: the whip is blurred in part, and the trees seem different. Is the negative LOD bias not coming into play? Can't be that. The IQ would then supposedly be better on the 4850 that way; but wait, look further.
4850 512: grass isn't rendered in some places, thus resulting in better FPS on the 4850? That's what it seems like. In that case, the built-in DX10 benchmark would have a definite advantage for the 4850. Add to that that ATI cards own this game. Is ATI cheating this way?
8800GTS 512: the grass is proper, extensively present.
[BREAK=Call of Juarez DX9]
Call of Juarez DX9 results
Considering that the DX9 version of CoJ looks a lot inferior to the original DX10 one (and can't be tweaked to look like it), I haven't benched at all the resolution settings I did for the DX10 version. However, the DX9 version does benefit from the textures and assets of the DX10 version, and hence I thought of benching it too using FRAPS.
Same scene with FRAPS as in DX10.
The 32-bit executable from the same game installation was used.
Windows XP x86 Professional Edition was used.
Interesting results: the G92 beats it at some settings, even with AA enabled. I knew this would turn up something; that's why I felt like benching it.
[BREAK=Lost Planet v1.4 DX10 64-bit benchmark -Snow]
Lost Planet v1.4 DX10 64-bit Benchmark results -Snow
The built-in benchmark was used for both Snow and Cave. It involves a lot of AI calculation, given its nature. I ran the bench twice per resolution setting.
Patch 1.4 has some pretty neat DirectX 10 enhancements –
Specialized DirectX 10 Geometry Shaders
Better depth of field effects
Improved motion blurring
Fur Shading
and makes the DX9 version look bad compared to it.
This is an 'Nvidia game': it performs better on Nvidia cards. A game by Capcom, it has some really great graphics. And the DX10 mode actually performs better than DX9 on a G80 GTX/Ultra, though not on the G92 etc.
The cards begin to close in on each other as the resolution is raised or AA is enabled; the GTS 512 performs better though.
Also, the GTS 512 is unable to do 1920x1200 8x AA; its average FPS falls to 5 or so, so I have left a relevant blank space.
[BREAK=Lost Planet v1.4 DX10 64-bit benchmark -Cave]
Lost Planet v1.4 DX10 64-bit Benchmark results -Cave
Again, the GTS 512 is unable to do 1920x1200 8x AA; its average FPS falls to 5 or so.
[BREAK=Assassin's Creed DX10.1 64-bit]
Assassin's Creed DX10.1 64-bit results
It is said to be the first DirectX 10.1 title, in its unpatched version, and that's what I used for benchmarking. They say DX10.1 mode removes a render pass; I didn't notice any special glitching because of it. A title by Ubisoft this time.
DX10.1 is said to improve AA performance on ATI cards, and indeed it did. But when enabled through the in-game settings, the output was displayed with a milky haze: the shadows lost their charm and the game became too bright. I was disappointed at first, but when I chose other settings, say 2x AA instead of 4x, it became normal, splendid with those settings. Even better, when the previous resolution + AA setting was chosen again, the bad effect didn't reproduce; the game was rich with the smoothness of AA, the shadows, and the rich culture of the Assassin brotherhood. So DX10.1 and AA it is. I was going to install the 1.02 patch, but I would have regretted it if I had.
My method of benchmarking using FRAPS:
Assassin's Creed has no method for timedemo creation. My benchmarking run uses the map of Damascus; it has really nice shadows and it was also used by AnandTech for their testing. They also used the in-game cutscenes for testing, which I thought was wrong: FPS is always high in such scenes and they bear very little relation to actual gameplay performance. Here's what I did. I traversed to the Assassin's Bureau in that city, which automatically saved my location, and climbed to the top of the roof. Then I chose two points: one was the rooftop where I stood, the other a remote spot in some direction where the city map ended. First I spent time figuring out which path I could traverse with the fewest errors. All the paths I chose ran across the rooftops, and I had to do a little timing for the jumps, but that was not the important part; it was easy remembering where to jump, from which ledge to which wall, and where to point the mouse while moving, since the player moves in the direction of the mouse plus the direction keys. Speaking of pointing the mouse, I made it a point to choose a path where, to move ahead to the end point, I instinctively needed to look at crowded (NPC-heavy) places or places where FPS dropped (complex geometry). Thus I chose the path based on the mouse movement involved and the probability of errors (diverting from the desired path) arising while traversing it.
I took 4-5 runs for every resolution setting this way, say 5 benchmark runs at 1680x1050 2x AA, and timed each traversal to check that I didn't take more or less time per run to reach the destination, which would change the average FPS, most likely increasing it. The timing was about 1 minute 05 seconds. I allowed a 3-second breach in every run; much more than that, say 5 seconds extra, and the average FPS would start going up by 0.4 FPS or so. So it wasn't very easy.
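As a sketch, that run-acceptance rule amounts to the following (the 65-second target and 3-second allowance are the figures quoted above; the helper itself is mine):

```python
# Hypothetical helper mirroring the Assassin's Creed run-timing check:
# a traversal only counts if its duration stays close to the target,
# since shorter or longer runs skew the average FPS.
TARGET_SECS = 65      # roughly 1 min 05 s per rooftop traversal
ALLOWANCE = 3         # seconds of breach tolerated per run

def run_is_valid(duration_secs):
    """True if the run's length is within the tolerated window."""
    return abs(duration_secs - TARGET_SECS) <= ALLOWANCE

print([run_is_valid(t) for t in (64, 67, 71)])  # [True, True, False]
```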
Assassin's Creed favors Nvidia, I suppose, but with AA on (2x or 4x) and in DX10.1 mode, the 4850 stomps its opponent.
[BREAK=AMD Ping pong Demo DX10.1]
AMD Ping Pong DX10.1 global illumination demonstration
Said to be the 'first DX10.1 game', Ping Pong is a great demonstration of what DirectX 10.1 is capable of.
It is a rather small game. The player is spawned in a room that fills up with 4,000 balls, which the player has to put into the big holes there. Global illumination is demonstrated in this demo: the various lights reflect off the balls so softly that it gives them an immediate color-bleeding effect. And all these balls in the air at the same time, moving constantly, still manage to keep the FPS respectable. The graphical quality of the scene is astonishing; maybe geometry instancing is in proper play here. The color bleed due to global illumination is awesome to look at; moreover, with global illumination the light bounces off other surfaces too.
Imagine the FPS with a thousand such balls in Crysis.
FRAPS is used for tests at 2 resolutions.
The start of the game, balls coming out from the sides; the count reaches 4,000 in a few seconds. I should have captured the screenie a few seconds later.
Aww... just look at 'em illuminate.
Some really nice shading effects are present in this demo, and I bet they surpass Crysis!
1280x720 - 24 fps
1920x1200 - 19 fps
[BREAK=Stalker w/ Global Illumination ON]
S.T.A.L.K.E.R. Global Illumination results
Stalker fans like S.T.A.L.K.E.R. a lot, but they are fans …and fans like…the fanned object a lot….. hmmm
It is a good game, and a love/hate one too; the hate part lessens after enabling all the candy and using global illumination.
I found Stalker giving very high FPS in most scenes at maximum in-game settings, 60+ at 1920x1200 in most gunfights on both cards. So I thought I must do something different yet important from a gaming/graphics point of view, something to redefine the bench scene.
For that I have enabled global illumination in Stalker. In the console, type:
r2_gi on
Global illumination does come with a performance hit, so I have clipped the rendering to an extent with this command:
r2_gi_clip 0.008 (0.001 is the default value; raising it reduces the beauty of the bounced light but brings gameplay to somewhat reasonable FPS. I balanced the clipping to avoid losing too much of the desired effect: values above this gave higher FPS but cut the multiple reflections down more than the step from the default to 0.008 did.)
Also used is the Float32 mod for Stalker, which modifies the shaders for better lighting/shading effects.
I also set the in-game AA slider fully to the left, meaning the least or no 'AA'; that slider isn't real AA anyway but a blurring effect, which I turned off. Real AA can be forced in Stalker via the control panel, but that reduced performance as much as the global illumination did, and I feel AA doesn't make much of a difference in Stalker at mid-to-high resolutions, or even lower ones; there are very few jagged edges. If anyone feels otherwise, or if I'm wrong about the jaggedness, I might add AA benches for Stalker.
The following are screenshots with global illumination on/off on the HD4850.
Windows XP x86 is used.
r2_gi off
r2_gi on
3 FPS with the default r2_gi_clip value on the HD4850. Wow. But see the beauty of those fine reflections; it's amazing to play this way even though it's poorly implemented. The Ping Pong demo is a nicer implementation.
The spiral passage is all lit up from everybody's gunfire! Global illumination is experimental!
off
on
off
on
FRAPS; the place used is Strelok's hideout, a gunfight on a spiral staircase. The GI on/off screenshots above are from the same area.
OK, the benchmark.
Errm, the Nvidia card rocks here. An 8800GTX supposedly beats the G92 when it comes to global illumination in Stalker, and the GT200 might just make Stalker playable with better global illumination clip values and more lights. There are other r2_gi parameters I left aside, e.g. r2_gi_photons, which also affects performance and the resulting GI. I found the clip cvar the best one to experiment with!
So global illumination in Stalker is rendered a lot better on Nvidia cards.
[BREAK=Crysis Cpu1 benchmark]
In the following Crysis benchmarks, various combinations of settings are used. It shouldn't be any trouble figuring out which, since I mention the settings on almost every graph. I will mention here those settings that aren't specified on a graph, or that I find important to highlight.
The Natural mod is used across all the benches, but not throughout all of the tested settings; it is used at a given resolution only where the graph specifies so.
Very High cvars: the cvars in the CVarGroups folder are manually edited to get the options that can otherwise only be enabled in DX10 mode. It is an easy method, and there are some more variables to put into a cfg file to get the complete DX10 effects on XP, well, almost. The part that cannot be enabled on XP is negligible in terms of IQ.
The TripleC Pack mod is also used where specified, installed with 'lv5' settings, the highest it offers. Do try it out.
Crysis CPU1 Benchmark results
For Crysis, the included benchmarks are used; no FRAPS. Any custom timedemos created manually in Crysis are devoid of AI and have problems, so running them is not a correct performance test.
The included benchmarks are good and have proper AI to calculate, so I used them. If anyone has a better idea, like using the HOC or Guru3D Crysis benchmark timedemos, then I'm sorry; I thought of this as the better way.
The graph pics alone should do the job of explaining the benches, but I have done some mentioning here too.
Crysis CPU1 included benchmark:
This is the least strenuous benchmark in Crysis; don't expect these high frame rates to show up all the time.
'No mblur' means motion blur is disabled via an r_motionblur=0 entry in a cfg file (it doesn't get disabled with the Very High cvars mod just by setting the in-game motion blur option to OFF).
XP x86 only.
*The GTS 512 is not even able to play at 1920x1200 8x AA (it should be 'even', but it's like the '4850 comes with free AA'), churning out less than 3 FPS, so the black bar is not present for it. The 4850 is the better performer with AA involved at almost all settings.
*The 1280x1024 16xQ, 1600x900 8xQ and 1600x1200 16x AA modes are not available in-game on the ATI 4850, so there is a blank space at those settings for the 4850.
*Also, not surprisingly, the 4850 is better at 1920x1200 with the Natural mod or when Very High cvars are used without AA. ATI's RV670 and RV770 perform comparatively better at Very High settings (or further-tweaked ones like the Natural mod's) than at High or Medium settings in Crysis, on top of the RV770's performance being better with AA enabled.
*This seems to be the least strenuous of the three Crysis benchmarks; it allows 1920x1200 8x AA on the 4850, at 8 FPS. It's 2-3 FPS for the GTS 512 though.
[BREAK=Crysis Gpu benchmark]
Crysis GPU Benchmark results
XP x86, XP x64 and Vista x64 are all used for this benchmark's results.
Where no OS is mentioned, it's the XP x86 version.
Where x64 is mentioned, it's XP x64 + 64-bit Crysis.
Where Vista x64 is mentioned, it's 64-bit Crysis on Vista x64.
Where Vista x86 is mentioned, it's 32-bit Crysis run on Vista x64.
This should clear up any confusion.
The Natural mod is used for every result.
Where I have not specified anything, it's XP x86, High + Natural mod.
E.g. '1920 1200 2x' means: 1920x1200, 2xAA, Windows XP x86 SP3, DX9 High, Natural mod installed (no Very High cvars or anything else, since none are specified).
*The 4850 is ahead of the 8800GTS 512 throughout, with minor performance differences, but has a major lead at 1600x1200 4x AA.
*However, unexpectedly, the difference shrinks at 8xAA at the same resolution.
*This bench allows 4x AA on the 4850 at 1920x1200; the GTS 512 is brought to its knees with around 5 FPS at those settings, so its bar is not present there.
[BREAK=Crysis Cpu2 benchmark]
Crysis CPU2 Benchmark results
This is the most strenuous Crysis benchmark; if you average 15 FPS in this one, consider Crysis playable for the most part, or at least quite a lot playable.
Natural mod + Very High cvars are used at the same time in this benchmark, for whatever settings it's tagged with.
TripleC Pack mod, Lv5 settings, used only for this benchmark, at some settings (no need to worry about all this; see the pics, I'm just doing my typing part).
*No AA comparisons here, but this benchie allows only 2x AA on the 4850 to keep FPS near/above the 10 FPS mark, unlike the previous two benches, which allowed 8x and 4x AA.
*The 4850 performs better in this bench at almost all settings.
[BREAK=IQ comparison: Crysis]
Crysis IQ tests
Natural mod 2.0.2
windows xp x86
high settings
1920 1200; 0x AA
4850, recovery level
8800GTS 512
4850, Contact level
8800GTS 512, Contact level.
Unexpectedly, the vegetation seems blurred out on the 4850, while on the 8800GTS 512 it's crisp. Maybe it's the edge AA in Crysis that is causing this trouble for the 4850. No real IQ differences apart from this. I was a bit surprised, since I expected the 4850's IQ to be better than the GTS 512's.
4850
GTS 512
4850
GTS 512
4850
GTS 512
[BREAK=OCing 'em - RV770 Pro 512 & G92 GTS 512]
Overclocking results
First,
the 8800GTS 512 from MSI, the OC edition.
Stock speeds are 678MHz core and 974MHz (x2) memory.
Ghetto mod
A Fox-2 blower attached to the heatsink, angled as shown above to blow air into it. A 120mm fan is added on top, moving air away from the heatsink.
OC with the default fan at 100% (no mod), with MX-2:
Core: 790MHz
Memory: 2050MHz
OC achieved with the ghetto mod, with MX-2:
Core: 815MHz
Memory: 2040MHz
The ghetto mod does something; I didn't expect that much of an overclock. Also note that even though MX-2 doesn't specify any cure time (time needed for the paste to reach its full potential), the card ended up overclocking better after some 10 hours of usage/gaming. It never went above 775MHz with the default fan/blower, cover on, at 100%.
the Sapphire HD4850 512 overclock
The fan fix out for the 4850 is limited to 700MHz through CCC. An AMD GPU overclocking utility has been released which allows taking the GPU/memory higher, but it still interferes with/overrides the profile settings, making the fan fix obsolete; maybe a new version of the utility will fix this. I decided to OC the card with the stock equipment I got, so to run the fan at 100% and still achieve a higher overclock, I removed the blue wire as shown in the pic. Man, that was noisy.
MX-2 thermal compound was also applied to the core, replacing the default goo. However, I overclocked the card with the default goo too; believing that every component used in the 4850 is of good quality, I figured maybe the goo would turn out good as well.
Default: 625MHz core / 993MHz (x2) memory
Overclock results with the stock goo:
Core: 760MHz
Memory: 2440MHz
The stock goo is good. Nice goo. Don't like that word very much though.
Overclock results with MX-2:
Core: 755MHz
Memory: 2360MHz
Maybe there was some mistake in the application, though hardly; I used the right amount needed to attain a sustainable overclock. Maybe the paste needs some time?
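For perspective, the relative headroom from those numbers works out as follows (clocks taken from above, in MHz; memory figures are effective rates):

```python
# Percent overclock gains on the HD4850, from the clocks listed above.
def gain_pct(stock, oc):
    """Overclock headroom as a percentage over the stock clock."""
    return (oc - stock) / stock * 100

print(f"core, stock goo: {gain_pct(625, 760):.1f}%")    # 21.6%
print(f"mem,  stock goo: {gain_pct(1986, 2440):.1f}%")  # ~22.9%
print(f"core, MX-2:      {gain_pct(625, 755):.1f}%")    # 20.8%
```

Over 20% on both core and memory with the stock cooler is a healthy result either way.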
[BREAK=Conclusion]
Sapphire HD4850 conclusion
Pros :yeah::
• Best card for the buck
• Great performance
• Priced aggressively; might get cheaper in a short while
• Availability
• Better IQ than Nvidia in some titles
• Good stock-cooler build and material quality
• Good overclocking potential (an aftermarket cooler will make a nice combination, with a lot less drumming than the 100% fan plays)
• Runs very cool at idle: 35C (100% fan speed)
• Great performance potential architecture/specification-wise; drivers might help unleash some more
• More future-proof... :ahem:
• Runs quiet at the default fan speeds
Cons :ashamed: :no:
• Runs hot at the default fan speeds
• Overclocking is limited via CCC (700/1200), and a customized fan speed/OC profile gets overridden by AMD's own utility
• Some games are still devoted to Nvidia
• Trouble forcing AA via CCC
• No extras in the bundle (Sapphire)
Thanks to you for reading, and thanks to me for buying the cards :S
Has it helped you with your buying decision? Is my benching procedure with FRAPS right? Did you find out anything new you weren't aware of? Let me know what you think!
Suggestions/criticism are welcome
Prathamesh/Fcry64