CPU/Mobo AMD Bulldozer Discussion Thread

Yup, the reviews are in and they all have one thing in common: every reviewer has been left underwhelmed by Bulldozer's performance.

It's just as I had guessed: the FX 8150 slots between the i7 2600K and i5 2500K in price and performance, though 80% of the time the FX is closer to the i5 in performance than to the i7.
 
This is fail... big-time fail. Not able to match the i7... barely able to catch the i5... just about (or not quite, depending on which review you read) able to catch the old-gen PII... at almost double the power consumption... forget it brother, AMD, you don't get my money.

And this is from someone who has been using AMD for the last 13 years...

Oh and I bet that there are going to be a lot of failed AMD mobos soon...
 
^^ well said Mav.

This really is disappointing...

And I can't believe AMD had to ruin the FX brand with these processors...

I too used a lot of AMD before 'Wolfdale', and was hoping that this time AMD could give Intel a run for its money, but this is highly disappointing.

Someone said above that it should be 12K here....

And who's gonna buy it for that much when you can get the 2500K for about the same?

AMD, this is a big letdown... an old Hindi saying, K.L.P.D., big time!!!
 
Waited so long for this upgrade, and well, 'all dreams shattered'... Sandy Bridge gets my money.


mrcool63 said:
In the end we are all gamers right.. game performance matters
Bro, gaming performance mainly depends on the GPU, so an AMD/ATI GPU + Intel CPU is what a gamer wants.
 
If they had used the old K10 architecture and added two extra cores, an AMD 8-core processor would have been able to manage 43 to 45 fps in the second pass, which would have been a huge improvement, and in the Cinebench R11.5 benchmark an 8-core K10 processor would easily have scored 8 to 9 points... Clock-for-clock performance of the BD processors is lower than the Phenom's. AMD used an architecture for its new-generation processor that is not only slower clock for clock than the previous Phenom but also consumes more power than the SB processors... :facepalm:
 
Not impressed at all... gaming performance is also disappointing; even an overclocked FX 8150 @ 4.6GHz trails significantly behind an Intel 2600K at stock speeds :no: Since I use my PC for gaming only, I am more concerned with gaming performance.

Far Cry 2 and Crysis 2

AMD FX 8150 processor review
 
I think AMD needs to come out with a CPU driver so that its cores can be used more effectively.
I don't think Bulldozer is the best performer, but I think it can perform better if we get software optimized to use its core modules. But that isn't going to happen any time soon, so it's hardly a compelling buy.

But for someone on an older AMD platform it'll make a decent upgrade option. I'm on an ancient Athlon X2 3600+ and an AM2+ (DDR2) motherboard. I'll pop an AM3 Athlon II X4 into my old motherboard soon, then upgrade to a 990-chipset motherboard later, and finally to a Bulldozer or Piledriver CPU.
Going the AMD way only makes sense to people doing slow upgrades like me.
 
^^

I doubt you actually get CPU drivers. It could be a BIOS flash, or chipset drivers which interconnect the devices hooked onto the motherboard.
 
^^The trouble is that they would have run out of room.

There are a few issues we are not looking at.

1. Most games where the huge leads show up are tested at 1024x or 1680x. You can't really be a serious gamer if you play at those resolutions, can you? At higher (and more realistic) resolutions the CPU impact is far lower, and apart from a few games the gap between CPUs is pretty much non-existent - even the Phenom 980 performs the same as the 2600K at 2560x in Metro 2033, for example, let alone the FX. Those scenarios are GPU-limited, and are more appropriate for gamers. At 1920x, an 8150 performs identically to a 2600K in Crysis 2 as per the Guru3D review. I don't see any reason to call it inferior there, but maybe it's just me. Or maybe I'm blind and am the only one who can't make out the difference between 100 and 200fps, given my monitor only goes to 60Hz anyway. I congratulate those of you on this thread who play games at 1024x on your 'superior' experience with better CPUs :)

2. The target clock speed for the FX range was supposed to be 30% over Thuban. They didn't manage to hit it, presumably because of foundry issues. Obviously that badly hurt their performance in single-threaded and lightly threaded apps. In heavily threaded apps with lots of FP load, the FX does compete equally with the 2600K. Had they hit the target clock speed, things would have looked different.

3. If they had added two cores to the 1100T the motherboard would have blown some power circuitry even if rated to 140W. Clearly the approach required was different. Lengthen the pipe and pump up the clock. [Incorrect information mentioned here earlier, now edited]

4. For those about to buy, my observation is that if you are throwing random loads at the PC, an SB is the way to go. If you are a gamer with a proper monitor, you can get by with a Phenom II 980 or pretty much anything with a decent clock speed around 3GHz; architecture and vendor matter little. The only valid application of BD is a fully professional content-creation environment with a lot of heavily threaded workload, where the BD competes with the 2600K at a lower price.

5. One of the biggest issues that all the review sites do uncover is how badly Win 7 handles BD. The OS does not understand that the two INT cores in a module share resources, and this can lead to a big performance hit. HT, on the other hand, is something the OS knows about, so Win 7 knows which logical processors share a physical core and schedules around it. In dev builds of 8 (which schedule BD correctly) you can already see increases of up to 15% on existing benchmarks - maybe just about enough to compete with the 2500K. Unfortunately, by the time 8 is out the 2500K will have been retired. Wherever AMD is at that point in time, we'll have to evaluate the situation then. (A rough sketch of what module-aware scheduling amounts to follows this list.)

6. I have two big disappointments. One is the big hit on IPC over the previous generation of chips. This is not a new thing - Intel went through it twice, and more recently AMD took a similar hit moving from VLIW5 to VLIW4 in the transition from the 58xx to the 69xx cards, which it could only overcome by adding more resources on the card itself. BD has done pretty much the same thing. The second disappointment is the power consumption, which I am hoping is a stepping issue and will be somewhat addressed in the shipping product, because it hurts BD more than the raw performance numbers do.
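As mentioned in point 5, here's a minimal sketch of what module-aware scheduling boils down to, done by hand from user space. It assumes the third-party psutil package, and it assumes Windows enumerates the two integer cores of each FX-8150 module as adjacent logical CPUs (0/1, 2/3, 4/5, 6/7) - that numbering is my assumption, not something any of the reviews confirm.

```python
# Rough user-space approximation of a module-aware scheduler, not AMD's or
# Microsoft's actual fix. Pins the current process to one integer core per
# module so lightly threaded work doesn't land on two cores that share a
# module's front end and FPU. Requires the third-party psutil package.
import psutil

ONE_CORE_PER_MODULE = [0, 2, 4, 6]  # assumed numbering: first core of each of the 4 modules

proc = psutil.Process()             # the current process
print("affinity before:", proc.cpu_affinity())
proc.cpu_affinity(ONE_CORE_PER_MODULE)
print("affinity after: ", proc.cpu_affinity())
```

Obviously this only helps loads with four threads or fewer; once all eight cores are busy there's nothing left to steer.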

All in all yes, a pretty disappointing release from AMD. I think someone dropped the ball very badly on this and all they can do to fix it is starve the supply and work quickly on the next versions of the chip.

This is a difficult choice to make in my situation. Clearly the older Thubans deliver equivalent high-resolution gaming performance to Sandy Bridge, and since I don't use any professional apps at all it's tempting not to upgrade at all (or to just drop in a better CPU than my aging X2 550) till IB releases and/or AMD saves (?) their business.

@asingh, you do get CPU drivers. One example is the Dual-Core Optimizer that was needed around the time the Athlon dual-core series launched, as apps did not know how to use the second core even if the OS did.
 
So, much like Llano was just a preview, with Trinity coming next year as the main excitement, AMD repeats the show with Bulldozer: it drops something that could definitely have been better, that will be better on Windows 8, and whose successor, Piledriver, again coming in 2012, will be the main attraction.

That leaves this year pretty dry for AMD (Llano's yields being pathetic, overshadowing its success).
 
3. If they had added two cores to the 1100T the motherboard would have blown some power circuitry even if rated to 140W. Clearly the approach required was different. Shorten the pipe and pump up the clock.

I disagree with this statement...

Does the 16-core Interlagos chip blow up server motherboards? If AMD had wanted to release the new processors on the old K10.5+ architecture, they would have done it. Last year they managed to produce a 6-core 95W processor on the 45nm process, and with the 32nm process an 8-core Phenom wouldn't have been a difficult task for them. But they went ahead with the new architecture instead.

And if they had added two extra cores to the 1100T and released that as a new processor, it definitely would have performed better than the 8150, you can rest assured. People who wanted to build a rendering or video-editing rig would definitely have gone with that processor, because it would have offered more performance per dollar than the 8150. The 1100T manages 33 fps in the x264 benchmark; with two added cores that number would have gone up to 43-45. In Cinebench R11.5 the 1100T scores 5.98; with two extra cores that would have risen to around 8 points.

What I am trying to convey is that people who don't need the best processor on the planet, but want one that finishes their tasks (rendering, video encoding, etc.) ASAP, would definitely have gone with it, just the way many people ditched the i7 860 and i7 920 and went with the 1090T because it was cheaper and performed better in the tasks they were building a rig for. By adding two extra cores, AMD might even have given the i7 980X a run for its money...
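Just to show where those 43-45 fps and ~8-point figures come from, here is the back-of-envelope version, purely illustrative and assuming perfectly linear scaling from six to eight K10 cores at the same clocks (which ignores memory and cache contention):

```python
# Naive linear core-scaling estimate for a hypothetical 8-core K10 chip,
# using the 1100T numbers quoted above. Illustrative only.
x264_fps_1100t = 33.0       # x264 2nd-pass fps quoted above for the 6-core 1100T
cinebench_1100t = 5.98      # Cinebench R11.5 score quoted above
scale = 8 / 6               # assume throughput scales linearly with core count

print(f"x264 2nd pass:   ~{x264_fps_1100t * scale:.0f} fps")    # ~44 fps
print(f"Cinebench R11.5: ~{cinebench_1100t * scale:.2f} pts")   # ~7.97 pts
```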
 
@Hades, the 1100T is already at 125W. Do the math. 20 watts per core, two cores add 40W to the dissipation. No mobo can handle 160W in stock form and the market for such a CPU would be very limited, and the motherboards much more expensive. Then you have to leave room for turbo, and for overclocking. You're then looking at a motherboard to handle about 200W of heat, so the cost advantage the AMD platform has traditionally enjoyed would be wiped out totally.
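A quick worked version of that math, again purely illustrative, assuming the 1100T's 125W budget splits roughly evenly across its six cores and ignoring uncore power:

```python
# Back-of-envelope TDP for a hypothetical 45nm 8-core Thuban.
# Assumes per-core power is simply TDP divided by core count.
TDP_1100T = 125                          # watts, 6-core Thuban
watts_per_core = TDP_1100T / 6           # ~20.8 W per core
hypothetical_x8 = TDP_1100T + 2 * watts_per_core
print(f"~{hypothetical_x8:.0f} W")       # ~167 W, well past a 140W AM3 board
```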

Plus, such a chip could not have been the first in its generation, but the last. Even if a die shrink down to 32nm got you a 10% power saving (unlikely), you're still looking at over 150W. At a 20% power saving, you would still not be within the 140W power envelope of a modern mobo. Frankly, they needed a new architecture. This one may be it, or it may not be. At least it's not a rehash. The trouble with it is twofold - one is the lower IPC (assuming they will fix that with higher clock rates in the future) and the other is the power consumption (I wouldn't hold my breath on that one; the track record always shows otherwise).

As for the Interlagos - server CPU selection is never made on absolute performance. It's always about power efficiency and cost efficiency, given the massive parallelism required for servers. I'm sorry, but you can't compare server CPU choices with desktop ones; the two are totally different animals.

My point is simple. The BD is a different compromise. It's a compromise for sure, but IMO you can either choose the super-hobbled Z68 with a super CPU, or a bells-and-whistles-laden 990FX with lower CPU performance from a Thuban or Bulldozer. It will depend on your situation and choices. I'm pretty sure the Thuban could never have become 8 cores with fully independent caches; it would simply have been too large a processor.
 
By using the 32nm process they could have produced a Phenom II X8. By producing it on 32nm they would not only have been able to lower power consumption but also squeeze in extra cores (extra transistors). For example, the Gulftown processor (980X/990X): Intel managed to produce a beast of a chip on 32nm, added a whopping 12MB of cache (which takes a lot of die area) plus HT, and still kept the TDP to 130W. If AMD had been able to lower per-core power from 20 watts to 15-16 watts (x8 = 120-128W) using 32nm, they could easily have produced a Phenom II X8 with a 130W TDP.

Plus, IINM, Phenom cores share the L3 cache; they don't need independent L3. And the two-cores-per-module thing (AMD's answer to Intel's Hyper-Threading) really didn't help AMD increase performance, not even per-clock performance; the last-generation Phenom performs better. IMO, even if they had only slightly tweaked the old K10 architecture to improve per-clock performance, they could have produced a 6-core processor that ended up faster than Intel's quad-core offering with Hyper-Threading.
 