Graphic Cards | Is it worth it to buy a 9070 XT now?

I am aware of the vBIOS flash, but I never achieved stability at lower voltages at my desired frequency on an AMD GPU. They probably use lower-quality silicon, which seems highly likely, since they can't sell them well anyway.
None of what you said made sense to me. Are you talking about undervolting here? Results can differ from card to card, and just because you couldn't achieve stability at a desired frequency curve doesn't mean the same is true for others. It could also be that you were expecting too much from the GPU and dialing in aggressive numbers. Also, AMD sources its silicon from TSMC just like NVIDIA does (and from GlobalFoundries too, but I don't know the split), so I doubt they use lower-quality silicon.
Back then, the number one reason AMD cards didn't sell well was their buggy drivers, which meant every driver version, game, or piece of software was a gamble, and nobody liked that. Now it's the opposite, with NVIDIA releasing buggy drivers (just head over to their subreddit), but because NVIDIA GPUs have good ray tracing, CUDA, AI, DLSS, frame gen and whatnot, people still stick with them.
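For what it's worth, undervolting is usually dialed in iteratively rather than guessed: lower the voltage a step at a time at a fixed clock, stress test, and back off as soon as a run fails. A minimal sketch of that loop, assuming a hypothetical run_stress_test() hook, since there is no universal API for this; in practice you set the voltage in Adrenalin or Afterburner and run your own benchmark loop at each step:

```python
# Minimal undervolt-sweep sketch. The stress-test hook below is hypothetical:
# replace it with a manual step (set voltage in the driver, run a benchmark,
# note whether it crashed, artifacted, or reset).

def run_stress_test(voltage_mv: int, clock_mhz: int) -> bool:
    """Hypothetical hook: return True if the run completed cleanly."""
    raise NotImplementedError("replace with your own stress/benchmark run")

def find_stable_undervolt(stock_mv: int, clock_mhz: int,
                          step_mv: int = 25, floor_mv: int = 900) -> int:
    """Walk the voltage down in fixed steps and return the last value that passed."""
    last_good = stock_mv
    voltage = stock_mv - step_mv
    while voltage >= floor_mv:
        if not run_stress_test(voltage, clock_mhz):
            break                   # first failure: stop and keep the previous step
        last_good = voltage
        voltage -= step_mv
    return last_good                # e.g. 1075 mV from a 1200 mV stock setting

# Example: find_stable_undervolt(stock_mv=1200, clock_mhz=2400)
```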
 

Before anyone jumps in with "but doesn't the 9070 XT have more physical shaders?"

My guess is that the 9070 XT's architecture isn't making proper use of all its shaders.

Also, the BIOS flash seems to allow the 9070 to OC higher than the XT's base clock by around 200 MHz, which is making up for the hardware gap.

Case in point - the 3080 can be OC'd to match a stock 3080 Ti (I can personally attest)

I remember the Fury closed the gap with the Fury X similarly.

It's nice to see some old-school vBIOS performance modding again after a long time
But one point: if the price difference between the XT and non-XT isn't that big, the XT is the safer bet, because you get the target performance out of the box without the headache of fiddling around. The vBIOS flash and all that makes more sense if the non-XT card gets a massive price cut in the future, which I think will happen.
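To put rough numbers on the clock-vs-shader-count trade-off from the quoted post: peak shader throughput scales with shader count × clock, so a cut-down die can close most of the paper gap if it clocks high enough. A back-of-envelope sketch; the shader counts and clocks below are assumptions for illustration, not measured figures:

```python
# Rough FP32 throughput: 2 ops per shader per clock (FMA) * shaders * clock.
# All numbers are illustrative assumptions, not official specs.

def tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000.0

xt_stock  = tflops(4096, 2.97)   # assumed 9070 XT boost clock
non_xt    = tflops(3584, 2.54)   # assumed 9070 boost clock
non_xt_oc = tflops(3584, 3.10)   # hypothetical post-flash overclock

print(f"9070 XT stock : {xt_stock:.1f} TFLOPS")
print(f"9070 stock    : {non_xt:.1f} TFLOPS")
print(f"9070 flashed  : {non_xt_oc:.1f} TFLOPS")
# The overclock recovers much of the gap on paper; real-game scaling also
# depends on memory bandwidth and how well the extra clocks are fed.
```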
None of what you said made sense to me. Are you talking about undervolting here? Results can differ from card to card, and just because you couldn't achieve stability at a desired frequency curve doesn't mean the same is true for others. It could also be that you were expecting too much from the GPU and dialing in aggressive numbers. Also, AMD sources its silicon from TSMC just like NVIDIA does (and from GlobalFoundries too, but I don't know the split), so I doubt they use lower-quality silicon.
Back then, the number one reason AMD cards didn't sell well was their buggy drivers, which meant every driver version, game, or piece of software was a gamble, and nobody liked that. Now it's the opposite, with NVIDIA releasing buggy drivers (just head over to their subreddit), but because NVIDIA GPUs have good ray tracing, CUDA, AI, DLSS, frame gen and whatnot, people still stick with them.
He was talking about binning. Binning is a real thing, and higher SKUs do get better-binned silicon. No need to question his knowledge.
 
He was talking about binning. Binning is a real thing, and higher SKUs do get better-binned silicon. No need to question his knowledge.
I wouldn't use the term "lower quality silicon" to describe the binning process. Binning is about sorting dies into different tiers because not all of them come out fully functional; they are still made with the same quality of silicon across the board, unless the source changes partway through manufacturing.
Also, even with high-quality silicon the yield can be bad due to other factors in manufacturing, like the size of the die, how new the process is, and what they are trying to achieve with the yield.
And while lower-quality silicon might reduce the yield for the die manufacturer, that doesn't matter to us consumers: you are still getting what you were promised.
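To make the die-size point concrete, a common first-order approximation is the Poisson yield model, Y = exp(-A·D0), with A the die area and D0 the defect density: the same wafer quality gives very different yields once the die gets big. A small sketch with made-up numbers (not actual foundry figures):

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """First-order Poisson model: fraction of dies that come out defect-free."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

d0 = 0.1  # illustrative defect density, defects per cm^2
for name, area_mm2 in [("small die", 200), ("large die", 600)]:
    y = poisson_yield(area_mm2 / 100.0, d0)
    print(f"{name} ({area_mm2} mm^2): ~{y:.0%} defect-free dies")
# Same silicon, same defect density: the larger die yields far fewer fully
# working chips, which is exactly what cut-down (lower-binned) SKUs absorb.
```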

Performance differences due to binning are real, but in this day and age, where manufacturers ship processors tightly clocked and whatever headroom is left gets pushed further by motherboards with auto-OC features like PBO, and by GPUs with their factory-OC editions, you aren't really seeing a difference between bins anymore.
I never questioned his knowledge; for all I know he is more knowledgeable than me. I just read what he said and couldn't understand what he was talking about, especially the last part where he said AMD doesn't sell well because they use lower-quality silicon.
I hope I'm not starting an argument here just for the sake of it, but rather a discussion.
 
I wouldn't use the term "lower quality silicon" to describe the binning process. Binning is about sorting dies into different tiers because not all of them come out fully functional; they are still made with the same quality of silicon across the board, unless the source changes partway through manufacturing.
Also, even with high-quality silicon the yield can be bad due to other factors in manufacturing, like the size of the die, how new the process is, and what they are trying to achieve with the yield.
And while lower-quality silicon might reduce the yield for the die manufacturer, that doesn't matter to us consumers: you are still getting what you were promised.

Performance differences due to binning are real, but in this day and age, where manufacturers ship processors tightly clocked and whatever headroom is left gets pushed further by motherboards with auto-OC features like PBO, and by GPUs with their factory-OC editions, you aren't really seeing a difference between bins anymore.
I never questioned his knowledge; for all I know he is more knowledgeable than me. I just read what he said and couldn't understand what he was talking about, especially the last part where he said AMD doesn't sell well because they use lower-quality silicon.
I hope I'm not starting an argument here just for the sake of it, but rather a discussion.
You need to learn more instead of being arrogant. Binning does mean sorting silicon by its quality, hence his use of the term "higher quality silicon" or "better silicon". This is why, more often than not, higher-MSRP products are able to OC higher than the cut-down variants of the same die at a nominal voltage.

Please educate yourself on the subject of binning and what it "basically" means.
 
You need to learn more instead of being arrogant. Binning does mean sorting silicon by its quality, hence his use of the term "higher quality silicon" or "better silicon". This is why, more often than not, higher-MSRP products are able to OC higher than the cut-down variants of the same die at a nominal voltage.

Please educate yourself on the subject of binning and what it "basically" means.
Is there a reason you are being this provocative, with a 'holier-than-thou' approach to the conversation?

Even in your sales thread, you were extremely demeaning in some of your replies.

At no point was he being arrogant, and he was respectful and detailed in his response. Doesn't sit well with you? Well and good. Does it give you the right to respond in this manner? No.

May I remind you that you are new to this forum, and there is a modicum of etiquette that is maintained here. I would recommend a more nuanced and logical approach in your responses, without resorting to personal attacks.

OLX/Reddit-style disrespectful responses tend not to fit into a forum such as this.
 
You need to learn more instead of being arrogant. Binning does mean sorting silicon by its quality, hence his use of the term "higher quality silicon" or "better silicon".
Can we not have a civil discussion without hurling words at each other? You realise TSMC and similar manufacturers buy their silicon wafers from suppliers, right? They aren't in the silicon manufacturing business; they're in the semiconductor business, making chips out of silicon wafers. They purchase wafers in bulk, and if they get a bad yield they don't call out their suppliers and say "oh, you sold us lower-quality silicon wafers"; they improve the yield by improving the process. This happens every time they manufacture a next-gen chip, by the way: the initial batches always produce more lower-binned dies than higher-binned ones, and nobody would say that happened because of "lower quality silicon".

Also, we're getting sidetracked here. I think he confused silicon wafers with silicon (or maybe I assumed he meant silicon when he actually meant silicon wafers). If we assume that, then he "basically" said "AMD uses lower-binned dies (which is what lower-quality silicon wafers yield, since "lower" means they couldn't meet the standard to be categorized as a higher bin) in their GPUs, which is why they don't sell well." Then explain to me why AMD has so many GPUs in their lineup. It's not like Intel, where this could be true because they literally released five GPUs and called it a day; AMD has launched a wide range of GPUs, from trash tier to higher-end ones like the 7900 XTX or 9070 XT. Looking at just the RX 6000 series, they had 11 GPUs readily available to us consumers (more if you count GPUs made for laptops, or those released in limited quantities or only in specific countries, not that I know of any personally).
This is why, more often than not, higher-MSRP products are able to OC higher than the cut-down variants of the same die at a nominal voltage.

Please educate yourself on the subject of binning and what it "basically" means.
I thought they were able to achieve higher OCs because of the better components used on their PCBs? In the end the die is the same, right? Now, I don't know whether AIB manufacturers pay for higher-binned dies, but it seems unlikely for two reasons:
1. AIB manufacturers already have such thin profit margins that I don't see why they would pay more for similar performance (just check the performance difference between the cheaper cards and the more expensive ones of the same GPU; it's a few percent).
2. If they were paying more for higher-binned dies, why not just move their GPU up a tier, for example from a 3080 to a 3080 Ti (both are binned from the same GA102 chip)? They would make more money that way.

Now, to counter my second point: I know there are variations in what counts as a "lower bin" or "higher bin" depending on which metrics you use. For example, some chips could be considered higher-binned if you only measured how well they reach higher frequencies at lower voltages, yet they might still have enough defects to miss the "higher binned" category. Getting one of those as a consumer is essentially winning the "silicon lottery", but they are so rare and few that I wouldn't call it "AMD using lower quality silicon" when that doesn't happen to you (you being @CasualGamer91).
I apologize if I have derailed this thread by making such a long post.
 
Are you guys outsourcing replies with the help of ChatGPT? @YeAhx and @PunkX 75
You're welcome to run an AI-text checker.

Speaking for myself, and saving you some trouble :joyful:

 

Before anyone jumps in with "but doesn't the 9070 XT have more physical shaders?"

My guess is that the 9070 XT's architecture isn't making proper use of all its shaders.

Also, the BIOS flash seems to allow the 9070 to OC higher than the XT's base clock by around 200 MHz, which is making up for the hardware gap.

Case in point - the 3080 can be OC'd to match a stock 3080 Ti (I can personally attest)

I remember the Fury closed the gap with the Fury X similarly.

It's nice to see some old-school vBIOS performance modding again after a long time
Pretty decent gains in CP 2077!

 
Does this come with the risk of bricking the card permanently?
It would, as with any form of modding/flashing. You need to know what you are doing; this isn't something for most users.

Although permanent bricking isn't common, as there are ways of reverting to the original vBIOS if something goes wrong.
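On the "know what you are doing" point: one sanity check worth running on a dump before flashing is confirming it at least looks like a valid PCI option ROM (the 0x55AA signature and an 8-bit checksum over the declared image length that sums to zero). A rough sketch of that check, assuming a plain ROM dump file; it is not a substitute for the flashing tool's own validation, and it cannot tell you whether the image matches your specific board:

```python
# Rough pre-flash sanity check for a vBIOS dump. Verifies the legacy PCI
# option ROM signature and 8-bit checksum only; it does NOT confirm that
# the image is appropriate for your particular card.

def check_option_rom(path: str) -> None:
    data = open(path, "rb").read()
    if data[0:2] != b"\x55\xAA":
        raise ValueError("missing 0x55AA option ROM signature")
    length = data[2] * 512                      # declared length, 512-byte blocks
    if length == 0 or length > len(data):
        raise ValueError("declared image length is implausible")
    if sum(data[:length]) % 256 != 0:
        raise ValueError("8-bit checksum over the image does not sum to zero")
    print(f"Looks like a well-formed option ROM ({length} bytes checked)")

# Example: check_option_rom("backup_9070.rom")
```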
 
FSR 3.1 is very close to DLSS now, and they keep updating and improving it. If I were playing on an NVIDIA card I would obviously still run DLSS, but for AMD cards FSR is a godsend, and remember it still runs on all GPUs, so if DLSS in a particular game is causing you issues (ghosting/trails, artifacts, etc.) then FSR saves the day.
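For context on what the quality presets actually do: both FSR and DLSS render internally at a fraction of the output resolution and then upscale, with Quality/Balanced/Performance corresponding to roughly 1.5x/1.7x/2.0x scale factors (Ultra Performance is 3.0x). A quick sketch of what that implies at 1440p, using FSR's published ratios; treating DLSS's presets as essentially the same is an assumption worth checking per game:

```python
# Internal render resolution for common upscaler quality presets.
# The ratios follow FSR's published per-axis scale factors.

SCALE = {"Quality": 1.5, "Balanced": 1.7, "Performance": 2.0,
         "Ultra Performance": 3.0}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    s = SCALE[mode]
    return round(out_w / s), round(out_h / s)

for mode in SCALE:
    w, h = render_resolution(2560, 1440, mode)
    print(f"{mode:>17}: {w}x{h} -> 2560x1440")
# Quality at 1440p renders around 1707x960, which is why the reconstruction
# quality (ghosting, sharpening, stability) matters so much to the final image.
```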

As for pricing, AMD's might seem high, but that's because it's NVIDIA who won't keep prices at an expected level. They really don't care, since they aren't profiting much from consumers; their main profits come from the data center sector.


If I read correctly, the vBIOS flash unlocks the OC limits AMD set, raising them to the RX 9070 XT's, which leads to higher overclocking potential; it doesn't unlock "cores", as those are still faulty (which is what made the card a 9070 rather than its XT variant).
I hope you meant FSR4. It's close, and unfortunately only on the 9070 series.
NVIDIA's prices are justifiable... they are being generous to sell a 5070 at 60k including GST. DLSS works really well, runs cooler, eats less power.

You've never owned an AMD GPU, dude, stop being delusional. AMD GPUs suck: they run hotter and undervolting is unstable too. Board manufacturers like ASUS, Zotac, Gigabyte and MSI cost-cut heavily on AMD graphics cards. Everybody knows the stock isn't going to be cleared unless there's a chip shortage...

Let the market speak for itself
Holy **** dude. Wtf happened here!? I'm sure you must have been annoyed with someone to rant like this.

I have a 6700 XT myself, undervolted to 1075 mV (stock being 1200 mV), and no bro, it runs cool and quiet and stable with this undervolt. Don't get agitated, man.
 
I hope you meant FSR4. It's close, and unfortunately only on the 9070 series.
No, I was comparing FSR to DLSS. DLSS 4 and FSR4 are built around frame gen and of course don't work on GPUs other than the intended ones. I don't know how good the frame gen on either side is, so I can't say, but I do know FSR and DLSS, and I have found them to be a mixed bag overall. Ghosting/trailing around characters is more prominent with FSR than DLSS, and FSR, trying to compete with DLSS, sometimes oversharpens the image, which comes out grainy. FSR has its advantages, like being able to run on any GPU and being easier to implement.
I think they should have given the frame gen tech a separate name. This only confuses people.
 
FSR4 is a new ML based upscaler. It is very different from FSR3.1.
I'm not talking about FG.
No, I was comparing FSR to DLSS. DLSS 4 and FSR4 are built around frame gen and of course don't work on GPUs other than the intended ones. I don't know how good the frame gen on either side is, so I can't say, but I do know FSR and DLSS, and I have found them to be a mixed bag overall. Ghosting/trailing around characters is more prominent with FSR than DLSS, and FSR, trying to compete with DLSS, sometimes oversharpens the image, which comes out grainy. FSR has its advantages, like being able to run on any GPU and being easier to implement.
I think they should have given the frame gen tech a separate name. This only confuses people.
 
FSR4 is better than DLSS 3 but still inferior to DLSS 4. The bigger problem is FSR4's backward compatibility: if AMD keeps it restricted to the Radeon 9000 series, devs won't adopt it.
Unlikely anytime soon.

From what I know and have read elsewhere, FSR4 uses instructions that aren't available/supported on the 7000 series (FP8).

Maybe they'll emulate it in software or figure out an alternative path, but considering how long it takes them to add ROCm support (it took about a year for the 7000 series, I think), I wouldn't hold my breath.
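To illustrate why the FP8 point matters: FP8 (for example the E4M3 format) has only a 3-bit mantissa, so a model built to run in it leans on hardware that executes those narrow types natively; on older cards you would need conversion or a different code path. A toy sketch of E4M3 rounding, enumerating the representable magnitudes rather than doing real bit manipulation, just to show how coarse the format is:

```python
# Toy model of FP8 E4M3 (4 exponent bits, 3 mantissa bits, bias 7):
# enumerate every representable magnitude, then round inputs to the nearest one.

def e4m3_values():
    vals = {0.0}
    for m in range(1, 8):                      # subnormals: (m/8) * 2^-6
        vals.add(m / 8 * 2.0 ** -6)
    for e in range(1, 16):                     # normals
        for m in range(8):
            if e == 15 and m == 7:             # this encoding is reserved for NaN
                continue
            vals.add((1 + m / 8) * 2.0 ** (e - 7))
    return sorted(vals)

E4M3 = e4m3_values()

def to_e4m3(x: float) -> float:
    """Round |x| to the nearest representable E4M3 magnitude."""
    return min(E4M3, key=lambda v: abs(v - abs(x)))

for x in [0.1234, 1.05, 100.0, 300.0]:
    print(f"{x:>8} -> {to_e4m3(x)}")
# With only 3 mantissa bits, 1.05 rounds to 1.0 and 100.0 to 96.0; networks
# trained for this precision assume the hardware can run it at full speed.
```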
 
FSR4 is a new ML based upscaler. It is very different from FSR3.1.
I'm not talking about FG.
Well, FSR4 has frame gen capabilities as well and only works on AMD's newer cards, just like NVIDIA and their DLSS 4, so you can see why I would try to compare apples to apples and oranges to oranges. And yes, FSR4's image quality is really good; it seems AMD has been able to fix FSR (remember how bad FSR 1 was? It was a joke) and close the gap. My initial comparison (if you can call one sentence that) was between FSR 3.1 and DLSS 3.0. The FSR you get from software like Lossless Scaling is bad, and the performance cost is not acceptable for the result you get.
 
Try OptiScaler and use XeSS with DLSS inputs on an AMD 6000 or 7000 series card, then compare it with the FSR 3.1 implementation in the same game. I have a 6700 XT, and in Alan Wake 2 and Cyberpunk you can clearly see just how much better even XeSS looks compared to FSR 3.1/3.0.
FSR4 is way closer to DLSS, but dropping support for older generations is sad.