AMD FSR: Extended Life for Your Nvidia 10 Series GPU

roofrider

Well-Known Member
Disciple
HUB's Tim: "...no mention of AI/temporal, although I would be surprised if there wasn't a temporal element that used information from multiple frames." Time will tell.
That particular image has FSR at Quality Mode rather than Ultra Quality though, if that matters. Edit: ah, this was already brought up, my bad.

So this means our bois are going to keep selling used 10 series cards at high prices? Shhh, keep this a secret.
 

t3chg33k

Well-Known Member
Adept
roofrider said:
HUB's Tim: "...no mention of AI/temporal, although I would be surprised if there wasn't a temporal element that used information from multiple frames." Time will tell.
That particular image has FSR at Quality Mode rather than Ultra Quality though, if that matters. Edit: ah, this was already brought up, my bad.
So this means our bois are going to keep selling used 10 series cards at high prices? Shhh, keep this a secret.

The RTX cards have dedicated Tensor cores for DLSS. Without that extra silicon, older cards can't realistically run real-time machine learning without eating significantly into the existing shader cores. This simply seems to be "one size fits all" algorithmic image processing. If it cannot add accurate information to the image when upscaling, which DLSS 2.0 does, it is unlikely to match native rendering, especially across different types of scenes in different types of games.
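For anyone wondering what "algorithmic image processing" means in practice, here's a minimal sketch of a purely spatial upscaler: a 1-D Lanczos resample in Python/NumPy. To be clear, this is just an illustration of the category — AMD hasn't published FSR's actual filter, so the kernel choice and the names here (`lanczos`, `upscale_1d`) are mine, not AMD's. The point is that every output pixel is a fixed weighted blend of nearby input pixels; nothing is learned and no frame history is used.

```python
import numpy as np

def lanczos(x, a=2):
    # Lanczos kernel: sinc(x) * sinc(x/a) inside the window, 0 outside.
    x = np.asarray(x, dtype=np.float64)
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) >= a] = 0.0
    return out

def upscale_1d(row, scale, a=2):
    # Resample a row of pixels to `scale` times its length, using only
    # spatial neighbours -- no temporal data, no trained network.
    n = len(row)
    out = np.empty(int(n * scale))
    for i in range(len(out)):
        src = (i + 0.5) / scale - 0.5           # source-space coordinate
        lo = int(np.floor(src)) - a + 1
        taps = np.arange(lo, lo + 2 * a)        # 2a nearest source pixels
        w = lanczos(src - taps, a)              # weights before clamping
        taps = np.clip(taps, 0, n - 1)          # repeat pixels at borders
        out[i] = np.dot(w, row[taps]) / w.sum() # normalised weighted sum
    return out

# 4 input pixels -> 8 output pixels; every output value is a fixed
# weighted blend of its neighbours, whatever the image content is.
print(upscale_1d(np.array([0.0, 1.0, 1.0, 0.0]), 2.0))
```

Contrast that with DLSS 2.0, which feeds motion vectors and previous frames through a trained network, so it can reconstruct detail a spatial filter simply never sees.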
 

roofrider

Well-Known Member
Disciple
GPUs are anything but my forte. You think it won't even match DLSS 1? Many (hopefuls, maybe) are expecting it to land somewhere around a "DLSS 1.5", or at least be good enough.

Granted, there's no hardware acceleration, and AMD isn't claiming it's anything beyond spatial upscaling just yet. Some think there could be ML in it, but I don't have anything to back that up, so I won't push the point. The hardware part will probably come with RDNA 3, unless RDNA/RDNA 2 already has hardware that can partially support these computations (read that somewhere, not sure).
Strictly speaking, dedicated hardware isn't really needed to leverage temporal data, is it? There was something about TSR achieving this on reddit - link
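To illustrate what I mean, here's a toy sketch of temporal accumulation, the trick TAA/TSR-style techniques build on. It's ordinary shader-style math (written in NumPy here for readability); the function name and the alpha value are placeholders I picked, not anything from TSR itself:

```python
import numpy as np

def temporal_accumulate(history, current, alpha=0.1):
    # Exponential blend of the (reprojected) previous frame with the new
    # one -- the core of TAA-style temporal techniques. Plain per-pixel
    # arithmetic; no tensor cores or other dedicated silicon involved.
    return (1.0 - alpha) * history + alpha * current

# Feed the same noisy scene in over many frames: the accumulated
# result converges on the clean signal as samples build up.
rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 8)      # the "real" scene values
history = np.zeros_like(truth)
for frame in range(60):
    noisy = truth + rng.normal(0.0, 0.2, truth.shape)
    history = temporal_accumulate(history, noisy, alpha=0.1)
print(np.round(history, 2))           # close to `truth` by now
```

Real implementations also reproject the history buffer with motion vectors and clamp it against the current frame to avoid ghosting, but none of that needs dedicated silicon either.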

If it indeed falls short of DLSS 1, it'd almost be a disaster, given the hype they've created. If it's decent enough, then wider adoption is good for the market, and since it's open source it will be improved upon. Maybe the consoles will also benefit from this, since they're all AMD-based.
It's unlikely that Nvidia is going to support FSR right away, so all those 10 series cards aren't getting this for now. But if FSR works out, NV will be forced to introduce something like a DLSS Compatible™ down the road.
 