Nvidia officially announces Pascal-based GTX 1080 and GTX 1070 GPUs

If you know where to look, the 3DMark scores of the 1080 have already been leaked, and the performance at 1800-odd MHz is roughly the same as a higher clocked 980 Ti, which is kind of disappointing. I personally have a 980 Ti and a 970 in two machines. I will probably hold off upgrading until GP100 arrives on the scene.

If you're talking about the one on videocardz, I already saw that earlier this week, but I always disregard such leaks. It's always better to wait and watch for real-world benchmarks post launch when it comes to CPUs/GPUs.
Besides, it's unfair to compare GPUs going just by the clock speeds: the die size is almost half that of the 980 Ti, it brings a host of improvements on the DX12 end, and it will be way more efficient thanks to the move to 16nm FF.
Plus, that leak seems to suggest it's a reference card; those clock speeds are ridiculous, but it'd still be interesting to see what kind of headroom it has when OC'ed.
It may or may not be a great card at that price, but people here are excited because of what the 1070 claims to offer under $400.
 
@Lord Nemesis The issue wasn't whether it had 4 GB of VRAM or not; yes, it had the full 4 GB, but the problem was the performance hit encountered when accessing the remaining 0.5 GB. I'm not going to post a wall of text like you always do because I don't have the time and patience, especially on a Sunday, to post about something that has been well documented and explained by Nvidia themselves.

Sorry, as far as I can see, there is still no proof to give credibility to the theory that the frame rate issue is connected to the memory design. Since you say it is all proven and documented by no less than Nvidia themselves, can you give a link where Nvidia has clearly confirmed that the memory controller design is responsible for frame rate hits? I don't mean confirmation of the architecture used, which everybody knows already (multiplexed memory designs are pretty common, and I don't think the 970 is the first GPU to use one), but of the fact that the design leads to consistent real-world frame rate hits whenever memory usage goes above 3.5 GB. I think there are a number of people who did use the SoM high resolution pack, and also tested other games with 3.5 GB+ memory usage, and still reported no performance hiccups.

FYI, each 512 MB bank of RAM can be accessed at a maximum of 28 GB/s, and that holds true for the last 512 MB as well; it is not slower than the rest. If you have to read 128 MB of content placed in a single contiguous block, you get the same 28 GB/s bandwidth from any of the banks. But since the 970 has 7 controllers and the 980 has 8, they can access that number of banks in parallel, and to make use of that fact, the data has to be spread across multiple banks. Since the last block is on its own, there is nothing to access in parallel, but that single block can still be accessed at the peak rate of 28 GB/s like the other blocks. If there were more banks, they could have been accessed in parallel. 28 GB/s is still considerably faster than going out to system RAM. Even in the worst of cases, the card should still be considerably faster than if it were equipped with only 3.5 GB of RAM.
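Rough back-of-the-envelope numbers, assuming 7 Gbps GDDR5 on 32-bit channels and a PCIe 3.0 x16 link (ballpark only, not official figures):

    # Approximate GTX 970 memory bandwidth per segment (under the assumptions above)
    per_bank = 7 * 32 / 8          # 28 GB/s per 32-bit channel, i.e. per 512 MB bank
    fast_segment = 7 * per_bank    # 3.5 GB segment, 7 banks striped in parallel: ~196 GB/s
    slow_segment = 1 * per_bank    # last 0.5 GB bank on its own: still ~28 GB/s
    pcie3_x16 = 16                 # rough GB/s available to system RAM over PCIe 3.0 x16
    print(fast_segment, slow_segment, pcie3_x16)   # 196.0 28.0 16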

It's possible that either Nvidia doesn't know the real reason for the frame rate hits or they don't want to reveal it.
 
@Lord Nemesis The issue wasn't whether it had 4 GB of VRAM or not; yes, it had the full 4 GB, but the problem was the performance hit encountered when accessing the remaining 0.5 GB. I'm not going to post a wall of text like you always do because I don't have the time and patience, especially on a Sunday, to post about something that has been well documented and explained by Nvidia themselves.
As for all users not encountering issues in Shadow of Mordor, maybe some of those users who didn't experience performance issues didn't have the high resolution texture pack installed.
This will only get more apparent as time passes and games get more demanding this year; people will be forced to reduce texture quality and resolutions, even though the card is more than capable of handling them, just so that they don't cripple performance by utilising the 0.5 GB in question.



P.S. A couple of friends who picked up the 980 Ti for 50k+ a few months back are on suicide watch :p

You know, I was almost about to purchase an EVGA 980 Ti SC with backplate for 42k. Now I will wait for two months; the first thing to do is purchase a PS4.
 
@Lord Nemesis I'm not gonna spend my free time digging up threads, articles and videos on what is practically an ancient issue and not even related to this thread topic. I think PCPer did a good breakdown on the subject after Nvidia issued a statement, so look that up if you want to go into further technical detail.
The cause of the problem was a botched gimping process to keep the 980 and its pricing relevant, going by the memory system diagram they themselves provided. They probably thought no one would notice or exceed the threshold, since the card wasn't aimed at the 1440p or 4K crowd, but people still went ahead and ran games at those resolutions.
Just like lower tiered AMD cards going head to head in DX12 titles atm, this too will become more evident as more demanding games show up and start utilising more than 3.5 GB of VRAM.
As for whether Nvidia was incompetent or lying, I'm inclined to believe it was the latter.
 
Both models (the 1080 and 1070) will apparently have 8 GB of VRAM, which is nice for the price, I guess. Not to mention the performance-per-watt and performance-per-dollar, which I guess are a big step forward?

Does anyone know whether the new 16 nm process applies only to the actual core GPU chip or also to the GDDR5X VRAM chips?
 
Both models (the 1080 and 1070) will apparently have 8 GB of VRAM, which is nice for the price, I guess. Not to mention the performance-per-watt and performance-per-dollar, which I guess are a big step forward?

Does anyone know whether the new 16 nm process applies only to the actual core GPU chip or also to the GDDR5X VRAM chips?

16nm is only the GPU core. The memory will be manufactured by a third party; NVIDIA would have nothing to do with it other than qualification.
 
If you're talking about the one on videocardz, I already saw that earlier this week, but I always disregard such leaks. It's always better to wait and watch for real-world benchmarks post launch when it comes to CPUs/GPUs.
Besides, it's unfair to compare GPUs going just by the clock speeds: the die size is almost half that of the 980 Ti, it brings a host of improvements on the DX12 end, and it will be way more efficient thanks to the move to 16nm FF.
Plus, that leak seems to suggest it's a reference card; those clock speeds are ridiculous, but it'd still be interesting to see what kind of headroom it has when OC'ed.
It may or may not be a great card at that price, but people here are excited because of what the 1070 claims to offer under $400.

We still don't know what improvements there are for DX12 games. NVIDIA's DX12 drivers are a mess right now. The only real DX12 game out there, Quantum Break, runs really poorly, with tonnes of crashes and driver errors. When it does run, it delivers roughly 40% less performance than the AMD competition. We'll only know what the situation is when the drivers get fixed.

As for die size - a lower die size should lead to a much lower price; however, NVIDIA is price gouging here thanks to having no competition right now. A mid-range GP104 chip for $699? Give me a break, please! I'll only pay that kind of money for GP100. In fact, I wouldn't mind spending $1000 - just give me the real deal.

Also, this generation the 1070 seems to be pretty gimped compared to the 1080. In the previous gen, the 970 was a stellar deal and actually ate a ton of 980 sales, as the performance difference was marginal at stock and nil when overclocked. They learnt their lesson and have cut the chip down much more aggressively. They've also left GDDR5X out of the 1070.

If a person has a 780/770/780 Ti, these cards are probably a good buy. If one has a 970 or higher, it's best to sit tight for now.
 
Love it or hate it, Nvidia has always had it. The 1080 looks promising; after so long, it finally feels worth upgrading :-) Fingers crossed the benchmarks justify everything.
 
Does anyone know whether the new 16 nm process applies only to the actual core GPU chip or also to the GDDR5X VRAM chips?

http://www.anandtech.com/show/10193/micron-begins-to-sample-gddr5x-memory

NVIDIA's DX12 drivers are a mess right now.

Drivers cannot fix hardware limitations, i.e., async compute in the case of the 9xx series.

Quantum Break runs really poorly with tonnes of crashes and driver errors. We'll only know what the situation is when drivers get fixed.

This title will remain a shitshow because of the Windows Store app limitations; no driver or patch is gonna fix that either.

As for die size - a lower die size should lead to a much lower price

Are you a semiconductor expert from TSMC, GloFo or Samsung? Do you have in-depth knowledge of the economics behind semiconductor lithography? The 980 Ti had 8 billion transistors with a die size of 600 mm2, and with the risky move to 16nm FF after ages on the tried and tested 28nm node, they've managed to squeeze 7.2 billion onto the 1080 with a die size of 300 mm2 on a newer architecture. You cannot expect lower prices on the basis of die size alone when they've made technological advancements on other fronts.
Plus, he's been bragging about spending billions on R&D; can't really expect 'em to give it away for free.
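Quick back-of-the-envelope on those numbers (just a rough sketch with the approximate figures above, nothing official):

    # Approximate transistor density: 980 Ti (GM200, 28nm) vs 1080 (GP104, 16nm FF)
    gm200_density = 8.0e9 / 600    # ~13.3 million transistors per mm^2
    gp104_density = 7.2e9 / 300    # ~24 million transistors per mm^2
    print(gp104_density / gm200_density)   # roughly 1.8x the density on the new node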

Also, this generation the 1070 seems to be pretty gimped compared to the 1080. In the previous gen, the 970 was a stellar deal and actually ate a ton of 980 sales, as the performance difference was marginal at stock and nil when overclocked. They learnt their lesson and have cut the chip down much more aggressively. They've also left GDDR5X out of the 1070.

Again, you're just basing your opinion on one slide and a leaked benchmark. The 1080s have apparently already been shipped to reviewers, so let's wait and watch and then discuss the pricing and performance. Right now you're just coming across as a butthurt 980 Ti owner like my friends.
 
Drivers cannot fix hardware limitations, i.e., async compute in the case of the 9xx series.

This title will remain a shitshow because of the Windows Store app limitations; no driver or patch is gonna fix that either.
Async compute has nothing to do with the issues here. I have been writing GPU code all my life; right now, what NVIDIA has in DX12 is a broken driver. The same command buffer reuse that behaves erratically in DX12 performs fine in the Vulkan beta driver.

Are you a semiconductor expert from TSMC, GloFo or Samsung? Do you have in-depth knowledge of the economics behind semiconductor lithography? The 980 Ti had 8 billion transistors with a die size of 600 mm2, and with the risky move to 16nm FF after ages on the tried and tested 28nm node, they've managed to squeeze 7.2 billion onto the 1080 with a die size of 300 mm2 on a newer architecture. You cannot expect lower prices on the basis of die size alone when they've made technological advancements on other fronts.
Plus, he's been bragging about spending billions on R&D; can't really expect 'em to give it away for free.
As a matter of fact, I do work for the last of the three and have a fair idea of what this chip should really cost :rolleyes:

Again, you're just basing your opinion on one slide and a leaked benchmark. The 1080s have apparently already been shipped to reviewers, so let's wait and watch and then discuss the pricing and performance. Right now you're just coming across as a butthurt 980 Ti owner like my friends.

Heh - this is rather juvenile. I have an attic full of old GPUs from every gen. If there is a performance gain to be had, I'll buy a GPU in a heartbeat, as the cost is inconsequential to me in the scheme of things; the card will return its value in no time since I don't just play games on it.
 
Async compute has nothing to do with the issues here. I have been writing GPU code all my life; right now, what NVIDIA has in DX12 is a broken driver. The same command buffer reuse that behaves erratically in DX12 performs fine in the Vulkan beta driver.
http://wccftech.com/nvidia-gtx-1080-asynchronous-compute/

If there is a performance gain to be had, I'll buy a GPU in a heartbeat
So wait for the reviews to trickle out instead of just bashing it for no damn reason. It has been known for a while that HBM2 wasn't going to make it out this year; the cards you are looking for aren't coming until late 2016 or early next year, when big Pascal and Vega duke it out. You are looking for a supercar and complaining about the lack of those features in sedans.
The 1070 especially appears to be a good VFM card, at least on paper, compared to what's available in the market at that price point atm.

Mod edit: Please avoid personal attacks
 
Dunno why people are so quick to judge these things. The wise will also wait for AMD's offerings before any purchase is made, unless of course you are a you-know-what. And oh, fanboys will be fanboys.
If you know where to look, the 3DMark scores of the 1080 have already been leaked

Not a good argument. I don't think I even need to quote what people said about the AMD R9 3xx cards and "leaked" bench results before they were officially made available and proper reviews could be had? They turned out to be a joke.

TL;DR: take them with a pinch of salt.

Edit: I'm sure some of you might already know this, but the release of these new cards doesn't and won't simply mean that 9xx cards become irrelevant. Sure, the value will fall (as tradition shows), but I don't think it will be huge, so GTX 9xx owners, don't be desperate to sell your card off for cheap unless you really need (or rather, want) that "8GB."
Relevant reddit megathread. https://www.reddit.com/r/pcgaming/comments/4iajaq/nvidia_megathread/
 
[image: Alienware-GTX-770-giveaway.png]

*_*
 
I initially thought the 1070 might have a memory architecture similar to the 970's, since both the 1080 and 1070 were confirmed to have the same amount of RAM without the rest of the 1070's specs being revealed. But that may not be the case; they already differentiate on the memory type used.

@Lord Nemesis I'm not gonna spend my free time digging up threads, articles and videos on what is practically an ancient issue and not even related to this thread topic. I think PCPer did a good breakdown on the subject after Nvidia issued a statement, so look that up if you want to go into further technical detail.

I will then assume you have no new information on the subject that actually links the memory architecture to the particular issue that people faced. I thought you had some new analysis or information that I missed. I already went through the analyses by AnandTech, PCPer and a few other sites long back, and none of them showed any evidence to link the architecture to the particular issue faced by users. In fact, they state the contrary that the impact on game performance should not be as dramatic as people are making it out to be, because of various other factors involved. Even when 4 GB is required, it may not be on par with having all of it accessible in parallel, but it would be considerably faster than a 3.5 GB card. The whole controversy reeks of a layman's way of linking an observed effect to a cause without practical evidence to conclude that it is indeed the case.

BTW, why would you even mention the issue when you think it's an unrelated topic for this thread? You do remember that it was you who mentioned it in this thread, right?
 
I initially thought the 1070 might have a memory architecture similar to the 970's, since both the 1080 and 1070 were confirmed to have the same amount of RAM without the rest of the 1070's specs being revealed. But that may not be the case; they already differentiate on the memory type used.

It's a similarly cut-down GPU this time too, but people won't notice because it has sufficient VRAM, unless it becomes apparent when used for VR. It'll be interesting to see what kind of gimping and performance gap exists between the 1070 and 1080 given the $220 difference, most of which is probably going towards GDDR5X. They probably know the performance numbers for Polaris 10 and decided to go with cheaper GDDR5 to reduce its price and take the first shot before AMD even has a chance to show what they have.
I'm kinda surprised they got working silicon out so fast, considering they passed off Maxwell chips on the Drive PX 2 as Pascal at CES earlier this year.

evidence to link the architecture to the particular issue faced by users

Huh, I never blamed the architecture for the issue; the 980 has a pretty solid design. It was only when they crippled the chip by disabling SMMs and reducing the L2 cache that this issue surfaced.

In fact, they state the contrary that the impact on game performance should not be as dramatic as people are making it out to be

The drop in performance in real-world applications indeed wasn't as big as the memory test and the drop in bandwidth suggested it would be, because there's enough headroom. Shadow of Mordor still took a hit of around 25% at higher resolutions, but we'll only see with time what kind of performance drops we get with more demanding games in the future.

The whole controversy reeks of a layman's way of linking an observed effect to a cause without practical evidence to conclude that it is indeed the case.

Hmm, no. People wanted a reason to get the pitchforks out for once and shut down the Nvidia shills that populate most forums; imagine if something like this had happened to AMD, those trolls would've been out in full force and made an even bigger issue of it.
It might seem like they made a mountain out of a molehill, but it's better for consumers as a whole when that happens, since companies will be more careful the next time. There were similar reactions when the Pentium bug and the TLB bug on those first gen Phenoms were discovered, and neither of those mistakes has been repeated, because of the outcry back then.
Nvidia already had a reputation for pissing people off with GameWorks and building closed ecosystems; plus, it was also hard to believe such a huge lapse in communication between engineering and PR regarding the specs, especially after the wood screw drama.

BTW, why would you even mention the issue when you think it's an unrelated topic for this thread?

It was a reply to the other guy because he mentioned Nvidia and promises in the same sentence; he took it as a joke and moved on, but you had to come defend Jen-Hsun's honor as usual :p
 
By architecture, I am referring to the memory architecture of the 970, not the GPU architecture. People linked it to the issues that were faced.

Also, I am not defending anybody; the issue is interesting to me because I see the same kind of flawed linking of effects and causes when people do root cause analysis at work. For example, a server's throughput has gone down. The guys observe that memory usage on the server is high, so they jump the gun and conclude that throughput has dropped because the process does not have enough memory. They then get the memory increased on the server instances or get more instances added. Often that does solve the problem in the short run, but it may come back to bite them a while later. The real cause may be a few lines of shitty code that is eating more CPU than necessary or consuming more memory than required. Finding the real root cause and fixing a few lines of code may resolve the issue for good without taking on extra infra costs, but people don't go that far.
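Just to illustrate the kind of thing I mean, a completely made-up example (not from any real incident):

    # Hypothetical "few lines of shitty code": an unbounded cache that never evicts anything.
    # Memory usage climbs forever, so "add more RAM" looks like the fix, while the real
    # root cause sits right here and takes a couple of lines to correct.
    cache = {}

    def expensive_compute(key):
        return sum(i * i for i in range(100000)) + key

    def handle_request(key):
        if key not in cache:
            cache[key] = expensive_compute(key)   # never evicted, never bounded
        return cache[key]

    print(handle_request(42))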

Regarding the 1070, I read from one of the rumor mills that it is not going to have a memory design similar to the 970's. But then, since Nvidia didn't reveal any of its specs, and not just the clocks, it might well have one.
 
Lol, I've seen my own share of memory leaks handled by incompetent programmers, though on a much smaller scale while porting games, not at the enterprise level.
Also, I was just pulling your leg. I've told you in the past that I may be an AMD fanboy, but I think Nvidia is a great tech company that puts out great products; it's just the shenanigans they pull occasionally, which they don't have to, that really tick me off. For all the master race nonsense that goes on, I think the AMD-Nvidia fanboy wars are the absolute worst, more pathetic than the Sony pony vs. Xboner conflict.
People just refuse to discuss things objectively when it comes to these companies (and no, I'm not pointing fingers at you, I'm just fed up with the flame wars everywhere); the worst cesspool currently is the comments section on WCCFtech articles.
At the end of the day, competition is good for us end users. I'm just waiting for the Pascal and Polaris reviews to come out so I can finally slap in a GPU after ages.

P.S. Nazi mods, it was a joke, not a personal attack; I don't really think @Chaos works at a sweatshop for reals :rolleyes:
 