Good connector design versus 12VHPWR

I'm late to the 12VHPWR connector party, having only recently picked up an RTX 4090. Better late than never, right?

It came with a quad 8-pin PCIe to 12VHPWR adapter. While I transition to a new RM1000e (another story of poor design choices), I thought I'd make some observations about the 12VHPWR connector and some of its pitfalls. While most of the really nasty stuff on YouTube is about the adapter, the connector itself is not free of concerns. Most of the connector's design flaws are not covered by the major tech channels, so I'll try to move beyond the 'user error' hyperbole. I'll preface by saying this is relevant to owners of RTX 4090s and very heavily overclocked RTX 4080s. For lesser cards, nothing said here is relevant, but the '600W' claim still rings hollow.

For those who've not gone through any of my content, be warned: it will be long, filled with rants, and some of it will be indecipherable. I offer no advice and give no guarantee that my numbers are accurate. Of course, I take no responsibility if you still burn your card. Sorry.

The specification for the connector system is outlined in the datasheet here: < https://www.amphenol-cs.com/product-series/minitek-pwr-cem-5-pcie.html# > All four required components are covered in the document, though the datasheet applies to the whole system and does not expand on the specifications of the individual components (those are available under the product details if you're interested). Molex has a functionally equivalent range called Micro-Fit, which is basically the same thing. It's actually a pretty smart system overall, but the RTX implementations of it have been very poor. I'll expand on why I think so, and I welcome your comments, observations and corrections.

There are a few points to note from the datasheet. The first is that a crimped 16AWG terminal is required for the connector system to maintain its specification. Another is the maximum operating temperature of 105C. And, obviously, the limited insertion life: 30 cycles, per the datasheet, qualified to a doubling of contact resistance. All of these are important and relevant to the observations below.

Let's talk about resistance. Resistance is not a fixed value for any resistor; all resistive materials exhibit both a temperature coefficient (which we'll talk about) and a voltage coefficient (which we won't). Metallic conductors exhibit increased electrical resistance as temperature increases (a positive temperature coefficient). This property is used to provide negative thermal feedback in amplifiers (my actual day job) and is very useful in that application. Most of the time, though, this increased resistance is undesirable, as it hurts efficiency. It also tends to create a positive feedback loop in conductive applications: increased resistance causes more heating, which raises the temperature, which increases the resistance further, and so on. I don't think the smaller connector size is in itself a source of concern; in most connectors, contact pressure is a bigger determiner of resistance than contact area. There are other, bigger issues at play (no pun intended).
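
To put a number on this, here's a minimal sketch (Python) of that temperature dependence, using the textbook coefficient for annealed copper - the actual terminal alloy and plating will behave somewhat differently:

```python
# R(T) = R0 * (1 + alpha * (T - T0)) -- linear model, good enough here.
ALPHA_CU = 0.00393  # 1/degC, textbook value for annealed copper (assumed)

def resistance_at(r0_mohm: float, t: float, t0: float = 20.0) -> float:
    """Resistance in mOhm at temperature t, given r0_mohm measured at t0."""
    return r0_mohm * (1 + ALPHA_CU * (t - t0))

# A 5 mOhm path at 20C, taken through the temperatures discussed below:
for temp in (20, 55, 90, 105):
    print(f"{temp:>3}C: {resistance_at(5.0, temp):.2f} mOhm")
# 20C: 5.00, 55C: 5.69, 90C: 6.38, 105C: 6.67 - a third more dissipation
# at the same current, before anything has even gone wrong.
```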

First, the (non-)crimped terminals. The nVidia adapter has wires soldered onto a bar. Solder is a very, very poor conductor of electricity - around a tenth of copper's conductivity at the same volume and thickness. For high-current connections, crimping is mandatory. What's worse is that only the edge of the wire touching the connector is in (semi-)direct contact; the rest of the current has to travel through the cross-section of the wire (sort of okay) and the external solder structure (not okay). This will most certainly drop the capability of the connector significantly - I'd say to about half. In addition, it's almost certain lead-free solder was used here, which contains silver. Silver tends to leach plating materials and degrade contact resistance over time. The only possible advantage the nVidia adapter has is that it can sink significantly more heat than a native cable, since there's just so much more copper to sink the (unnecessary) heat into. Overall, though, I'd say this is much worse than a native cable and definitely does not meet the specification outlined in the Amphenol document. Crimped connectors, when properly made, are cold-welded and can transfer significantly more energy. They're also mechanically more stable. So a crimped connector should be fine, right? Ummm, no. Not even a little bit.

Let's talk about temperature next. The connector system is qualified to a maximum operating temperature of 105C. This includes any self-heating, plus the ambient, plus (hopefully) a safety margin. As most of you who own GPUs know, the 12VHPWR connector (as well as the 8-pin PCIe connectors) sits directly under the heatsink and in the path of the downdraft from the GPU cooler. As a result, the 'ambient' it operates in is no longer the case temperature but the air temperature coming off the heatsink. For a GPU running a 60C heatsink temperature, the connector will see an ambient close to 55C. Naturally, this is also when the highest current loads are needed and the connector is close to its 30C internal temperature rise. Together that puts us around 85-90C, leaving a margin of only 15-20C. Is this enough? Well, yes, maybe - if you never reach the 600W mark, you won't be pushing the maximum capability of the connector. Since the RTX 4090 is specced at 450W, this is OK, right? Well, no, not really. Since Amphenol does not publish a temperature derating curve, we're left to guess at it. Typically, parts maintain spec to 70C (internal rise + ambient) and derate to zero at the maximum operating temperature. At 90C you can expect roughly half the current capability, or about 5A per pin = 360W maximum. This means that in a high-temperature operating condition, the card is no longer able to meet factory specification. And this is not the last of it, either.
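
Since we're guessing at the derating anyway, here is that guess written down (Python): full spec to 70C, linear to zero at 105C. The breakpoints are my assumption, not anything Amphenol publishes:

```python
FULL_SPEC_A = 9.5    # A per pin, from the datasheet
DERATE_FROM = 70.0   # degC where derating starts -- assumed, not published
T_MAX = 105.0        # degC maximum operating temperature, from the datasheet

def derated_amps_per_pin(t: float) -> float:
    """Guessed linear derating: full current to 70C, zero at 105C."""
    if t <= DERATE_FROM:
        return FULL_SPEC_A
    if t >= T_MAX:
        return 0.0
    return FULL_SPEC_A * (T_MAX - t) / (T_MAX - DERATE_FROM)

for t in (70, 85, 90, 100):
    a = derated_amps_per_pin(t)
    print(f"{t:>3}C: {a:.1f} A/pin -> {a * 6 * 12:.0f} W")  # 6 pins at 12V
# 90C comes out at ~4.1 A/pin (~290 W); my 'about half = 360 W' above is
# the generous, rounded-up end of the same guess.
```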

If you examine the pictures on the 'net of burned connectors, you'll see that (almost) all the burns are on the positive pins. But we all know the positive and negative currents have to be equal, so why are the positive pins burning? The answer is in the board-side male connector. The connector uses a right-angle mount, so the positive legs have to travel quite a distance before they sink into the power planes on the PCB. The ground pins, OTOH, meet the ground plane within the first 2mm or so. The longer legs on the positive pins present significant thermal impedance and cause a higher temperature rise. As the temperature goes up, the electrical resistance goes up as well. This causes a greater voltage drop in the connector, raising the temperature, which raises the resistance further - the positive feedback loop we referred to earlier coming full circle. To make things worse, some manufacturers use split planes for the positive supplies, giving the positive pins even less copper to sink heat into. This leads to temperature differentials between the pins, and if they are routed to different parts of the PCB separately, the chances of burns increase significantly.
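
To see how that loop behaves, here is a toy electro-thermal model (Python). Every constant is illustrative - the per-pin thermal resistance in particular is a pure guess - but the shape of the result is the point:

```python
ALPHA = 0.00393   # 1/degC, copper-like temperature coefficient (assumed)
R0 = 0.006        # ohm per contact at 20C (datasheet limit)
THETA = 20.0      # degC/W pin-to-ambient thermal resistance -- pure guess
T_AMB = 55.0      # degC, air coming off the heatsink
I_PIN = 9.5       # A per pin

t = T_AMB
for _ in range(100):
    r = R0 * (1 + ALPHA * (t - 20))  # resistance rises with temperature
    p = I_PIN**2 * r                 # dissipation rises with resistance
    t_new = T_AMB + THETA * p        # temperature rises with dissipation
    if abs(t_new - t) < 0.01:        # loop closed: has it settled?
        break
    t = t_new
print(f"settles at ~{t:.0f}C, {p:.2f} W per pin")  # ~68C for these values
# Re-run with R0 = 0.012 (worn contact) and THETA = 40 (poor heatsinking)
# and it settles well above the 105C rating: runaway in slow motion.
```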

Let's talk hard numbers (you can skip this section). At this initial stage I'm using HWInfo voltage readings to derive resistance approximations for the connector, and with an undervolted card running 2.7GHz continuously, there's not much to worry about. I see a 0.15V difference between the PCIe slot voltage and the 12VHPWR voltage at ~350W, which works out to about 5mOhm from PCIe slot to 12VHPWR pin. Since the current through the PCIe slot is minimal, I'm choosing to ignore the voltage drop between it and the PSU. This figure includes the impedance of the wires and the PSU-side connectors as well. It is well within the 6mOhm spec for the connector, but note that the path from PSU to 12VHPWR dissipates about 4.4W ((350W / 12V) x 0.15V), and the connectors do get quite warm to the touch. A quick check shows the connector body at about 55C within a few minutes of this kind of load, and 65C after about fifteen minutes - partly because the GPU fans are continuously blowing a stream of hot air onto it. And we're not even at full load. Extrapolating from here gets tricky, because we haven't established how much of the dissipation happens within the connector and how much in the wire. If we worked that out, we could get to the actual thermal resistance of the connector and how much dissipation it can safely endure at a 50C ambient (a number used to test PSUs, and a good approximation of its environment). Using a blue-sky number of 80%, let's assume the connector dissipates 3.5W and sees a 15C rise on its body. That's a thermal resistance of roughly 4C/W, which is decent for a plastic shell. Keep reading, though.
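
Here's the arithmetic from this paragraph as a sketch (Python) you can re-run with your own readings. The inputs are just the figures quoted above; HWInfo sensor names vary by board, so getting the two voltages out is up to you:

```python
P_GPU = 350.0   # W, reported board power draw
V_RAIL = 12.0   # V, nominal rail voltage
V_DROP = 0.15   # V, PCIe slot voltage minus 12VHPWR voltage (from HWInfo)

i_total = P_GPU / V_RAIL    # ~29.2 A through the cable assembly
r_path = V_DROP / i_total   # ~5.1 mOhm, PSU to 12VHPWR pin
p_diss = i_total * V_DROP   # ~4.4 W dissipated along that path

print(f"current: {i_total:.1f} A")
print(f"path resistance: {r_path * 1000:.1f} mOhm")
print(f"dissipation in wires + connectors: {p_diss:.1f} W")
```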

Lastly, let's talk durability. Every number specified in the datasheet is qualified to a doubling of contact resistance. My path today measures roughly the 6mOhm spec end to end; doubled, that's 12mOhm. At 12mOhm and a current of 57A (9.5A per pin), we're looking at 39 watts of dissipation inside the connector path. Coming down to the 600W nVidia spec, that's 50A, or 30W. At the 450W max for the 4090 we're at about 17W, and at my undervolted 380W unit I'd be hitting around 12W (compared to ~4.5W now). For temperature, take the power number, multiply it by the 4C/W from the previous paragraph, and you get roughly 156, 120, 68 and 48 degrees of rise. That's above ambient, which is about 50C inside a case, so add the two to see what you're going to get in a real chassis (not on an open test bench in a controlled 21C environment, as some YouTubers have decided to test it).
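
The same table as a sketch (Python), assuming the doubled 12mOhm end-to-end figure and the rough 4C/W shell estimate from the previous paragraph. Treat the outputs as extrapolations, not measurements:

```python
R_WORN = 0.012   # ohm end-to-end, after contact resistance has doubled
THETA = 4.0      # degC/W, estimated connector shell thermal resistance
T_AMB = 50.0     # degC, hot-case ambient around the connector

for label, watts in (("684W (9.5 A/pin limit)", 684),
                     ("600W (nVidia spec)", 600),
                     ("450W (4090 stock)", 450),
                     ("380W (undervolted)", 380)):
    i = watts / 12.0        # A through the connector
    p = i**2 * R_WORN       # W burned in the worn connector path
    rise = THETA * p        # degC above ambient
    print(f"{label}: {p:.0f}W -> +{rise:.0f}C, ~{T_AMB + rise:.0f}C total")
```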

How will we know if we're getting there? Simply watch the numbers from HWInfo - that's what I'll be doing until I work out a proper method of voltage or temperature monitoring, or both. For every 120 watts of power draw (10 amps), you want to see at most a 50mV drop in the 12VHPWR voltage relative to the PCIe slot voltage. This is not accurate by any means, but it will help you understand whether your connector is going bad. It is also not a failsafe: the pin voltages could look fine while one pin is barely making contact, which will cause its temperature to rise rapidly. I'm not going to get into how to seat the plug and how to check it; enough has been said about that by enough people. At the end of the day, this is not a connector that should be used in a hot environment, it should not be used by anyone except professionals, and it should always be monitored for temperature. It is baffling how this was qualified for consumer use: it requires a reasonable level of skill to use properly, it is exposed to hot air and high current simultaneously through design oversight or lack of knowledge, and it is therefore almost always operating at the edge of its specification. In addition, the limited number of mating cycles and the need to replace the adapter or cable assembly frequently is not environmentally friendly if followed rigorously, and a fire hazard if not.
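
Until I have a proper method, the rule of thumb above reduces to a few lines (Python). This is a hypothetical helper - you feed it the HWInfo readings yourself; it only does the arithmetic:

```python
LIMIT_OHM = 0.050 / 10.0   # 50 mV per 10 A = the 5 mOhm rule of thumb

def connector_looks_ok(power_w: float, v_slot: float, v_12vhpwr: float) -> bool:
    """True if the slot-vs-12VHPWR drop is within the 5 mOhm rule of thumb."""
    current = power_w / 12.0
    if current < 5.0:
        return True   # too little load to judge meaningfully
    return (v_slot - v_12vhpwr) / current <= LIMIT_OHM

# Example: 350 W with a 0.15 V drop is ~5.1 mOhm -- just over the line,
# so this returns False. Remember it's a trend indicator, not a failsafe.
print(connector_looks_ok(350.0, 12.05, 11.90))
```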

The surest way to avoid any issues is to monitor the temperature and the voltage drop regularly, and to see whether an undervolted card is sufficient for your needs. When choosing, a native cable (assuming it's properly made) will always be better than the bogus nVidia adapter. Lastly - and there's no way to ensure this - try to select a cable assembly with a nylon connector shell rather than a cheaper plastic. We can't do anything about the board connector in any case. I'm pretty sure mine will die at some point, so I'm using the system sparingly while I work on a robust monitoring and warning system, and on a way to shut the system down above a certain temperature.

Anyway, rant over. If you read this far, I commend your dedication.
 
First, the (non-)crimped terminals. The nVidia adapter has wires soldered onto a bar. Solder is a very, very poor conductor of electricity - around a tenth of copper's conductivity at the same volume and thickness. For high-current connections, crimping is mandatory. What's worse is that only the edge of the wire touching the connector is in (semi-)direct contact; the rest of the current has to travel through the cross-section of the wire (sort of okay) and the external solder structure (not okay).
Doesn't solder have less resistance than crimp? And silver is the best conductor of all metals; it makes no sense that it would increase resistance if added to the solder alloy.

The main reason crimping is preferred for heavy current applications is that they need thick cables, which are very difficult to solder as the wire can sink a lot of heat and even melt the insulation before it is hot enough for solder to stick. Crimping on the other hand is a purely mechanical joint.
 
Doesn't solder have less resistance than crimp?

Nope. Solder forms intermetallic barrier layers with about 15% of copper's conductivity, whereas a crimp makes a direct metal-to-metal connection. Better still, the cold weld that forms after a few heat cycles makes it effectively a single-metal junction - a good crimp is usually indistinguishable from solid metal. It does require appropriate tools: each connector has a very specific crimp tool with the correct torque/pressure range for that connector, and a fixed life cycle.

And silver is the best conductor of all metals; it makes no sense that it would increase resistance if added to the solder alloy.

That's not what I said :). The resulting tin-silver alloy is nowhere near copper's conductivity. My comment about silver was about its leaching of the plating on PCBs and connector terminals, and that is what increases resistance over time - on top of solder's already much worse conductivity. This is one reason why the service lifetimes of electronic products have shrunk rapidly over the last few decades. Cold joints are a way of life with RoHS solders, partly due to the brittle nature of tin and the chemical aggression of silver.

When soldering parts to a PCB, the solder layers are typically very, very thin, which lets us get away with the lower conductivity. When we need to go wire-to-board, best practice is crimp termination of some sort, as solder is pretty poor at handling any kind of stress over time.

The main reason crimping is preferred for heavy current applications is that they need thick cables, which are very difficult to solder as the wire can sink a lot of heat and even melt the insulation before it is hot enough for solder to stick. Crimping on the other hand is a purely mechanical joint.

I'll try to link some NASA documents that make pretty detailed observations about solder vs crimp. One of the biggest issues is solder wicking up the individual strands, which reduces conductivity within the cross-section and makes the whole assembly prone to mechanical fatigue over time. And yes, the NASA documents are what we follow as best practice even today - they date back about 40 years now, from memory, but physics hasn't changed much since then. Directly soldered wires are considered pretty poor practice. Not that we never use them - we do - but not in any application where there may be mechanical stress or thermally induced movement. Even 0.25mm of movement over a few years can end in deep trouble.
 
Oh, you don't have to sell me on crimping - I've been doing it for the last couple of years on my electronics projects. I can't solder worth a damn - anything I touch has a tendency to vaporise lol. Now I feel much better :D

I figured soldering was used for boards where movement would be almost zero, and crimping wires was useful for quick parts replacement and to handle some movement.
 
Oh, you don't have to sell me on crimping - I've been doing it for the last couple of years on my electronics projects. I can't solder worth a damn - anything I touch has a tendency to vaporise lol. Now I feel much better :D

I figured soldering was used for boards where movement would be almost zero, and crimping wires was useful for quick parts replacement and to handle some movement.
The only time we ever use solder on wire is for things like front-panel switches and buttons that present only solder tabs. Anywhere a crimp is available, we use a crimped connector, and wire-to-board is usually a quick disconnect, or a crimp sleeve before soldering if we ever need bare wire to board. We are guilty of overusing the crimp tools a bit, though; they usually need replacing after a specific number of crimps, as they lose pressure through wear of the teeth and the ratchet mechanism.

Crimping is actually a step behind wire wrap, which is the gold standard for wire-to-board, but nobody uses that these days that I know of. I don't think the tools are even readily available anymore.
 