C1E Enhanced Halt State after overclock?

Status
Not open for further replies.
Hi all again,

I tried enabling SpeedStep and C1E Enhanced Halt State, but it still won't reduce the voltage :( although it does reduce the multiplier. SpeedStep has only an 'Auto' option; there's no 'Enabled' option.

Any solution???
 
windhawk91 said:
Hi all again,

I tried enabling SpeedStep and C1E Enhanced Halt State, but it still won't reduce the voltage :( although it does reduce the multiplier. SpeedStep has only an 'Auto' option; there's no 'Enabled' option.

Any solution???

How are you checking change in voltage? Tried using CPU-Z?
 
H@cKer said:
even if one of them is working, the voltages and multipliers are lowered, and if EIST is turned on it lowers performance :ohyeah:

@pappu I still have to learn a lot from you guys

As someone said, you certainly need to learn a lot. Don't just open Google and search for definitions... overclocking is not about theoretical concepts, a lot more goes into it... in concept, of course, both C1E and EIST are almost the same.

@windhawk, make sure both EIST and C1E are enabled. Try with the latest version of CPU-Z and keep track of the voltage only when the computer is idle and doing nothing... then start Orthos or something and see if there is a jump in volts...
 
Enabled them both. On the 6x multiplier the voltage is about 1.29V and on the 8x multiplier it's 1.314V, which is just a ~0.024V difference. When voltages were on Auto, it used to be 1.314V on the 8x multiplier and 1.190V on the 6x multiplier, a ~0.124V difference.
 
It basically varies from mobo to mobo and BIOS to BIOS. There's nothing you can do to change the amount of volts it drops. Also, when you set it to Auto, was the system overclocked or was it @ stock?
 
Here goes my contra-view. Race cars don't have air-conditioners, ever wonder why?

There are a few reasons why one would be better served by switching power management off in an overclocked system.

1. Less thermal shock on the chip. Contrary to popular belief, changes in temperature cause more stress on components than absolute temperature itself. And changes in voltage cause changes in temperature. This may not be visible at the software-monitoring level, but it's a definite effect at the die level. A CPU that runs 24x7 with a 50% overclock at 50 degrees should have a longer life than one that does 100% under supercooling and is then brought back to room temp.

2. Less issues with stability. Let's take a CPU that runs at 266x8 at load and 266x5 in idle. This would be a typical case of an Intel CPU out of the box. The VID is 1.25 and 1.05, respectively. Now we're going to raise the FSB to 400x8, for a 66% overclock, and we need to push 1.4V (say) into the chip. So the situation is still happy. Now when power management kicks in, the multi will drop to 5, and the CPU will back off down to 1.05V, as programmed. Unfortunately, the FSB is still 400, so the chip is running at 2GHz, not 1.3GHz as the designer intended.

The fact is that most modern chips can run those kinds of speeds at that kind of voltage, but it's still very far out of spec. Can you/should you? Depends on your chip. Also, Intel CPUs take this kind of abuse, as the CPU tells the board what voltage it needs to operate depending on its speed and workload (which is why you don't see much of a drop in your case). It depends on how well your CPU and board communicate with each other, and that's why there's so much performance difference between some BIOS versions and others (there are more reasons, such as signal levels and timing, but that's another topic).
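To make the numbers in point 2 concrete, here's a quick sketch using only the hypothetical figures from the example above (266MHz stock FSB, 400MHz overclocked, 8x load / 5x idle multipliers):

```python
# Core clock on these CPUs is simply FSB x multiplier, so raising the
# FSB also raises the *idle* clock that power management drops down to.
def effective_clock_mhz(fsb_mhz, multiplier):
    return fsb_mhz * multiplier

stock_idle = effective_clock_mhz(266, 5)  # 1330 MHz: the speed the 1.05 V idle VID was meant for
oc_idle    = effective_clock_mhz(400, 5)  # 2000 MHz: still fed ~1.05 V when EIST kicks in
oc_load    = effective_clock_mhz(400, 8)  # 3200 MHz: running at the raised ~1.4 V
```

So the idle state ends up roughly 670MHz faster than what its programmed idle voltage was validated for, which is exactly where the instability risk comes from.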

3. Less strain on the power-regulation circuitry, for reasons of thermal shock as well as the pulse behaviour of capacitors and MOSFETs, especially when pushing extreme voltages and speeds into a CPU via an energy-storing inductor that can dump large pulse currents back into the source.

The higher the quality of the overall system, the less these issues impinge on your experiments.

Who was it that actually ran benchmarks on overclocked systems with and without power management enabled? AT, I think, and I remember which way the results went. I think those were with the Phenoms, but there is a definite negative impact - even if it's only the additional load of one transistor calculating the load on the CPU, why would you want it?

I think thebanik's post is the most accurate (though I don't necessarily agree with the point of view). Nothing is gospel, so don't treat it as such; the only way out is to test the whole system, average three or five runs each time, record the results, and then arrive at conclusions for your board, your CPU and your operating conditions, without expecting that it's universal truth.
 
techie_007 said:
Why? That's about the difference you will observe :P You don't expect the voltage to halve, do you? :bleh: So long as it changes, that means it's working fine.
No, I expected it to drop at least 0.075V or more since it was dropping 0.125V :bleh:

thebanik said:
It basically varies from mobo to mobo and BIOS to BIOS. There's nothing you can do to change the amount of volts it drops. Also, when you set it to Auto, was the system overclocked or was it @ stock?
It was @ stock on Auto. If I use Auto voltages on the OC'd system, voltages go waaay... above safe voltages :rofl:

sangram said:
Here goes my contra-view. Race cars don't have air-conditioners, ever wonder why?

There are a few reasons why one would be better served by switching power management off in an overclocked system.

1. Less thermal shock on the chip. Contrary to popular belief, changes in temperature cause more stress on components than absolute temperature itself. And changes in voltage cause changes in temperature. This may not be visible at the software-monitoring level, but it's a definite effect at the die level. A CPU that runs 24x7 with a 50% overclock at 50 degrees should have a longer life than one that does 100% under supercooling and is then brought back to room temp.

2. Less issues with stability. Let's take a CPU that runs at 266x8 at load and 266x5 in idle. This would be a typical case of an Intel CPU out of the box. The VID is 1.25 and 1.05, respectively. Now we're going to raise the FSB to 400x8, for a 66% overclock, and we need to push 1.4V (say) into the chip. So the situation is still happy. Now when power management kicks in, the multi will drop to 5, and the CPU will back off down to 1.05V, as programmed. Unfortunately, the FSB is still 400, so the chip is running at 2GHz, not 1.3GHz as the designer intended.

The fact is that most modern chips can run those kinds of speeds at that kind of voltage, but it's still very far out of spec. Can you/should you? Depends on your chip. Also, Intel CPUs take this kind of abuse, as the CPU tells the board what voltage it needs to operate depending on its speed and workload (which is why you don't see much of a drop in your case). It depends on how well your CPU and board communicate with each other, and that's why there's so much performance difference between some BIOS versions and others (there are more reasons, such as signal levels and timing, but that's another topic).

3. Less strain on the power-regulation circuitry, for reasons of thermal shock as well as the pulse behaviour of capacitors and MOSFETs, especially when pushing extreme voltages and speeds into a CPU via an energy-storing inductor that can dump large pulse currents back into the source.

The higher the quality of the overall system, the less these issues impinge on your experiments.

Who was it that actually ran benchmarks on overclocked systems with and without power management enabled? AT, I think, and I remember which way the results went. I think those were with the Phenoms, but there is a definite negative impact - even if it's only the additional load of one transistor calculating the load on the CPU, why would you want it?

I think thebanik's post is the most accurate (though I don't necessarily agree with the point of view). Nothing is gospel, so don't treat it as such; the only way out is to test the whole system, average three or five runs each time, record the results, and then arrive at conclusions for your board, your CPU and your operating conditions, without expecting that it's universal truth.
Thanks for that insight. Helped :)

@all
So, anyway my final decision was to keep C1E and EIST disabled since the voltage drop observed was too minimal (less than 0.025).

Topic can be closed or pinned xD.
 
sangram said:
1. Less thermal shock on the chip. Contrary to popular belief, changes in temperature cause more stress on components than absolute temperature itself. And changes in voltage cause changes in temperature. This may not be visible at the software-monitoring level, but it's a definite effect at the die level. A CPU that runs 24x7 with a 50% overclock at 50 degrees should have a longer life than one that does 100% under supercooling and is then brought back to room temp.

I have two profiles in my BIOS.
One is the stock one, where every component is at stock frequencies with the proc at 23°C idle.
Another is the overclocked one, where everything is fed more voltage and the proc is at 34°C idle.

I load the second one only when I play games or watch HD, and then revert back for 24x7. Is this bad? I do this to save power, and it saves like 40W. But after reading your post I am concerned whether this will damage my PC...
Please clarify... :)
 
Damage as in instantaneous? Probably not, but it does shorten the life of the CPU. You may not be affected, but if you are, you'll probably know what's causing the issue. In the long run, overclocked rigs draw more power, but that's about it. The life-expectancy drop will be noticed only if you tend to keep PCs for very long (20 years) and have a lot of them. Otherwise, the degradation is too slow to be noticed.

What it does do is hamper max clocks slightly, which is why benchmark runs are usually conducted with PM switched off. In any case, read the text you quoted. Your case qualifies under neither of the two examples, so I wouldn't worry about it.

BTW, that temp reading on the M2A-VM is fake, trust me, I've been through two of those boards. Add 15 degrees for the real temps, especially if you're using SpeedFan.
 
sangram said:
Damage as in instantaneous? Probably not, but it does shorten the life of the CPU. You may not be affected, but if you are, you'll probably know what's causing the issue. In the long run, overclocked rigs draw more power, but that's about it. The life-expectancy drop will be noticed only if you tend to keep PCs for very long (20 years) and have a lot of them. Otherwise, the degradation is too slow to be noticed.

What it does do is hamper max clocks slightly, which is why benchmark runs are usually conducted with PM switched off. In any case, read the text you quoted. Your case qualifies under neither of the two examples, so I wouldn't worry about it.

BTW, that temp reading on the M2A-VM is fake, trust me, I've been through two of those boards. Add 15 degrees for the real temps, especially if you're using SpeedFan.

So, I can keep two profiles... :)

I have a question. The voltage I set in the BIOS is not the one I am seeing in CPU-Z. For example, if I feed 1.425V, I see 1.52V or something in CPU-Z. How do I know which is correct? Temps are as seen in Everest Ultimate. Will they be correct?
 
Neither is correct. The right reading is taken with a voltmeter at the Vreg outputs.

When you set a voltage in the BIOS, the hardware tries to maintain that voltage under all conditions. This is why the reading varies from idle to load, and from what is set in the BIOS. Software readings are an approximation, but if you have to believe something, believe the BIOS.
 
sangram said:
Neither is correct. The right reading is taken with a voltmeter at the Vreg outputs.

When you set a voltage in the BIOS, the hardware tries to maintain that voltage under all conditions. This is why the reading varies from idle to load, and from what is set in the BIOS. Software readings are an approximation, but if you have to believe something, believe the BIOS.

Though I certainly am astounded by your knowledge of electronics, I would mostly disagree with you there. :D I agree that software readings are not always accurate and a voltmeter is the way to go for accurate readings, but setting a value in the BIOS is not the best way to guess the volts it will actually apply. Again, you are making the same mistake of using the theoretical concept and not taking into consideration the practical limitations of the statement, "When you set a voltage in the BIOS, the hardware tries to maintain that voltage under all conditions". Why do you think more serious overclockers go for various mods on their mobos, if that were the case?

@clown_abhi, some BIOSes and mobos have a tendency to overvolt, which is just like vdrop and vdroop but the opposite in some sense. In some cases it's a boon and in some it's a bane.
 
Maybe I need to expand that statement, because that is the very reason that volt mods are required.

Basically, the BIOS tells the regulator chip the voltage it needs to operate the CPU. Before it does that, it requires the user to select 'Auto' (so it reads the value of a CPU register) or a preset, fixed voltage. The regulator checks with the BIOS what the allowable range of voltages for the mobo is, and if it matches, it sends the right duty cycle to the regulators.

It measures the output from the regulators; if it is below the threshold, it increases the duty cycle, keeping the transistors 'on' longer and thus raising the voltage, and if it is above, it reduces the cycle, until the output matches the level the regulator is set to. As the load on the output increases, the duty cycle rises as well to compensate, and this is mostly transparent to the user. One must remember that all modern mobos have PWM regulators, which don't use linear voltage ramps. Any closed feedback system will try to maintain the output voltage at a pre-set level.
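That feedback behaviour can be illustrated with a toy model of an idealized buck regulator; all the constants here (12V input rail, droop term, loop gain) are invented for illustration and don't come from any real VRM datasheet:

```python
V_IN = 12.0  # ATX 12 V rail feeding the CPU VRM (illustrative)

def output_voltage(duty, load_amps):
    # Ideal buck converter: Vout = Vin * duty, minus a crude resistive droop.
    return V_IN * duty - 0.002 * load_amps

def regulate(target, load_amps, steps=200, gain=0.01):
    """Nudge the PWM duty cycle until the measured output matches the target."""
    duty = 0.0
    for _ in range(steps):
        error = target - output_voltage(duty, load_amps)
        duty += gain * error                 # below target: longer 'on' time
        duty = min(max(duty, 0.0), 1.0)      # duty cycle is physically 0..100%
    return output_voltage(duty, load_amps)

# Light and heavy load both settle at the set point; the loop simply
# ends up holding a slightly higher duty cycle under the heavier load.
idle_v = regulate(1.40, load_amps=5)
load_v = regulate(1.40, load_amps=80)
```

The point of the sketch: the output converges on the set point regardless of load, which is exactly the "maintain the voltage under all conditions" behaviour described above, within the regulator's physical limits.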

How does all of this matter? What typically happens in the whole system is that the reading in BIOS is the 'intended' target, and is a fixed value. The one read off software from the main I2C bus (to which the monitoring chip is connected) is reading the voltage from either the regulator output or the transistor output, and thus may be lower or higher than the BIOS 'setting' - and to boot, it is variable. For the 'correct'/'reference'/intended VID, one needs to refer to the BIOS as that is the reference. Actual VID is only measurable physically and not through software or BIOS, which is what I initially meant but didn't want to deep dive into the whole regulation scheme. VID is the actual voltage going into the CPU, not the software's guesstimation of that voltage. You'll notice that the latest and greatest boards actually have physical test points to check voltage - this is a huge step forward as anything apart from a physical reading is actually a shot in the dark.

About voltmods: In a typical mobo regulator setup, the voltmod involves 'fooling' the regulator into thinking it is setting a particular voltage level when in fact it is setting another, which is why it typically involves low-level resistor changes in the feedback loop of the regulator. In open-loop systems, such as the ones still used in graphics cards, the voltmod simply raises the output voltage by changing the reference voltage for the output transistors.
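The divider arithmetic behind that kind of feedback-loop voltmod looks like this; the reference voltage and resistor values below are made up for illustration (real boards set the reference via a VID DAC rather than a fixed Vref):

```python
def vout(vref, r_top, r_bottom):
    # The regulator servos its feedback pin to vref, so the output settles at:
    return vref * (1 + r_top / r_bottom)

def parallel(r1, r2):
    # Two resistors in parallel.
    return r1 * r2 / (r1 + r2)

VREF, R_TOP, R_BOTTOM = 0.8, 10_000, 13_333   # values chosen to give ~1.40 V stock

stock = vout(VREF, R_TOP, R_BOTTOM)
# Soldering a 100k resistor across the bottom leg makes the feedback pin
# see a smaller fraction of the output, so the loop drives Vout higher.
modded = vout(VREF, R_TOP, parallel(R_BOTTOM, 100_000))   # ~1.48 V
```

The regulator never "knows" it has been modded; it still servos its feedback pin to the same reference, which is why the trick works within the output stage's limits.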

Of course, you hit a wall when the regulator is unable to increase the duty cycle and gain any improvement in output voltage. A voltmod may or may not cure this, depending on how the mobo is set up and on the capabilities of the output transistors. Typical regulators used in mobos (a few that I have) can hit 2.8-3.14 volts without any issues, but are typically clamped down to the 1.7V range using feedback resistors. Some of the newer regulators actually clamp down to the 1.5V range, and for newer CPUs the shift to these kinds of regulators will limit the voltages you can apply to them.

Actually it's a lot more complicated than this once you bring in Vdroop and termination impedances, but I'll leave that for another discussion.
 