Here goes my contra-view. Race cars don't have air-conditioners, ever wonder why?
There are a few reasons why one would be better served by switching power management off in an overclocked system.
1. Less thermal shock on the chip. Contrary to popular belief, changes in temperature cause more stress on components than absolute temperature itself, and changes in voltage cause changes in temperature. This may not be visible at the software monitoring level, but it's a definite effect at the die level. A CPU that runs 24x7 with a 50% overclock at 50 degrees should have a longer life than one that runs a 100% overclock under supercooling and is then brought back to room temperature.
2. Fewer issues with stability. Let's take a CPU that runs at 266x8 at load and 266x5 at idle; this would be a typical case of an Intel CPU out of the box. The VID is 1.25V and 1.05V, respectively. Now we're going to raise the FSB to 400, giving 400x8 for roughly a 50% overclock, and we need to push (say) 1.4V into the chip. So far the situation is still happy. Now when power management kicks in, the multi will drop to 5 and the CPU will back off down to 1.05V, as programmed. Unfortunately, the FSB is still 400, so the chip is running at 2GHz, not the 1.3GHz the designer intended (the arithmetic is worked out in the sketch after this list).
The fact is that most modern chips can run those kinds of speeds at that kind of voltage, but it's still very far out of spec. Can you? Should you? That depends on your chip. Also, Intel CPUs can take this kind of abuse, as the CPU tells the board what voltage it needs depending on its speed and workload (which is why you don't see much of a drop in your case). It depends on how well your CPU and board communicate with each other, which is also why there's so much performance difference between some BIOS versions and others (there are more reasons, such as signal levels and timing, but that's another topic).
3. Less strain on the power regulation circuitry, both from thermal shock and from the pulse behaviour of the capacitors and MOSFETs, especially when pushing extreme voltages and speeds into a CPU through an energy-storing inductor that can dump large pulse currents back into the source.
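To make the numbers in point 2 concrete, here's a quick sketch (Python). The FSB values, multipliers and voltages are the ones from the example above, used purely for illustration, not measurements from any particular chip:

```python
# Effective CPU clock = FSB (MHz) x multiplier.
# Stock: 266 MHz FSB, multi 8 at load, multi 5 at idle.
# Overclocked: FSB raised to 400 MHz; power management still drops
# only the multiplier, so the idle clock stays far above stock idle.

def effective_clock_mhz(fsb_mhz: float, multiplier: float) -> float:
    """Core clock in MHz for a given FSB and multiplier."""
    return fsb_mhz * multiplier

cases = {
    "stock, load (266 x 8)": effective_clock_mhz(266, 8),  # ~2128 MHz at 1.25 V VID
    "stock, idle (266 x 5)": effective_clock_mhz(266, 5),  # ~1330 MHz at 1.05 V VID
    "OC,    load (400 x 8)": effective_clock_mhz(400, 8),  # 3200 MHz at ~1.4 V
    "OC,    idle (400 x 5)": effective_clock_mhz(400, 5),  # 2000 MHz, still at the 1.05 V idle VID
}

for label, mhz in cases.items():
    print(f"{label}: {mhz / 1000:.2f} GHz")
```

The point is the last line of the table: at idle under the overclock, the chip sits at 2GHz on a voltage that was only ever specified for 1.33GHz, which is exactly the stability risk described above.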
The higher the quality of the overall system, the less these issues impinge on your experiments.
Who was it that actually ran benchmarks on overclocked systems with and without power management enabled? AT, I think, and I remember which way the results went. I think those were with the Phenoms, but there is a definite negative impact - even if it's only the additional load of one transistor calculating the load on the CPU, why would you want it?
I think thebanik's post is the most accurate (though I don't necessarily agree with the point of view). Nothing is gospel, so don't treat it as such. The only way out is to test the whole system: average three or five runs each time, record the results, and then arrive at conclusions for your board, your CPU and your operating conditions, without expecting them to be universal truth.
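If you want to do that testing in a structured way, something like the sketch below will do (Python; `./my_benchmark` is a hypothetical stand-in for whatever benchmark you actually launch, and the run count is the three-to-five suggested above):

```python
import statistics
import subprocess
import time

def run_benchmark(command: list[str]) -> float:
    """Run one benchmark pass and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(command, check=True)
    return time.perf_counter() - start

def average_runs(command: list[str], runs: int = 5) -> tuple[float, float]:
    """Average several runs and report the spread, so one-off spikes
    (background tasks, thermal throttling) don't skew the conclusion."""
    times = [run_benchmark(command) for _ in range(runs)]
    return statistics.mean(times), statistics.stdev(times)

if __name__ == "__main__":
    # Hypothetical benchmark command; substitute your own executable.
    mean_s, stdev_s = average_runs(["./my_benchmark"], runs=5)
    print(f"mean {mean_s:.2f}s, stdev {stdev_s:.2f}s over 5 runs")
```

Run it once with power management enabled and once with it disabled, on the same board and BIOS, and compare the averages rather than single runs.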