|22-02-2012, 07:30 PM||#145|
Join Date: Nov 18 2009
Over the weekend I did some further tests with the board.
I followed both recommendations - from Intel and from you guys. This configuration allows the X48 to run with a quad core, both PCI-E slots occupied and 8GB of RAM without reboots or memory errors.
This confirmed to me that the VTT and memory voltages have to match for the system to run properly, but there is a third factor, apart from FSB and host, which requires the memory to run at a higher voltage.
This setting allows the system to start up with much less memory voltage; however, any system load caused a reboot sooner or later.
Over the last day the skew rate configuration of 6-8-0-0 has been confirmed as the most stable at VTT 1.2v. VTT 1.20v and 1.265v are now being tested.
Both values are a halfway compromise. With the memory at 1.9v the system was stable, but with a risk of instant memory death. With the NB at 1.45v and the memory at 1.65v the system was unstable due to the memory, and the NB was boiling. The values found now seem promising, though I had to add a fan back to the NB cooler.
Overall both values are a bit high, but now I am sure it cannot be fixed any other way.
A bit of testing with clocks of 333, 366 and other lower FSB values shows that a fully loaded system at stock voltages can only be stable at 333MHz FSB. The only problematic part of the system at these frequencies is the memory controller: when fully loaded at 1600MHz it needs a lot of voltage, which in turn makes it run hot.
The Intel specification is right about this, and the fact that I managed to run two Nanya memory sticks at 1600MHz and 1.5v is a bit miraculous. Four sticks at 1600MHz should not be possible even at 1.9v.
The current result still has to be tested against summer heat, but for now it seems promising (the NB cooler is just a bit noisy).
I also added a 500W Seasonic PSU. The whole system shows great voltage stability. Its CPU power rail has a good connector without any visible "tricks". The CPU now has the most reliable power I have ever installed. Reported voltage is 1.32v under load, 1.34v idle.
Finally, after years, the system build is reaching its end.
Last edited by Offler; 24-02-2012 at 07:43 PM.
|24-02-2012, 09:01 PM||#146|
Join Date: Nov 18 2009
Finally, after months, the system build is reaching its end.
Complete hardware description of the X48 chipset on this specific board:
Manufacturing process: 90nm
NB core (400MHz)
Memory controller hub: up to 1600MHz, 2 DDR3 channels
Host interface (LGA 775 socket): up to Core 2 Quad
32 PCI-E 2.0 lanes
The main source of any issues with this board is its DDR3 memory controller. The memory voltage has to be adjusted with reference to VTT, and finding the correct value requires hours of testing. When all banks are occupied, all on-chip memory lines are active, which is the cause of the higher temperature. 1600MHz CL8 is worth it though, especially when the read and write rates measured by Everest cross the 8GB/s mark.
Any "random reboots" on this board are related to FSB communication, which depends on three factors - memory and memory controller voltage, VTT voltage and NB core voltage. These three parts form the link which transports data to and from memory, no matter whether it comes from the CPU host, PCI-E or another interface.
Therefore even hours of memory testing with LinX are useless when the memory frequency is higher than 800MHz at 400MHz FSB - half of the memory bandwidth is actually left free for PCI-E. Running multiple HD video streams caused high memory load and high PCI-E traffic, so those tests must run at the same time.
The chipset really supports host-independent DMA access to memory for PCI-E devices. Effectively, in the 1600MHz dual-channel configuration, PCI-E devices can access memory without affecting the CPU.
As a bonus, here is some information from Intel's ARK. The Xeon X3360 which I own does not support FSB parity. That's strange, because the Q9550 has this feature. Effectively I have a 64-bit wide host bus, while the Q9550 has only a 62-bit + 2-bit ECC wide bus.
CPU performance is however limited by the high memory latency, regardless of cache size. Even so, with 8GB of memory I was able to complete my standard 771MB LinX problem size in less than 14 seconds. The same test at CL7 was done in 13.8 seconds...
CPU: Xeon X3360 - 3400MHz 1.3v (quad core) (OC potential up to 3600MHz - CPU limited)
NB: X48 - 400MHz FSB 1.39v (OC potential up to 500MHz with Core 2 Duo, 450MHz with Core 2 Quad - chipset limited)
Mem: Adata 1866+ @ 1600MHz - 8-6-6-21 at 1.77v (chipset-limited voltage)
PCIE 2.0 16x (bottom slot): ATI HD 5850
PCIE 2.0 16x (upper slot): Avermedia Trinity (TV tuner); still reserved for a PCI-E card able to read/write approx. 8GB/s
PCIE 1.1 4x: OCZ RevoDrive X2 100GB (750MB/s)
PCI 1: Intel PRO GT 1Gbit Ethernet (up to 125MB/s)
PCI 2: Creative X-Fi Fatal1ty (unknown)
PCI 3: unoccupied, reserved for SCSI adapter (up to 160MB/s)
PSU: Seasonic 500W
Memory latencies might be re-tested, but the read/write rates and the desired capacity have been reached. The graphics card might be upgraded in the next 2 years.
The goal of building a workhorse machine based on the Core 2 Quad architecture, with independent PCI-E and host access to memory, has been reached. Newer systems on the Intel i7 architecture do not offer this, because the CPU and PCI-E always have to share the available memory resources - though CPU performance is much better there, of course.
8.0GB/s u/d ______1.0GB/s u/d_____ 1.0GB/s u/d
PCI-E ---------- NB ---------- DMI ---------- OCZ RevoDrive X2 (750MB/s)
CPU ------------ NB ===== Memory
So if the DMI starts to write data at a rate of 1GB/s and PCI-E starts to write at 8GB/s, there is still enough bandwidth for the CPU to write data to memory at the same time, at the maximum speed of each interface - so the chipset always works at 100% of its potential for every connected device. Therefore this system does not have any bottlenecks, and most frequencies at the system level are synced at 400MHz.
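As a rough sanity check of the "no bottleneck" claim, here is my own back-of-the-envelope budget (the 25.6GB/s figure is an assumption: dual-channel DDR3-1600 at a theoretical 2 x 12.8GB/s on the NB-memory link):

```python
# Rough bandwidth budget for the hub diagram above (GB/s, one direction).
# Assumption: dual-channel DDR3-1600 gives 2 x 12.8 GB/s at the NB-memory link.
memory_link = 25.6
consumers = {"PCI-E": 8.0, "DMI": 1.0, "CPU host (FSB)": 12.8}

demand = sum(consumers.values())
headroom = memory_link - demand
print(f"aggregate demand: {demand} GB/s of {memory_link} GB/s")
print(f"headroom: {headroom:.1f} GB/s")  # positive -> no bottleneck at the hub
```

These are theoretical peaks, of course - the measured numbers later in the thread come out lower.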
The slowest part of the system is still its hard drive. The system could support one up to 10x faster.
Specific system behaviour:
Ability to run multiple applications without them affecting each other, for example HD playback + a 3D game - not completed yet; AMD UVD underclocks the GPU when playing a movie with hardware-accelerated decoding.
Faster application loading - completed.
Lower memory utilization (not capacity, but time of "usage") - completed and tested via LinX + HD video.
PS: The maximum chipset temperature under the heaviest load was 63 degrees Celsius. That's just to show how heavy the testing was that this chipset went through here. Its current cooling (heatpipe + 9cm fan) keeps it at 48-53 degrees, but I haven't tried the heaviest load with it yet.
Last edited by Offler; 24-02-2012 at 09:38 PM.
|27-02-2012, 10:05 AM||#148|
Join Date: Nov 18 2009
Overall the system is still very sensitive to any BIOS changes - it's running on the edge of stability. Which reminds me to save my BIOS settings ASAP: today I cleared them by accident and it took 6 hours to restore the previous stable state. Madness.
BIOS values are sometimes not displayed correctly, even latencies such as tWTR. The automatic settings don't work well either, so all latency settings have to be triple-checked with Everest and other software.
Part of the board's mystery was also the fact that tWTR was set to 1 by the automatics while the SPD on the memory expected 4. Tadaa - now I know why the memory and NB need so many volts. Such BIOS failures definitely made this board almost unclockable...
The board is not as picky as it appears; it's just that this and similar issues were never diagnosed.
Edit: Tightening the latencies works well for increasing total CPU performance. The difference between automatic and manually tightened latencies is 2 to 4 percent in raw performance (LinX) - that's how much one extra cycle can do. The same CPU with an integrated memory controller, such as an i7 with latency below 40ns, can have up to 20% better measurable performance just from a better memory hub.
That's why synchronous clocking at 400/1600 got me better results at the selected frequency; the CPU in such a configuration can use a single channel at 100%. But 75% efficiency is still not much - it's like every fourth bit being lost in communication. And this is caused purely by the high memory latencies.
Edit 2: So the last stage requires a few things before it's finished.
FSB 400MHz - reached
CPU frequency 3400MHz - reached
Mem frequency 1600MHz - reached
Mem latency 7-7-7-21 CR2 - unreachable; 8-6-6-21 CR1 current (measured performance almost the same)
Mem read 9500+ MB/s - reached at tRD 7 (stability has to be checked)
Mem write 8500+ MB/s - reached
Latency 53ns or less - reached at tRD 7 (stability has to be checked)
NB voltage 1.265v - unreachable; 1.456v current
Mem voltage 1.712v - unreachable; 1.79v current
LinX performance at problem size 771MB, 47.9 GFLOPS peak or more - reached at tRD 7 (stability has to be checked)
After that I'll try setting tRD to 7 and see if it can run like that for a longer time. A slightly lower memory voltage and lower latencies will also be re-tested. If it crashes or shows errors I will revert...
LinX: problem size 771MB (LDA 9992), 2000 passes.
ProgDVB: running a video stream for at least 3 hours, with LinX running in the background at "below normal" priority.
OCCT graphics test with LinX running at below normal for 1 hour.
Playing Civilization 4 for at least 6 hours (this game seems to be the best at crashing due to a bad configuration).
PCI-E speed test.
PCI-E speed test + LinX running at below normal (to verify whether PCI-E can work with memory independently of CPU load).
Everest memory test.
LinX + the PCIe speed test is great for checking stability as a 3-minute torture test. I was able to detect an overshoot with it: when the torture from PCI-E ended while the CPU torture was still in progress, the LinX test failed. So the signal strength and voltage for the memory were a bit too high. Well, let's see...
Last edited by Offler; 28-02-2012 at 06:08 AM.
|29-02-2012, 08:20 PM||#149|
Join Date: Nov 18 2009
So yesterday I performed a new level of torture testing using LinX and PCIeSpeedTest combined.
The PCIe speed test uses only the first CPU core and one GPU core. So set LinX to use 3 threads and set its affinity to all cores except Core 0. Run LinX and the PCI-E test simultaneously and wait until the PCIe test finishes.
If LinX shows no errors, your system is most likely stable.
The combination of these tests can give a very accurate report on system stability in 3 minutes; similar torture tests would have to run for hours for the same result.
This helped me improve system stability and make quick changes in the BIOS. The results are also more accurate.
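For anyone scripting this, the "all cores except Core 0" affinity can be expressed as a bitmask. A small sketch - the `start /affinity` command mentioned in the comment is the usual Windows way to apply such a mask, but the exact LinX invocation is up to you:

```python
# Build a CPU affinity bitmask from a list of core indices (core 0 = bit 0).
def affinity_mask(cores):
    mask = 0
    for c in cores:
        mask |= 1 << c
    return mask

# LinX gets 3 threads on cores 1-3; Core 0 stays free for the PCIe test.
mask = affinity_mask([1, 2, 3])
print(hex(mask))  # 0xe -> e.g. `start /affinity E linx.exe` on Windows
```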
Independent PCI-E communication with memory has been confirmed.
PCI-E and the CPU really can communicate with memory in a way where they do not affect each other. PCI-E generated data at 6.7GB/s while the CPU generated more data from LinX in an aggregate stream of 13.9GB/s. Both LinX and the PCI-E test ran smoothly and unaffected with the affinity mentioned above. The read values of the PCIe speed test and the GFLOPS/time values did not change between running the torture tests standalone and combined.
Other side findings:
You surely know that a Core 2 Quad splits its host connection between two dies, so the theoretical bandwidth of 12.8GB/s is split into two 6.4GB/s links. For reference:
Theoretical vs real bandwidths:
PCI-E: 8000MB/s theoretical, 6700MB/s real (PCI-E Speed Test)
Host (total): 12800MB/s theoretical, 9600MB/s real (Everest)
Host /2: 6400MB/s theoretical, 5400MB/s real (PCI-E speed test)
Efficiency of PCI-E: 83%
Efficiency of CPU-to-PCI-E communication: 67.5%
According to Everest the CPU can read from memory at 9600MB/s, which is 75% of the theoretical bandwidth. Since the CPU has two physical dies, the communication has to be split; applying a similar efficiency percentage to the 6400MB/s of theoretical per-die bandwidth gives roughly 5400MB/s.
Most PCI-E devices ever tested on PCI-E 2.0 16x have given real measurements of around 6700MB/s, so this value seems to be the real bandwidth of PCI-E 2.0. Higher values have not been measured as far as I know.
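The efficiency figures above come straight from dividing measured by theoretical bandwidth; a quick check, using the numbers from the table:

```python
# Measured vs theoretical bandwidth, figures from the table above (MB/s).
pcie_theoretical = 8000   # PCI-E 2.0 x16, one direction
pcie_measured    = 6700   # PCI-E Speed Test
host_total       = 9600   # Everest, of 12800 theoretical
host_per_die     = 5400   # per-die figure from the PCI-E speed test

print(f"PCI-E efficiency: {pcie_measured / pcie_theoretical:.1%}")  # ~83%
print(f"Host efficiency:  {host_total / 12800:.1%}")                # 75%
print(f"CPU-to-PCI-E:     {host_per_die / pcie_theoretical:.1%}")   # 67.5%
```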
I compared the E8400 with the Q9550 at 400MHz FSB.
The dual core can use the full PCI-E bandwidth of 6700MB/s, while approximately 3000MB/s of CPU bandwidth goes unused - PCI-E is the limit there.
The quad core can access only 5400MB/s - the CPU is the limit.
Considering the up/down construction of PCI-E and the dual-die nature of the CPU connection, the quad core can communicate with the GPU at 5400MB/s up and 5400MB/s down at the same time. The dual core can communicate at 6700MB/s up and approximately 3000MB/s down.
FPS-wise, the dual core at this FSB frequency has the better maximum performance. The quad core is able to distribute the resources more efficiently: it's harder to overload a single link or CPU cache than a dual link and cache, and concurrent utilization of resources has half the impact.
To raise the CPU host efficiency versus the GPU, the FSB would have to be 500MHz, which is not possible.
The testing method will be used for further system tuning. Skyrim crashed after 5 hours of running - the system rebooted. This time a voltage overshoot after the end of torture was not confirmed, so it's entirely an issue of timings and voltages.
The CPU and memory in this configuration are set up almost as expected; it's hard to cause any kind of bandwidth-based overload. The problem in CPU/PCI-E communication was expected, but its impact on total performance is not as bad as feared: the total impact on communication efficiency is 15%, and since the GPU is expected to fetch data from memory without any interference from the host, it is unlikely to affect performance.
The system should therefore show quite stable performance no matter how resource-hungry the application. On the other hand, performance peaks are not expected either.
Yesterday I tested the RAM voltage with this method. 1.82v was far too much - the system rebooted at the end of the test. 1.62v was far too low. 1.72v is the value where the memory and MCH should work fine.
Now I have to re-test vNB, VTT, the other voltages and each latency setting with the same method.
Currently the system seems to run well, but LinX still measures errors and some reboots have occurred...
I tried setting safer CAS latencies of 8-7-7-21 CR1. That worked well. Since no problem was initially detected with this setting, I tried different NB voltages (memory stays at 1.74v).
1.33v was marked as the last bootable voltage; 1.456v is still considered quite stable.
For the first time I also tried changing the clock generator voltage, to 3.60v. Suddenly the system was unable to boot until I set vNB to 1.41v or more. That's strange, because the stronger clock signal did not help achieve better stability at lower NB voltages - it did the exact opposite. It also means the signal between the CPU and NB is overshooting somehow. Raising the NB voltage in this case means it simply has to be able to read a signal of that strength; or it may indicate that the data signal coming out of the CPU is too strong.
On the other hand, it may indicate that 4 cores, 4 DIMMs and 2 PCI-E cards are draining all the power from the northbridge.
In any case I have to test more CPU and CPU VTT voltages than I expected. More or less I feel like I'm back at the beginning of the tuning, two years ago. But back then it seemed it was the northbridge killing the signal; now it seems there is another source of interference.
OK, I found one of the most stable ways to set up power for the CPU:
1. Set CPU VID to 1.3v (for example). Set Special Add manually to 100.23%.
2. Set VTT to exactly 1.300v as well.
3. Disable Voltage Droop Control.
Now, according to DFI Smart Guardian, the difference between the CPU core voltage and CPU VTT was never more than 0.05v, and the CPU voltage was never lower than VTT.
These two factors stabilized the system like never before. Thanks to this I was able to lower the CPU/VTT voltage to 1.25v without any impact on stability.
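The two conditions Smart Guardian confirmed can be written down as a simple rule of thumb (my own formulation for illustration, not anything from the BIOS):

```python
# Rule of thumb from the post: Vcore should never drop below VTT,
# and the two should never differ by more than 0.05v.
def cpu_power_ok(vcore, vtt):
    return vcore >= vtt and (vcore - vtt) <= 0.05 + 1e-9

print(cpu_power_ok(1.32, 1.30))  # within 0.05v and above VTT
print(cpu_power_ok(1.25, 1.30))  # Vcore below VTT -> bad
```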
However, the northbridge still requires its usual 1.456v for stable running. The ability to boot has improved - it can now boot from 1.33v.
I applied old knowledge from the Pentium III days to the X48: once the lowest voltage for a stable boot has been found, add 0.05v for stable running, the same again for high load, and the same again for high-temperature situations - a total of +0.15v over the base tested voltage. This is currently the highest NB voltage I have ever set. The first tests look good; we'll see tomorrow.
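The margin rule can be sketched like this (a toy formulation of the old Pentium III rule, not a tool I actually used):

```python
STEP = 0.05  # one margin step, in volts

def recommended_voltage(lowest_boot_v, margins=("stable run", "high load", "high temp")):
    """Lowest voltage that boots, plus one 0.05v step per situation."""
    return round(lowest_boot_v + STEP * len(margins), 3)

# With the NB booting from 1.33v, the rule suggests:
print(recommended_voltage(1.33))
```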
It seems fine now. Another problem was identified: a voltage overshoot when PCI-E activity ceases.
Last edited by Offler; 05-03-2012 at 03:01 AM.
|05-03-2012, 06:55 PM||#150|
Join Date: Nov 18 2009
So let's do the final checks of the "project". The goal was to build a PC 10x stronger than the one I had before.
CPU: Pentium III-S 1400 @ 1638MHz vs Xeon X3360 2830 @ 3400MHz
465 GFLOPS vs 1280 GFLOPS on a single core; x3.65 multicore efficiency = 4672
FSB: 156MHz SDR vs 400MHz QDR
1248MB/s vs 12800MB/s
Memory: 156MHz SDR vs 800MHz DDR dual channel
1248MB/s vs 25600MB/s
Real memory reads: 1202MB/s vs 9500MB/s
Not completed - see notes
Graphics slot bandwidth: AGP 4x vs PCI-E 2.0 16x
1064MB/s vs 8000MB/s
Completed - see notes
Hard disk read rates: SCSI Seagate Cheetah 15k RPM vs OCZ RevoDrive X2
70MB/s vs 690MB/s
Completed - see notes
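To see how close each part got to the 10x goal, here are the old-vs-new ratios from the figures above:

```python
# Old-vs-new improvement factors from the comparison above.
pairs = {
    "single core (LinX)": (465, 1280),
    "FSB bandwidth":      (1248, 12800),
    "memory bandwidth":   (1248, 25600),
    "real memory reads":  (1202, 9500),
    "graphics slot":      (1064, 8000),
    "hard disk reads":    (70, 690),
}
for name, (old, new) in pairs.items():
    print(f"{name}: {new / old:.1f}x")
```

The real memory reads fall short of 10x, which matches the "not completed" mark above.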
Single-core performance should be better, but overall CPU cooperation hit the total limit. Real memory rates are not as efficient as I expected from synchronized overclocking; this seems to be mainly a problem of CAS latencies. The old SDRs I had matched the chipset very well, resulting in 98% read efficiency, while the new system barely hits 75%. As for comparing AGP 4x and PCI-E 2.0 16x: since I was able to run the old system with AGP 2x without any impact on system performance, it doesn't matter much. And although the old drive was very, very fast, it was still a classic HDD, so the new RevoDrive X2 SSD still wins.
Conclusion on the DFI X48-T3RS after 2 years of usage:
CPU VRM - excellent. The 8 Volterra chips do their job as expected; the CPU voltage is stable even without CPU Droop Control.
Memory slots - terrible. The connection between the board and the DIMM modules is bad.
PCI-E/PCI slots - fine. There is nothing to complain about.
X48 chipset - on this board it is able to fulfil its specifications, with one exception: 8GB of RAM at 1600MHz should be possible even at 1.265v vNB.
Onboard accessories - the JMicron ATA controller is junk. The other devices seem fine.
It is true that this board does not have big potential with Core 2 Quad processors. Testing this week shows that when most of the slots on the board are occupied, the northbridge suffers from frequency and voltage overshoots. Increasing the signal strength causes it to fail, and the only cure for this is a higher vNB. And even though it sounds strange, this is possibly caused by the high quality of the components used in manufacturing.
The heat the X48 produces is incredible. In most cases it runs much hotter than the CPU.
After numerous torture tests I was unable to destroy the CPU or memory. The presumably dead WD disk drive has to be re-tested. If the Elpida MNHs survived 1.9v and LinX testing, I can expect quite a long life from this system.
Judging by its real performance, this system works better as a workhorse with some overclocking potential than as a pure-bred overclocking machine. I just wonder why the X48 was never shrunk down to 65nm... Though if the shrink had had power supply issues, it's quite certain that high voltages could have killed it more easily than this 90nm furnace.
Tests with the clock generator and PLL voltage show why the CPU could not be clocked higher. The signalling from the CPU to the NB was quite strong even at default. Higher clock generator and PLL voltages strengthened the signalling further and decreased system stability... In combination with the digital PWM of the CPU, the northbridge was unable to handle such a strong signal under the heavy load of 2 separate CPU dies and 4 DIMM modules. The only correction - a higher vNB - was, after all, the limiting factor.
This kind of power management was simply too good for the northbridge. When the problems with the DDR3 implementation piled on top of this, the board was doomed (in the eyes of many overclockers). This is one of the few boards which can suffer from component overvoltage: high voltage is not the solution, the right voltage is.
Mostly I had to test the system like this: lower the voltage until boot fails, then raise it until boot fails again - the right voltage is in the middle. This takes a lot of time, or a really good overclocking method (and even mine is not as good as I wanted).
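The method can be sketched as a simple scan. This is a toy model: `boots_ok` stands in for the manual boot test at each voltage, which in reality takes minutes per try:

```python
def find_stable_window(vmin, vmax, boots_ok, step=0.01):
    """Scan [vmin, vmax] in `step` increments; return the midpoint
    of the range of voltages that boot, or None if nothing boots."""
    good = []
    v = vmin
    while v <= vmax + 1e-9:
        if boots_ok(round(v, 3)):
            good.append(round(v, 3))
        v += step
    if not good:
        return None
    return round((good[0] + good[-1]) / 2, 3)

# Toy model: pretend everything between 1.33v and 1.55v boots.
mid = find_stable_window(1.20, 1.70, lambda v: 1.33 <= v <= 1.55)
print(mid)
```

A real bisection would need fewer tries, but with a reboot in the loop a coarse scan around the suspected window is what you end up doing anyway.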
When I replaced the PSU last time I completely forgot to connect the additional power for PCI-E. I will test it today, though I fear it won't have much impact on the system; still, I have to take into account that those power lines were designed for multiple graphics cards. Both PCI-E slots are occupied and only one of them holds a graphics card, but this possibility has to be checked.
The cables are connected now. No change observed. Everything that happened afterwards was related to some BIOS settings - some of them were not saved correctly, and when they were reloaded, some values were different.
Last edited by Offler; 06-03-2012 at 10:48 PM.
|08-03-2012, 08:19 AM||#151|
Join Date: Nov 18 2009
Finally rock-solid stable. A little failure and a little victory.
CAS 7 with Performance Level 7 is not possible, and the board has to run at 1.444v NB with all 4 DIMM slots occupied. I wanted better numbers...
The memory still runs at Command Rate 1. Even though the larger number of chips connected to the memory controller needs more energy, effectively draining the MCH, 4 DIMMs have slightly better LinX performance than 2 DIMM modules - apparently due to the number of chips. This, combined with the better command rate, means the overall memory performance is almost the same as CAS 7 with CR2.
I also attached a screenshot - not to show great performance (I have seen far better overclocking) but to show that DDR3 at 1600MHz with 4x2GB on an X48, running 24/7, is possible.
Now I will just make some minor adjustments to the voltages and latencies...
|16-03-2012, 10:42 AM||#152|
Join Date: Mar 16 2012
Hey Offler... I have followed this thread for a long, long time ^^
But now, at the end of your journey ^^ I really can't get the settings right...
As you can see I have only just registered here - I didn't do it earlier because of my bad English.
Could you please write down the BIOS settings you are using right now for
400 FSB on a quad core and 1600MHz RAM?
You have changed your settings quite often and now I'm confused...
I hope you will read this, and thanks for your great testing journey.
Edit: (my system-specs and BIOS Template)
DFI LT X48-t3rs
Q9550 E0 @ 3400
G.Skill ECO 4GB Kit 1,35v @ 1,65v
Template: (just the important settings)
CPU Clock Ratio: 8.5x
CPU N/2 Ratio: Enabled
CPU Clock: 400
DRAM Speed: 400/1600 - Target DRAM Speed: DDR3-1600
CPU VID Control: AUTO (1.2950v)
CPU VID Special Add Limit: Enabled
CPU VID Special Add: AUTO
DRAM Voltage Control: 1.650v
SB Core/CPU PLL Voltage: 1.51
NB Core Voltage: 1.550
CPU VTT Voltage: 1.250
Vcore Droop Control: Enabled
Clockgen Voltage Control: 3.45v
GTL+ Buffers Strength: Strong
Host Slew Rate: Weak
GTL REF Voltage Control: Disable
x CPU GTL 1/2 REF Volt: 113
x CPU GTL 0/3 REF Volt: 100
x North Bridge GTL REF Volt: 100
DRAM Timing:
- DRAM CLK Driving Strength: Level 3
- DRAM DATA Driving Strength: Level 8
- Ch1 DLL Default Skew Model: Model 6
- Ch2 DLL Default Skew Model: Model 6
- Enhance Data Transmitting: FAST
- Enhance Addressing: NORMAL
- T2 Dispatch: Disabled
Common CMD to CS Timing: 2N
CAS Latency Time (tCL): 7 (XMP)
RAS# to CAS# Delay (tRCD): 8 (XMP)
RAS# Precharge (tRP): 7 (XMP)
Precharge Delay (tRAS): 24 (XMP)
All Precharge to Act: AUTO
REF to ACT Delay (tRFC): AUTO
Performance LVL (Read Delay) (tRD): 8
XMP Support: Profile 1 (this is the 1600 one for the G.Skill ECO DIMMs)
I'd also like your exact ones, just for testing purposes.
Last edited by fluXX; 16-03-2012 at 01:03 PM. Reason: Post my current settings.