QUESTION: Good afternoon,

As you have given me sound advice in the past, I have a couple more questions I'm hoping you can help me with, so here goes.

First of all, PC building and repairing is my hobby, but as I'm retired, budget is everything, so I've built my various PCs bit by bit over time, and usually by the time I've got one together for myself a newer platform has already arrived. I think my current specs are more than adequate for my needs, and I don't believe Skylake would be a huge performance leap over what I have already. I've only recently (this year) tried my hand at games (it used to be just Age of Empires, but now I'm giving Sniper Elite III and Far Cry a go). I had a P8Z77-V LX and an i5 2500k, which I've since sold to a friend, and after much research my current specs are:

Asus Z97-A.
i5 4690k.
Corsair H55 CPU water cooler.
16GB (4x4GB) PC3-10700 DDR3, 9-9-9-24, 1.5V (recent upgrade).
120GB SanDisk SSD - for Win 7 Ult 32-bit, as I can't configure my old webcam and scanner to work on 64-bit.
480GB Crucial BX200 SSD - for Win 7 Ult 64-bit, on which I attempt to play my games (recent purchase).
1TB Western Digital HDD - for storage.
Optiarc DVD-RW AD-7203S.
Samsung T24D390 TV/monitor, Full HD 1920x1080 @ 60Hz.
and lastly
4GB ATI/AMD Radeon R9 290 Double Dissipation (XFX Pine Group) (purchased online about 2 months ago to upgrade from a 2GB 7870).

As I write, Speccy shows my idle temps (°C) as CPU 23, mobo 28, GPU 36, and drives 24, 26 and 29.

I have never used AMD for my CPU, only Intel, and I can't foresee that changing; having tried both Radeon and Nvidia for graphics, I feel more comfortable with Radeon, despite opinions that Nvidia is better and has better driver support. Also, I have never overclocked my CPU or GPU, as it looks a bit intricate to me and I can't afford to mess up and maybe kill a component.

First question: Graphics setup:

Is this idle temp of 36/37 degrees satisfactory for this card? I can tell on examination that this card has never been tampered with, and I was wondering if perhaps I need to go in and apply a fresh coat of thermal paste. The fans and fins look spotless - no dust/hair/fluff.

When I originally installed the card and tried Sniper Elite III, I was getting frame rates in the 200-400 range (MSI Afterburner/RivaTuner), but my CPU was hitting 60-70 and sometimes above that, and the GPU was heading toward 100 and getting fairly loud, so I got worried and shut the game off.
I e-mailed the seller of the card, who advised that as my monitor is only 1920x1080 with only a 60Hz refresh rate, I should go into the Radeon global settings and limit the frame rate to 60. This I did, and now my CPU runs around 40-45 and the GPU generally never exceeds 70 when in Sniper. Is this OK?

When I read about PCs and gaming I always see people overclocking their CPUs and GPUs to try and get the last GHz out of them without crashing. But what's the point if, without me even doing any of that, my CPU and GPU looked as if they were going to explode, and I end up actually having to limit the frame rate just to keep temps down? Is there a need for larger frame rates? The guy who sold me the card says the human eye can't even detect them anyway - any truth to this? By the way, he also told me that if I ever swap for an Nvidia card, to go into its 3D settings and set it to Adaptive to keep the frame rate in check.

Apart from this global frame rate limit in the Radeon settings, there are so many other settings with names I can barely pronounce - should I tweak any of these, or does the Radeon card instinctively know what to preset for each game I attempt?
The only other thing I changed was because a message came up on screen saying that I should switch 'Virtual Super Resolution' in the display panel to 'ON', which I did, but I didn't notice anything different.

Second question: Graphics card upgrade:

A good friend is buying the GTX 1070 and has offered me (for 150 - which he says is reasonable) his older card, a Gigabyte Windforce 780 Ti, and while it apparently only has 3GB GDDR5, he claims that all round it's a better card than my current 4GB R9 290. Either way, I'd have to sell my card first to pay for it, but he'll hold on to it if I want it. Should I tell him to go ahead and sell it, or should I take it off his hands? My only other alternative would be the new Radeon RX 480, which I believe has performance equalling if not surpassing the current GTX 970 (which by all accounts is an excellent card).

On reading Micro Mart I have come across two issues. Perhaps I could get your opinion on them, as it was a bit too techy for me to fully grasp. The first is a simple one: it's a reference card, and by all accounts the fan is bad and very noisy, so the best thing to do would be to wait for the custom cards to be released, which shouldn't be too long. The second is the techy one. The Micro Mart expert says that various tests (Tom's Hardware) have shown that, as the card only has one six-pin connector, the card attempts to pull too much power through the PCIe slot, and this can and does cause overheating and could cause mobo damage or worse. He more or less said that, as the card only has one six-pin connector, AMD should consider including an eight-pin on the card to cover the power draw. Is he making sense, or worrying too much? This, I believe, would be a nice card upgrade for me and would be financially viable if I got the R9 sold, but is it worth it?

Many thanks for your thoughts,


ANSWER: I can certainly address all of your questions here; I'll use bullet points if you don't mind.

- You're absolutely right that Skylake (or the upcoming Kaby Lake) isn't really worth the money if you've got a recent-ish Intel platform; even the Sandy Bridge you used to have would qualify, and the Haswell you currently have certainly does. The fact of the matter is that, for gaming, CPU performance really hasn't improved that much in the last 4-5 years, primarily because games themselves aren't very aggressively pursuing more than 2 cores, and are thus primarily limited by the single-thread/single-core performance of a given CPU (which hasn't improved a lot in recent years; the biggest improvements have come from multi-threading/multi-core advances, and for applications that heavily rely on those, like video editing, the gains have been much more substantial). So it's perfectly reasonable to just enjoy the working platform you have now without any fuss. Something else to keep in mind: anything newer than Broadwell (the generation after Haswell; the Core i5/i7 "5000" series) will be forced onto Windows 10 in spring 2017 (so Skylake will be forced onto Windows 10, for example) - depending on your feelings about Windows 10, this may be fantastic or dreadful news.

- The temperatures you're reporting on the CPU and GPU are absolutely normal, and your idle temperatures are fantastic. The R9 290 series is designed with an upper working limit of 95°C (and it will throttle the GPU down to prevent exceeding that limit), so running in the high 70s or even mid 80s is within design limits for that card. However, for an XFX DD card that's a tad warm, which would lead me to re-evaluate your case's airflow, as the DD boards are (in ideal conditions) capable of somewhat lower temperatures at load. The seller's advice to use AMD's Frame Rate Target Control (FRTC) is bang-on, and will restrict the workload placed on the overall system by capping the frame rate. Alternately you could use vsync, which will lock the game's frame rate to the monitor's redraw (60 Hz = 60 FPS) and achieve the same thing ("what's the difference?" - one is simply a frame-rate limiter, while the other actually synchronizes frame rate to field rate). nVidia's "adaptive vsync" is not quite the same feature: it switches vsync on and off in response to real-time performance. If the game can maintain the vsync frame rate it simply stays in vsync, but if performance declines it comes out of vsync rather than running at a fractional frame rate (e.g. instead of dropping to 30 FPS and double-pumping each frame to achieve 60 Hz, it would allow the game to run at 42 FPS, if it could do 42 FPS). nVidia suggests this mode as a power-saving feature: like vsync or FRTC, it lowers peak frame rate so you aren't "wasting" resources drawing frames that are ultimately discarded (more on this in a minute), which allows the GPU's adaptive power management ("GPU Boost" on nVidia cards; AMD's equivalent is "PowerTune", which your 290 features) to throttle the card down and thus reduce power draw (and therefore temperatures). The problem with adaptive vsync is that it can cause a significant amount of perceptible stutter as the frame rate jumps around (which is one of the things vsync is meant to solve), and as a result I'm not generally a fan of it. Instead, I'd rely on FRTC or vsync (with a preference towards vsync when possible).
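If it helps to make the frame-cap idea concrete, here's a minimal sketch in Python (purely illustrative - there's no real graphics API here, and the 5 ms "render time" is an invented placeholder) of what a limiter like FRTC conceptually does: measure how long each frame took to produce, then sleep off the rest of the frame budget so the hardware never works faster than the target:

    import time

    TARGET_FPS = 60
    FRAME_BUDGET = 1.0 / TARGET_FPS  # ~16.7 ms per frame at 60 FPS

    def render_frame():
        # Stand-in for the real rendering work a game would do.
        time.sleep(0.005)  # pretend this frame took 5 ms to draw

    for _ in range(120):  # simulate two seconds of capped gameplay
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            # The frame finished early; idle out the remainder instead of
            # immediately starting the next frame. This idle time is why a
            # capped GPU runs cooler than one racing ahead to 200-400 FPS.
            time.sleep(FRAME_BUDGET - elapsed)

Vsync throttles in the same overall way, but instead of a timer it waits on the monitor's actual redraw signal, which is why it also prevents tearing.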

- What about those "discarded frames" and the "limits of the human eye"? (These two actually end up being related.) The monitor (any monitor) has a fixed field rate, or refresh rate, and quite simply cannot draw more frames per second than its field rate supports. So in the case of your monitor, which supports 60 Hz, anything above 60 FPS cannot be accurately drawn (even if the computer can provide it). Now, that doesn't mean rendering at 120 FPS means the computer just "ignores" everything after frame #60 - instead you'll get vertical tearing, where parts of different frames are drawn incorrectly (and remember this is all flying by at 60 Hz, so each redraw cycle is only visible for around 16 ms), as the scan-out takes whatever is available in the framebuffer at that moment. Vsync solves this problem by locking the field rate and frame rate together - every time the screen is redrawn it takes a new frame (ideally; the situation above with lower-than-field-rate frame rates is the alternative), which prevents tearing and locks you into a stable frame rate as well. So what about our eyes? Conventional explanations state that the human eye (probably more accurately the human visual system, which includes the brain) will be tricked into seeing motion at around 20-25 frames per second. This makes complete sense if you look at film content, which is almost entirely shot at 24 frames per second. Many things written about videogames thus insist that 30 frames per second is a good "baseline" at which point everything looks like it's actually moving, as opposed to looking like a flipbook. However, there are some issues with that comparison, specifically:

a) Movie/film content is produced on film (or digital systems that approximate film; they're close enough to be considered equivalent for our purposes), which includes a form of implicit motion blur: the film is exposed with a shutter speed that correlates to its frame rate. So if you're filming at 24 fps, you're probably exposing each frame for 1/24s, and the camera is actually capturing everything it "sees" for that 1/24s (which has some inherent blur). If you've ever played around with shutter speed on an SLR, it's the same concept - shooting at 1/4000s yields a much "sharper" or "faster" image, as you're getting a smaller time slice, while shooting at 1/30s is usually fairly blurry, especially if you aren't using a tripod or some other support. So when the movie is played back, you get 24 frames per second encompassing 24 1/24s exposures, which produces an essentially continuous stream. Videogames don't work that way - there's no implicit motion blur, because the computer isn't rendering in a way that comports with "shutter speed." Some newer games do introduce "motion blur" as a shader effect, but it's still not directly equivalent, and ultimately what this means is that such low frame rates (e.g. 20 fps, 24 fps) usually aren't as fluid-looking in a videogame as they are in a movie.
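If you'd like to see those exposure numbers side by side, here's a quick Python calculation using the shutter speeds mentioned above (just an arithmetic illustration, nothing camera- or game-specific):

    # Frame interval vs. example shutter speeds, in milliseconds.
    fps = 24
    frame_interval_ms = 1000 / fps  # ~41.7 ms between frames at 24 fps
    for shutter in (24, 48, 4000):
        exposure_ms = 1000 / shutter
        print(f"1/{shutter}s exposure = {exposure_ms:6.2f} ms "
              f"({exposure_ms / frame_interval_ms:.0%} of the frame interval)")
    # A 1/24s exposure fills the whole frame interval (maximum implicit
    # blur), while 1/4000s captures only a tiny, "frozen" slice of it.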

b) 30 FPS is (along with 60 FPS) a common frame-rate target for many arcade and console games, and in most cases yields an enjoyable experience. However, this doesn't mean that higher frame rates (when the monitor supports the higher redraw speed) aren't "noticeable" on some level. The difference, however, is subtle. It's also impossible to say that the difference is entirely down to the frame rate, as the higher field rate (by definition) means faster redraw and lower latency on the monitor's end: if your monitor can do, say, 120 Hz, each redraw is around 8 ms (see the quick arithmetic below). It's hard to say whether that's "better" to the eye because it's simply a lower-latency redraw, or because it's a higher frame rate, or some combination of both.
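That redraw arithmetic is easy to check yourself; this little Python snippet just converts refresh rates into per-redraw time slices:

    # Time available per redraw cycle at common refresh rates.
    for hz in (24, 30, 60, 120, 144):
        print(f"{hz:>3} Hz -> {1000.0 / hz:5.1f} ms per redraw")
    # 60 Hz works out to ~16.7 ms, and 120 Hz to ~8.3 ms - the
    # "around 8 ms" figure mentioned above.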

You can get more into the frame/field-rate discussion with some external reading/viewing if you'd like: there's a good article built around "T-Buffer" technology ("T-Buffer" itself is ancient and outdated, but the article lays a good foundation for the film vs. videogames discussion), and a Linus Tech Tips video that does a sort of "Mythbusters" treatment of higher-than-60FPS gaming.

Ultimately, however, you will find an ocean of personal opinions and strongly held beliefs (often with little evidence beyond "because I feel this way," "because I said so," or "because I'm better than you") in the frame-rate/field-rate discussion, so be wary of sources that make absolute claims. In general it is safe to assume that 60 FPS will be perfectly serviceable, and that in many cases so will 30 FPS, but in some cases higher-than-60FPS may yield a subtle, but noticeable, difference. (In the follow-on video from Linus Tech Tips, where Linus (the host) tests the higher-than-60FPS system, he discriminates between the two more consistently; they plausibly explain this by Linus having more experience with gaming and higher-than-60FPS systems - i.e. that the ability is trainable.)

At the end of the day, this is largely academic: 60 FPS is perfectly functional, and for your specific monitor 60 FPS is the upper limit.

- The various settings in the Radeon driver are, unfortunately, not the best documented. In the majority of cases you won't need to adjust anything in there, as the defaults should be "Use application setting" or "Use application default," which means the driver will abide by whatever settings you've picked in-game (and many newer games are smart enough to figure out what they need based on your computer). The FRTC and/or vsync controls are probably the most frequently accessed options, as some games don't offer vsync controls in-game (although in some games this is because they enforce vsync by default; Fallout 4, for example, always runs with vsync enabled, which is why there's no control for it in-game), and FRTC is unique to AMD.

- Virtual Super Resolution is also a plausible explanation for everything running a bit warmer. VSR asks the system to render at a higher resolution than your monitor supports and then scales the video output down to fit. The quality benefits are questionable (the primary use of this feature is as a crude-but-effective brute-force anti-aliasing method for games that don't support conventional anti-aliasing, e.g. because they use deferred shading/lighting techniques), but it has a significant performance impact, because you're rendering at a higher resolution. (The scaling itself, at least on AMD cards, has a negligible impact, as it uses the card's built-in video processor for the downscaling; nVidia does the downscaling on the GPU's shaders, which may have a somewhat bigger impact, but either way it shouldn't be significantly different from "natively" working at that higher resolution, e.g. plugged into a monitor that accepts that resolution natively.) I would try turning this feature off and observing temperatures - they should go down somewhat.
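To put a rough number on the performance cost, compare pixel counts. Assuming VSR renders at 2560x1440 and scales down to your 1080p panel (the actual VSR resolutions offered vary by card and driver, so this step is just an example):

    native = 1920 * 1080  # what the monitor can physically display
    vsr = 2560 * 1440     # an example resolution VSR might render at

    print(f"VSR renders {vsr / native - 1:.0%} more pixels per frame")
    # ~78% more pixels per frame - which is why GPU load and
    # temperatures climb with VSR enabled.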

- I wouldn't bother with replacing the 290 itself. The 780 Ti would be a minor upgrade, if an upgrade at all, as it isn't significantly faster ("much better all around" certainly comports with the popular opinion that "nVidia is better" - I'd chalk those kinds of statements up to the same genre of fandom as "Fords are better than Chevys" or "Star Trek is better than Star Wars" - someone's personal opinion), and it won't add support for anything the 290 can't already do, excepting PhysX (which is used by a very small number of games). You can see some benchmarks here: (at 1080p your frame rates should be higher overall, owing to the lower resolution)

The GTX 970 similarly won't be worth the upgrade, as its performance is also "in the pack" with the GTX 780 and R9 290/390 cards. Whether the GTX 1070 is itself even a significant upgrade is a matter of debate, but you can give it a look if you're curious:,18.html

Yes, the frame rates are "higher," but we're largely talking in that "higher-than-60FPS" category, as opposed to unplayable (e.g. <20 FPS) versus playable (30-60 FPS or above). As a side note, that Guru3D comparison also includes the R9 290 and GTX 970, so you can compare all four cards if you spend the time to wade through the two dozen or so different boards being benchmarked.

Some final notes on these four:
a) Only the GTX 1070 and the R9 290 actually support DirectX 12 properly. The GTX 780 Ti is only capable of DirectX 11, and while nVidia claimed the GTX 900 series would fully support DirectX 12, they were later caught with their pants down as game developers ran into performance and compatibility issues with the GTX 970 and 980 - the entire Maxwell architecture lacks support for DirectX 12's asynchronous compute feature. That said, DirectX 12 itself is only supported on Windows 10, and then only by a small number of games, so the significance of this is probably minimal as of today.

b) The GTX 970 itself was involved in another scandal, involving its memory layout. The card should properly be listed and described as a "3.5GB+512MB" board, not a 4GB board, as the last (upper) 512MB of memory is only accessible over a very slow, limited-bandwidth bus. In practice there isn't much need for cards in the 4GB range today, especially with a 1080p or 1440p monitor, and nVidia's defense of the design choice has ultimately proven valid (and the GTX 970 is not the first, and likely not the last, nVidia card to use this interleaved memory design - the GTX 660, for example, also works this way). But if you're going to do any significant GPU compute or 4K gaming, the 970 may cause you a bit more trouble.
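For a sense of scale on the 970's split memory, here's the arithmetic with the commonly reported bandwidth figures (quoted approximately from coverage of the disclosure, so treat them as ballpark numbers):

    fast_gb, slow_gb = 3.5, 0.5   # the two memory segments
    fast_bw, slow_bw = 196, 28    # GB/s, approximate published figures

    print(f"Fast segment: {fast_gb} GB at ~{fast_bw} GB/s")
    print(f"Slow segment: {slow_gb} GB at ~{slow_bw} GB/s, i.e. "
          f"{slow_bw / fast_bw:.0%} of the fast segment's bandwidth")
    # A game that stays under 3.5 GB of VRAM never touches the slow
    # segment, which is why 1080p/1440p gaming is generally unaffected.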

With points a) and b) in mind, I would seriously consider sticking to the Radeon, or to the newer GTX 1000 series GeForce cards, simply to sidestep any potential compatibility pitfalls the GTX 900 series may encounter in the future. The GTX 700 series are fine cards, but they don't offer any performance benefit over the Radeon, so there's no point (from my perspective) in going through the hassle of swapping, unless you absolutely must have PhysX.

- I wouldn't put a lot of stock in the popular view that nVidia's drivers are "consistently better." In 2016 it's honestly hard to fault any of the big three graphics makers (nVidia, Intel, and AMD) on drivers or hardware compatibility - we pretty much live in the era of "it just works" and have come a long way from the days of things being explicitly tied to one vendor or another. The only "big differences" I would highlight are AMD's support for their Mantle API (which is not required by any application, but is supported in a few newer games, where it usually yields somewhat better performance), and nVidia's GameWorks program, which you're probably best off reading about on your own; generally speaking, GameWorks games run worse on AMD and Intel hardware, and this is widely believed to be an artificially built-in incompatibility (i.e. anti-competitive behavior). Newer GameWorks games (e.g. Fallout 4), however, tend to run fine on hardware from both camps, so it may simply have been a matter of teething issues as nVidia rolled GameWorks out in earlier titles.

- The Radeon RX 480 is an interesting card, as it primarily promises significantly lower power consumption relative to its performance. Given your R9 290, I wouldn't bother with an upgrade, but the 480 is worth discussing a bit. The "power consumption issue" with the 480 is currently a fairly contentious matter, and AMD's official line is that the behavior exhibited in the Tom's Hardware review will be (or has been) adjusted in a firmware fix. It's worth pointing out that the Tom's Hardware review itself says nothing about the card "melting connectors" or "causing damage," and that this is largely opinionated conjecture on the part of, for lack of a better word, fanboys. The 480, like many modern cards with advanced power management, can momentarily draw fairly large amounts of power (above and beyond its connector specs), but this generally only occurs for periods of a few milliseconds (or less), which is handled by capacitors in the system's power supply. The GTX 980 exhibited similar behavior (its momentary inrush peaks reach up to 300W, well beyond its connector specs and much higher than the RX 480's ~180W peaks) and was not met with such a harsh response. In short, until there's significant and real evidence of boards/systems being blown up, what you're ultimately seeing is groupthink in action - it's a problem because "they" say it's a problem, and if "they" say it long enough and loud enough, it eventually becomes true. But, as I said, I wouldn't bother with the hassle of an upgrade, as you already have a good-performing card. Guru3D has a benchmarked review of the 480, which includes a lot of other cards as well, that you can peruse here:,15.html
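The power-budget arithmetic behind the fuss is simple. By specification, a PCIe x16 slot supplies up to 75 W and a 6-pin connector another 75 W; the ~180 W figure below is the momentary peak reported in reviews, not a sustained draw:

    slot_w = 75      # PCIe x16 slot limit, per spec
    six_pin_w = 75   # one 6-pin PCIe power connector, per spec
    budget = slot_w + six_pin_w   # 150 W "official" budget

    peak = 180       # reported momentary peak draw for the RX 480
    print(f"Budget {budget} W vs. peak {peak} W: "
          f"{peak / budget - 1:.0%} over, for milliseconds at a time")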

Also of note, that review includes a great (indirect) comparison of VRAM sizes, as the 480 is an 8GB card, the 290 and 290X are 4GB boards, and the 780 Ti is a 3GB board (and there's also a GTX 970 in there, with its interleaved memory design). You can see that performance across the pack is very similar, and while the GTX 1070 is "out ahead," it's not out ahead in any earth-shattering way (i.e. it's all "higher-than-60FPS" values).

- Reference vs. non-reference cards: generally, in the last few years, reference designs have unfortunately been defined by very loud blowers that don't do a good job of cooling the board. Aftermarket/non-reference solutions (even if it's just a non-reference cooler on a reference PCB) from third parties like Asus, MSI, XFX, Gigabyte, EVGA, and Sapphire have consistently done a better job of cooling the GPUs, and one wonders why both nVidia and AMD stick to the blower designs. In general I would suggest non-reference coolers as a result. Here's a comparison of the XFX DD cooler (which your 290 has) to the reference 290 cooler (which is possibly one of the loudest GPU coolers in history):

In summary: overall, I think it's fair to say you've assembled a very competent gaming system that should work well with many games of yesterday and today, as well as games of tomorrow, and that any significant upgrades are largely unnecessary at this time. I would suggest re-evaluating your case's airflow (and, if needed, perhaps replacing the case), as this may improve thermal performance. (Relocating the system may also be a consideration - if it currently sits in a little desk cubby it will probably do better out in a more open environment, or moved away from a heat source such as a radiator, or towards an A/C vent if you have A/C.) That said, the temperatures you've reported don't appear to be outside the hardware's design limits, but I certainly understand the concern that comes with seeing components reporting temperatures nearly high enough to boil water.

If you have further questions, feel free to ask as a follow-up or start a new question thread.


---------- FOLLOW-UP ----------

QUESTION: Many thanks for your thorough and comprehensive reply. Some of it - the frame rate stuff - was a bit techy for me, but I got the gist of it, and you supplied plenty of material to check out, which should keep me busy for a while.
Given that you'll only learn by asking questions, I've a few more for you whenever you have the time - no rush.

I get from your reply that my system is more than adequate for my needs. I only do basic Internet browsing - mainly about PCs and what's new or upcoming, looking at reviews or benchmarks, reading all the PC magazines I can lay my hands on (all handy, as I'm the go-to guy when one of my pals has a problem or wants an upgrade), and watching Done Deal, Gumtree and mainly Adverts, which is usually where I buy my parts when needed. Looking at Adverts, it's funny that most of the 1151 mobos and DDR4 RAM are now nearly cheaper than the older 1155/1150 mobos and DDR3 RAM. I find that odd. Anything I've built has usually been built with parts purchased from Adverts when I've had the cash to spare and seen a reasonable deal. I have been stung a couple of times (mainly PSU and GPU), but I'm learning who I can trust on the site now. Other than that, I have recently tried my hand at some games. That's about the extent of my PC use.

I don't have my own site or do videos or make music or anything like that, so I can't envisage any need for hyper-threading, and I also read somewhere that games don't need it or even use four cores (at present), so I think a decent quad-core CPU is the ideal thing. My research over time suggested the best curve for my needs was the i5 2500k, then the i5 4690k, and, if I ever upgrade, the i5 6600k. Given that, as you've said, Skylake users will be forced onto Windows 10, this possible upgrade is unlikely, as I found Windows 10's invasiveness and its forced Windows Updates appalling.

Anyway, since I sent you my original query I have in fact shifted all my components from one case to another in search of better cooling. They were all housed in a Corsair Carbide 200R (which is a nice case), but the two front intakes are somewhat hindered by the fact that the front panel is not detachable. Also, of the two roof fan exhausts, one is unusable due to the size of the H55 CPU cooler installed at the back. I also noticed - and this is just a thing that bugs me - that as the case starts to heat up when I play a game, there's a good bit of creaking and groaning from the case itself; I believe it's just the side panels getting a bit warmer.
I've moved all the components into a Zalman Z11 Pro I had lying spare, but have not noticed any benefit - if anything, things might be a degree or two warmer: CPU 30, mobo 28, GPU 38, drives 27, 27, 28, 28 - at idle. I've added another 2TB drive for extra storage, as this was meant to be the final move of my gear. Unfortunately, with this case there is only one front intake fan - the 'chipmunk cheek' side fans were a little noisy, so I've left them detached. Similar situation with the roof exhausts - the H55 gets in the way of one of them.

Early last year, myself and a pal decided we were going to try our hand at overclocking, so we bought some parts cheapish on Adverts to give it a try and figure out how it worked. Although we never really got into it (we basically just ended up increasing the multiplier to 40 and never did much after that), the PC now belongs to me. It's been sitting idle for some time now, but these were the parts we bought:

Corsair Obsidian 350D.
Corsair CX500M modular PSU.
Asus B85M-G.
Pentium G3258.
Cooler Master Hyper 212 Evo.
16GB Corsair Value Select RAM, DDR3 @ 699MHz, 11-10-10-27.
64GB SanDisk SSD.
2x 500GB HDDs.
2GB Sapphire Radeon R9 270X.
5 case fans: 2 front intake, 2 roof exhaust and 1 rear exhaust (one front and one roof are blue LED).

We hit a couple of snags as we went along. First, we discovered, after having bought the case, that none of our mobos (all ATX) would fit in it. As we had read that the G3258 was a great little chip for overclocking (and we could get one for 40, so it didn't really matter if we blew it up), we went for that. We wanted to stick with socket 1150, as I hoped that if we figured out the overclocking we might upgrade to another i5 4690k. The B85M-G looked ideal (especially at another 40), so we got that. We had the RAM, SSD, HDDs and Hyper Evo spare. The next snag we found was that the mobo needed a special BIOS update in order to overclock the G3258. We managed to do that without killing the mobo. We installed (via DVD-RW) Win 7 32-bit on the SSD for general stuff and Win 7 64-bit on one HDD for games, and left the other HDD for storage. The next 'problem' we encountered was that once the Hyper Evo was in, there was only one fan header left on the board. If we'd used the H55 there'd be no fan headers at all. It would appear that a lot of these cheaper small boards provide few if any fan headers - usually just the one. Anyway, with cooling in mind, we spotted an Aerocool Touch 2100 fan controller on Adverts (another 40), capable of running five fans, so we got that. I have to say, after switching it back on today, it certainly looks the part with the matching blue LED case fans. The only problem with it is that it takes up two bays, but as the OSes were already loaded, the DVD-RW was no longer required.

I'd all but forgotten about this PC until I wrote to you earlier, and I switched it back on today. I must say that despite five case fans, the fan on the Hyper Evo, the two fans on the R9 270X and another in the PSU, I was only able to tell that the machine was actually on because the blue LEDs and fan controller were lit up. This thing is whisper quiet and airflow is great. With the multiplier at 40, Speccy gives the idle temps as CPU 30, mobo 28, GPU 30 and drives 28, 24 and 24.

I really like the look of this case, how quiet it is, and the fact that the front panel can be detached to let more air in, so I was hoping as of today to use it as my main machine. As the Z97-A won't fit in the case, I thought I might throw my i5 4690k into it, and maybe the H55, but there's something niggling in the back of my mind about this board only being unlockable for the G3258. I'm not sure what performance issues the i5 might have; otherwise I think I might have put it in sooner. Am I imagining this, or is it the case? There also appears to be enough room in this case to install the H55 and keep both roof exhaust fans too.

I saw this on Adverts, and although it's been on for two weeks and had 152 views, nobody's made an offer. I reckon I might have a chance with a bid of maybe 70 plus 10 for postage, but I'll have to wait a couple of weeks for my pension to arrive and hope the mobo is still for sale. It looks like a good board for the i5 and will definitely fit the case. Let me know what you think.

Another thing you said in your earlier reply was:
'The fact of the matter is that, for gaming, CPU performance really hasn't improved that much in the last 4-5 years, primarily because games themselves aren't very aggressively pursuing more than 2 cores, and are thus primarily limited by the single-thread/single-core performance of a given CPU, which hasn't improved a lot in recent years.'

I've played Sniper Elite III today on the G3258 machine and encountered no problems, despite it being a dual-core processor, yet a lot of what I've read in forums says the G3258 is terrible for games and not to buy one. Your comment seems to contradict what some of these forums said. Can you enlighten me a bit on that, please? Is the G3258 a good or bad CPU? I'll be holding onto it either way.

Lastly, what's your opinion on Win 10/DirectX 12? Does it actually deliver any great benefit to games, or even just general PC use?

Again, many thanks for trawling your way through all of this.
I look forward to your reply,

ANSWER: Case airflow and cooling design can get fairly complicated if you let it - generally speaking, I find it's easiest to just follow Intel's original ATX design guidelines, with a lower-mounted front intake and a higher-mounted rear exhaust (this can be accomplished with one or more fans in each position, mind you). Many cases these days have, unfortunately, awful front intake design, usually placing a solid metal wall right in front of the intake fans. That said, none of the temperatures you're reporting are really cause for alarm, especially if the system isn't overheating when under load (e.g. gaming).

On the matter of CPUs, you're bang-on with the upper-mid-range quad cores as the "sweet spot" for gaming. The G3258 is a fine CPU as well, but there are a few games here and there that will actually benefit from a quad-core (e.g. Supreme Commander), and the higher-end i5 and i7 CPUs generally have more cache, which can also help with performance. But there's nothing wrong with the G3258 as a lower-priced, lower-power option. Regarding forum opinion relative to my comment: it's tough to say exactly what a given forum poster is thinking of or means, but quality dual-core processors are relatively rare these days (as both Intel's and AMD's midrange-and-above processors are quad-cores), so you ultimately end up comparing value-level hardware with higher-end pieces. Usually there is some performance disparity there, but there's also a degree of "mine's bigger" when dealing with the higher-end hardware. Tom's Hardware has reviewed the G3258 (with and without an overclock) relative to the i5-4690k and some other models; you can take a look here:,3849-
For most games it's perfectly serviceable, especially considering the price difference up to the i5-4690k.

As far as running the 4690k in other boards: if they're compatible, there's nothing explicitly wrong with that idea; however, you are correct in observing that different boards will have different functionality and BIOS options. If you aren't overclocking, though, there probably won't be much practical difference between any two good-quality boards.

With respect to Windows 10 and DirectX 12, the easiest answer is that they're both ultimately too new to make any reasonable judgment about. Very little actually supports DirectX 12 (so there's no "benefit" or "loss" either way there), and Windows 10 is barely a year old (and still experiencing typical "new Windows release" growing pains as a result) - the privacy concerns and issues with Windows Update are an additional layer that previous versions of Windows haven't had to address. Overall my advice would be to wait and see - Windows 7 and Windows 8.1 still have years left in their lifecycles, and there's no urgency to upgrade apart from the "forced upgrades" associated with brand-new Intel and AMD platforms (which, by itself, is not an uncommon facet of "new Windows").


