Multi-Core Processors

I've noticed there is an insane amount of confusion about what a CORE actually is. AMD says one thing and Intel says another, and in the end no one can actually agree. Hence why Intel's six-core i7s blow away AMD's eight-core processors. The reason is that while AMD's chips are, sort of, genuine octo-core processors, Intel's six-core parts behave more like 12 cores... just not literally, and depending on who you ask. Basically half of the "cores" are simulated: Hyper-Threading presents two logical cores per physical core. Like the Matrix. However, it really doesn't matter. It's exactly like horsepower, or a foot-candle. It's not about how many horses or candles you literally have, it's how much it equates to.

An AMD FX 8350 at stock speeds is a little more than twice as fast as an Intel Core 2 Quad. Basically, the FX 8350 is the equivalent of what you'd expect from an Intel Core 2 Octo... if it existed. The Intel i7 hex-cores, however, are far more powerful than their six cores would lead you to believe. That's because, though they have six actual cores, each core runs two threads. So the i7, despite its physical makeup, is the equivalent of a previous generation's 12-core processor... and this is our problem in a nutshell. There is no standardized unit for cores, threads, or anything else to define the power of a CPU. Intel counts actual physical cores, whereas AMD counts everything...
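If you want to see the physical-vs-logical split on your own machine, here's a quick sketch (assuming you have Python and the third-party psutil package installed):

```python
# os.cpu_count() reports logical processors (hardware threads);
# psutil can additionally report physical cores.
import os
import psutil  # pip install psutil

print("logical processors :", os.cpu_count())
print("physical cores     :", psutil.cpu_count(logical=False))
# On a Hyper-Threaded six-core i7 this prints 12 and 6.
```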

In the end, I believe it's best to ditch the idea of physical cores. The high-end Intel i7 is the equivalent of a 12-core processor, whereas AMD's high end is very much what you would expect from an octo-core circa 2005. The concept of equivalent terminology as technology advances is used in a wide variety of products. A pickup tire may say 10-ply to show its load range, but the tire does not literally have 10 layers of material. It's merely stating it's the equivalent of an older tire which did.

CPUs, among other computer components, are in real need of this type of rating. When a 3.4 GHz quad-core is equal to or better than a 4.0 GHz octo-core, you know things are getting screwy.
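To put a toy number on the "screwy" part: clock times cores looks decisive on a spec sheet, but per-clock work (IPC) can flip the result. A minimal sketch, with the IPC figures entirely made up for illustration, not benchmarks of any real chip:

```python
# Naive "spec sheet" math (clock x cores) vs. a throughput estimate
# that accounts for how much work each core does per clock (IPC).
def naive_score(ghz: float, cores: int) -> float:
    return ghz * cores

def throughput(ghz: float, cores: int, ipc: float) -> float:
    return ghz * cores * ipc

quad = dict(ghz=3.4, cores=4, ipc=2.2)   # hypothetical newer quad-core
octo = dict(ghz=4.0, cores=8, ipc=0.9)   # hypothetical older octo-core

print(naive_score(3.4, 4), "vs", naive_score(4.0, 8))          # 13.6 vs 32.0 -- octo "wins" on paper
print(round(throughput(**quad), 1), "vs", throughput(**octo))  # 29.9 vs 28.8 -- quad pulls ahead
```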

In the end, the best gauge for knowing how much power you are getting is price. You get what you pay for. There are some deals here and there, but there is no free lunch. There are no i7 hex-cores for $200.
 
Yes, it is a confusing mess.
I too found, after some reading a few years back, that having too many cores working on things like a PC game can be counterproductive. In a few situations, IF THE SOFTWARE is specifically crafted for vast multicore usage, having all them other cores can be more better. Certain very specific graphics-manipulation and 3D apps and their ilk can be faster with bunches of spare cores.
For example, this i7 I have really offers little over the i5-series CPUs I examined at the time, game-wise. The reason: the extra core-ish-ness really does not help. I went with the i7 'cause I liked the smell of burning money.

It is why, whenever I build a new system, I end up burning candle time at sites like Anandtech.com to learn what does what on what benchmark.
Real-world benching, preferably. I like charts featuring real-world software, with performance figures to match, for a given CPU-vs-CPU comparison.
Trying to figure out the details you mention makes my protruding-brow forehead hurt bad. Mungo no like hurt.

Knock on wood, the system I built 2 years ago, the one that should be in the sig below, an i7-2600K with a GTX 580, should last me at least another 3 years of gaming fun, if not more, before I have to ponder the worry of all the above.

I remember the days I was an AMD system builder (for myself, anyway); there were a few years I was AMD-only. It started about the time AMD made cheaper 286-class chips. There was that 386-through-486 generation where it made more sense for me to go AMD. Somewhere after the generation of CPUs that followed the Intel 486, I came to the conclusion AMD and I had to part company. It's been Intel ever since for me when it comes to GAMING machines.

Now, the box in another room that is used mostly to play movies and cruise the net is a happy lil AMD system. Lower cost. Works fine.
 
Definitely. The future is certainly in the software. All this can be paralleled with the automotive industry. From 1900 to 1972 they focused on bigger, stronger, faster engines: 2-cylinder, 4-cylinder, 6-cylinder, V8, V10... Eventually, in the early '70s, the government said no more. The fumes were burning people's eyes, so though V8s still existed, they lost around 100 HP to cut toxic exhaust. Fast-forward to 2013, and my V6 Camaro has 325 hp and gets 30 miles per gallon. We never needed more gas or bigger engines. We needed to pull our heads out of our asses and make it work better.

We have basically become lazy. Case in point: Netflix. Ten years ago, the idea of streaming an entire movie over an average broadband network seemed impossible. Now we have OnLive streaming modern games over a 5 Mb/s DSL line. The future is software. We need programs to be coded better, smoother, more intelligently. It is possible, theoretically, to code a program in such a way that you could have a Pentium 4 run like an i7. Basically, we'd have to forget everything about how we code programs and start from scratch. Maybe instead of the data being processed bit by bit, it'd be like a zip cord: one bit of information triggering a chain reaction. Don't know, 'cause I am not edumacated, but I can imagine something like that. Super Code! The code processes itself. Like a pop-up tent!
 
It's kind of worse, from what I've gathered, as you noted in part. Sure, you can throw a dozen cores at a program (if you have, say, a handful of processors, it can happen), but unless the program is coded to be sufficiently parallel to take those into account, what you have is a lot of processor for all that other stuff in the background that isn't the program. It's not so simple as to just chop up the program mid-run and say "Well, half of it can run here, half here, now it's twice as fast!" because part 1 might be waiting on results from part 2, which is still processing what part 1 just finished. In serial it's simple, since it's a straight line. In parallel... not so much; the pieces have to be fairly independent for you to get any speedup.
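There's a standard back-of-the-envelope formula for this (Amdahl's law, not something from the posts above): the serial fraction of a program caps the speedup no matter how many cores you add. A quick sketch:

```python
# Amdahl's law: overall speedup is limited by the part of the
# program that must stay serial, however many cores you throw at it.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a program that is 90% parallelizable gets nowhere near
# a 12x speedup on a dozen cores:
for cores in (2, 4, 8, 12):
    print(cores, round(amdahl_speedup(0.90, cores), 2))
# 2 1.82
# 4 3.08
# 8 4.71
# 12 5.71
```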
And then, of course, I/O. I mean, the video card does the grunt work of rendering a lot of stuff, and RAM is good for however much you can force into it to be ready, but as soon as you need to get stuff to or from disk (and you will), you're suddenly waiting for this slooooooow thing to run. Solid-state drives help, but they're expensive and hardly standard, so you're stuck waiting for the computer to load whatever file you need. You can have a thousand things running in parallel, all screaming across a massive processing cluster, but as soon as they need to get something off disk or do a write-back... they all smash up to do that. Unless you can pass data about, so now you have a task for one core to do the writing while the rest do whatever. Just don't try to give it another write while it's doing the last one...
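That "one core doing the writing" idea is basically a single-writer queue. A minimal sketch of it (the names and file path here are illustrative, not from any real program):

```python
# Workers hand finished results to a queue; one dedicated thread
# drains the queue to disk, so only it ever blocks on the slow device.
import threading
import queue

write_queue: "queue.Queue" = queue.Queue()

def writer(path: str) -> None:
    with open(path, "a") as f:
        while True:
            item = write_queue.get()
            if item is None:          # sentinel: time to shut down
                break
            f.write(item + "\n")      # only this thread touches the disk

def worker(n: int) -> None:
    result = f"result-{n}"            # stand-in for real computation
    write_queue.put(result)           # hand off instead of writing directly

t = threading.Thread(target=writer, args=("results.txt",))
t.start()
workers = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for w in workers: w.start()
for w in workers: w.join()
write_queue.put(None)                 # tell the writer to finish up
t.join()
```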

Hardware developers sell you based on the specs, even if they're not applicable to what you're doing. Sadly, coding for those is a different story (thus the PS3, while supposedly a great machine for its time, is said to be a nightmare to design for because of just how it arranges all those little memory and processor spaces). So you buy what you can work with, and don't be too sad if you don't see as much gain as the chip sellers claim you will.

So, in a nutshell: computers are better, but getting everything out of that hardware gets harder and harder, and then of course code complexity skyrockets, as does testing complexity, and the simple chance of replicating a bug can drop... Programmers are good at what they do, but it's not like the days where one guy working in assembly with a handful of registers could make a game.


(Take all this with a bit of salt. I've done a little coding, but I'm not a professional, and I work from what I've heard and gathered over the years.)
 
I agree, but it's just like bump maps/normal maps. During the PS2 era, game developers just threw complex polygons at everything, hoping hardware would move forward so they could use even more polygons. If we used the same logic today, games would have 200 million polygons for a character model. Instead we have tricks like normal maps and so on that pretend to be polygons... kinda. It's about reinventing the wheel. Remember, Doom was a 2D game that looked like a 3D FPS. Imagine if we revisited that idea using the technology we have now. We could fake the hell outta 3D. You could have a 360-degree set of textures for an enemy, and as the player moves around them, the texture basically pans around, showing you the different sides... instead of always facing you. Anyway, it's the idea that sometimes it takes better ideas, not bigger engines.
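For what it's worth, the core of that trick is just picking which pre-rendered view to show from the angle between the enemy's facing and the player. A toy sketch (Doom itself used 8 rotations; the 16 here, and all the names, are just illustration):

```python
# Pick which of N pre-rendered sprite views to draw, based on the
# angle from the enemy to the player relative to the enemy's facing.
import math

NUM_VIEWS = 16  # pre-rendered sprite angles, evenly spaced

def pick_sprite(enemy_x, enemy_y, enemy_facing, player_x, player_y):
    # Angle from the enemy to the player, in world space.
    to_player = math.atan2(player_y - enemy_y, player_x - enemy_x)
    # Make it relative to the direction the enemy is facing.
    relative = (to_player - enemy_facing) % (2 * math.pi)
    # Snap to the nearest pre-rendered view.
    step = 2 * math.pi / NUM_VIEWS
    return int((relative + step / 2) // step) % NUM_VIEWS

# As the player circles the enemy, the view index "pans" around:
for deg in (0, 45, 90, 180, 270):
    a = math.radians(deg)
    print(deg, pick_sprite(0, 0, 0.0, math.cos(a), math.sin(a)))
```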
 
3D textures? That sounds interesting, except for the fact that you have to store all that data somehow...

But then again, this sounds familiar... I think I saw something once about some tech demo using hackery to do a lot of what you described. It's far too late to recall any of it though.

Still, the problem often is that it's not so much getting better hardware, but different design techniques. Maps (bump, normal, etc.) are a great example, adding levels of detail on top of what's given, rather than adding a billion polys to the model. Once you get past the weirdness of multi-threading across the cores, it's far more powerful (and heck, I'd say part of that is already there in the whole GPU handling its own rendering)...
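The normal-map trick in a nutshell: lighting is computed per pixel from a stored normal, so a flat polygon shades as if it had bumps, with zero extra geometry. A toy sketch (the vectors are made up for illustration):

```python
# Basic Lambertian diffuse shading: brightness = max(0, N . L).
# Swapping the geometric normal for a per-pixel "mapped" normal
# changes the shading without adding a single polygon.
def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    n, l = normalize(normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

light = (1.0, 1.0, 1.0)
flat_normal = (0.0, 0.0, 1.0)        # what the geometry actually has
mapped_normal = (0.3, -0.2, 0.93)    # per-pixel normal read from the map

print(lambert(flat_normal, light))    # flat shading
print(lambert(mapped_normal, light))  # same polygon, "bumpy" shading
```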

Personally, I can't wait for the next-gen consoles to come out. Not because I'm waiting for the consoles (I couldn't care less about them as gaming systems) but because we'll finally get games that aren't constrained to fit in teeny-tiny little spaces, and can start to use some of the stuff that computers have had for half a decade or more. Like, say, 64-bit coding. (I recently found out the reason Skyrim crashes so often for me is that my computer is too good... it turns out the game is so strictly 32-bit that as soon as memory use breaches about 3.2 GB it falls apart and crashes. Plus, being DX9 means it's mirroring textures in RAM. So everything you add brings the memory use up, and on a good system you want to crank it up high and add mods... but then it quickly exhausts the little memory space and falls apart. I'm sure Bethesda's programmers can code. It's probably that they're given half the time they need to even debug, and the poor engine is a shambling corpse that needed a rewrite a decade ago or more. :rolleyes:)
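The arithmetic behind that wall, for anyone curious (the ~3.2 GB figure is from the post above; the hard cap is just a power of two, and on 32-bit Windows the OS reserves part of the range, so a process typically gets 2 GB, or up to roughly 3 GB with the large-address flag):

```python
# The absolute ceiling for a 32-bit pointer: 2**32 addressable bytes.
total = 2 ** 32
print(total, "bytes =", total / 2 ** 30, "GiB")  # 4294967296 bytes = 4.0 GiB
```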
 