Dual Core G5, the Last Computer You Ever Buy?

by Chris Seibold Apr 20, 2005

There are rumors swirling of imminent releases of speed-bumped PowerMacs and iMacs. Some of the rumors go a bit further than positing a mere increase in processor speed and intimate that the new PowerMacs will feature dual-core G5 chips. The scenario seems likely: Intel has released dual-core chips and AMD is doing so today, so there is no reason to think IBM will be the only major chip maker left out of the dual-core party. In fact, since the PowerPC 970 MP is more than a rumor, it is more a question of when, rather than if, PowerMacs will ship with dual-core processors.

The dual-core aspect of the chip is particularly interesting. For those unfamiliar with dual-core chips, the theory can be summarized by noting that the chip, while a single physical unit, behaves as though it were two independent processors. In theory a PowerMac fitted with a single dual-core chip would perform comparably to a current dual-processor tower. One supposes that the possibility of a dual-processor PowerMac that performs like a quad-processor machine is extremely tantalizing to true power users and to folks who think they need the most bleeding-edge Mac even if they are operating in a state of delusion usually remedied with psychoactive pharmaceuticals. For the rest of us there is still cause to be excited: the technology in high-end machines eventually becomes cheap or is outpaced by even more processing power, and today’s cutting edge trickles down to more mundane machines. It isn’t inconceivable that a Mac Mini could be powered by a dual-core G5 somewhere down the road.
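
To make the “one chip acting like two processors” idea concrete, here is a minimal, modern sketch in Python (the count_down busy-loop, the amount of work, and the two-process split are invented purely for illustration; nothing here is specific to the G5). The same two CPU-bound jobs are timed back to back on one core and then in parallel across two worker processes; on a dual-core machine the second figure should come out close to half the first.

    # Minimal sketch: a CPU-bound job run twice in sequence, then split across
    # two worker processes. On a dual-core machine the parallel version should
    # finish in roughly half the wall-clock time; on a single core it won't.
    import time
    from multiprocessing import Pool

    def count_down(n):
        # Deliberately dumb busy work to keep one core fully occupied.
        while n > 0:
            n -= 1
        return n

    if __name__ == "__main__":
        work = 50_000_000

        start = time.time()
        count_down(work)
        count_down(work)
        print("one core, two jobs in sequence: %.2fs" % (time.time() - start))

        start = time.time()
        with Pool(processes=2) as pool:
            pool.map(count_down, [work, work])
        print("two cores, two jobs in parallel: %.2fs" % (time.time() - start))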

A quick inspection of the above scenario leaves one wondering just when chips will become so powerful that any further increase in performance will be of negligible utility to the average home computer user. Put more succinctly: when will computers stop becoming obsolete? That is an interesting question, but before examining it in any great detail another question must be answered first: why hasn’t the limit already been reached? The question may seem unimportant at first blush; after all, most people would still prefer that their systems performed better, so since the limit obviously hasn’t been reached, wondering why it hasn’t would seem to be of little interest. A closer inspection reveals the opposite to be true: the fact that the limit hasn’t been reached provides insight into the earlier question of when, if ever, it will be.

The simplest explanation is that as computers get faster, users become more demanding of the hardware. There is some truth to this argument. One of the likely uses of a computer by a home user, for example, is ripping a CD into iTunes. Today’s computers complete the task in moments, while the computers of even a decade ago would have taken forty times as long (if they could accomplish the task at all). At that multiple, moments become chunks of hours, and the result is that only the most dedicated will take the time to rip a CD onto their hard drive. The majority of people may, in fact, know about the feature and want to use it, but they are dissuaded by the expense in time. That leaves them unwilling to undertake the process and ensures that they remain hungry for faster computers.

That example does not tell the entire story, and even if it did, CD ripping rates have leveled off in the past few years, yet demand for more powerful computers has not appreciably decreased. The most common tasks computers are used for are probably web browsing, e-mail and word processing. Modern computers should be hideously overpowered for any of these tasks; after all, they were all achievable using chips five generations old. To illustrate: at one time I word-processed, checked and composed e-mail, and browsed the web on a computer armed with a 25 MHz 68040 processor. My current PowerMac is faster by a factor of 80 in clock speed alone, and that neglects all the other improvements. Yet my e-mail isn’t appreciably faster, word processing is markedly slower, and the ‘net experience, while somewhat faster, doesn’t display anything approaching an eighty-fold increase.

On the surface this seems an intractable paradox: how can these unchanged uses remain seemingly stagnant in performance when the computer is nearly two orders of magnitude faster? The answer is easily discernible. While the hardware has been getting steadily faster, the software has been becoming more feature-laden, more resource-hungry and less well written. In essence, the software has ballooned to fill the available processor power and storage. So where one might naively have expected applications simply to get faster and faster, the rate of return on processing power hasn’t been as great as even the most pessimistic user would have predicted.

At this point it is tempting to point the finger of blame squarely at the software makers. That would be a mistaken and oversimplified conclusion, for the simple reason that while most people would enthusiastically agree that bloated software is a bad thing, their opinions on which features constitute the bloat will likely be wildly divergent. To use a personal example, I often employ the linear regression tools in Excel and the Flesch-Kincaid Reading Level statistic in Word (this article clocks in at about the 12th-grade level, if you’re interested). Because I frequently use these tools I consider them an important part of Office’s functionality. On the other hand, I am not simple-minded enough to believe that the majority of users would find them to be anything other than bloatware. I am confident that surveying any two users would reveal many such examples. The lone exception might be the color fax covers found in the Project Gallery. Honestly, who but printer ink manufacturers thinks that color on a fax cover page is a must-have feature?
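
For the curious, the Flesch-Kincaid grade level that Word reports is just a formula over word, sentence and syllable counts. The sketch below is a rough Python approximation: the formula itself is the published one, but the vowel-group syllable counter is deliberately crude and the sample text is made up, so its output will only loosely agree with Word’s own statistics.

    # Rough sketch of the Flesch-Kincaid grade-level formula:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    # The syllable counter just counts vowel groups, which is a crude
    # approximation; Word's built-in statistics use a better one.
    import re

    def count_syllables(word):
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * (len(words) / sentences)
                + 11.8 * (syllables / len(words)) - 15.59)

    sample = ("I often employ the linear regression tools in Excel. "
              "Because I frequently use these tools, I consider them important.")
    print("Approximate grade level: %.1f" % flesch_kincaid_grade(sample))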

So bloatware and increased physical functionality (the aforementioned CD ripping, or transferring data) are obvious contributing factors to why computers have not yet hit the performance limit for the average user. Those factors alone, however, don’t explain the still-strong desire for faster machines. We also have to consider that the machines of today are asked to do things nearly inconceivable for a home computer ten years ago. iMovie provides a wonderful example. Getting a studio with capabilities equivalent to iMovie ten years ago would have required an investment of thousands, if not tens of thousands, of dollars; today that functionality will set you back $499 plus the cost of a memory chip. iMovie is just one example of a computer gaining new functionality that would be impossible on a more modestly powered machine; games are probably a bigger culprit. In short: while most of the time computers are doing, with aplomb, essentially the same tasks they were doing five years ago, it is also the case that we occasionally ask them to do tasks that simply require a massive amount of computing power.

Now that we’ve seen why computers haven’t hit the point where there is little incentive for the home user to seek out increased performance, we can consider whether that point will ever arrive. Here the picture is clearer. In the physical arena (ripping CDs, burning DVDs and transferring data) we are already seeing a leveling of sorts. FireWire 800 is supposedly preferred more for the increase in allowed cable length than for the increase in speed, and CD speeds will probably not exceed 40X because of the physical constraints of the media (here one would expect write speeds to keep increasing until all formats can be written at 40X). As for software continuing to hobble the processor to the point that people feel compelled to upgrade, one is a bit less optimistic. If you had asked the programmers who truncated the first two digits of the year to save a little space in memory whether something as simple as word processing could tax a processor tens of thousands of times more powerful than the ones they were dealing with, they probably would have laughed. Still, history teaches us that sloppy programming and questionable features are the byproducts of increasing processing power and storage space. Finally, we have to consider as-yet-unthought-of uses for the computer that require ever more horsepower. While it is hard to envision any widely accessible functionality that would require more power than the latest generation of PowerMacs possesses, the truth is that new programs, particularly games, will continue to demand every resource available and still want a little more. If there is a lesson in all of this it is this: the point of buying a computer and having to replace it because of failure instead of obsolescence is still several years, if not decades, in the future.
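
As an aside, the two-digit-year shortcut mentioned above is easy to show in miniature. The sketch below is purely illustrative (the years_between helper and the example dates are invented): storing only the last two digits of the year saves space, but any arithmetic that crosses the century break silently goes wrong.

    # Minimal sketch of the two-digit-year shortcut: the century is dropped to
    # save space, so any arithmetic that crosses the year 2000 goes wrong.
    def years_between(start_yy, end_yy):
        # Both arguments are two-digit years, as a 1970s record layout might store them.
        return end_yy - start_yy

    # An account opened in 1985 ("85") checked in 1999 ("99"): correct, 14 years.
    print(years_between(85, 99))   # 14

    # The same account checked in 2005 ("05"): the century is gone, so the
    # "age" comes out as -80 years instead of 20.
    print(years_between(85, 5))    # -80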

Comments

  • You want to know the next “Big” thing that’s going to bring your computer whimpering to its knees? h.264 is going to chew your computer up and spit it out the moment you try to encode HD-quality material. Remember the days when MPEG2 decoding was far from realtime? They’re baaaaaaaack.

    Relative to MPEG2, AVC/h.264 takes roughly 8x the power to encode and 4x the power to decode. You now need a whole lot more power to maintain the encoding/decoding performance you currently enjoy with MPEG2. Unless PowerMacs improve in speed by a factor of at least 4, there will be slowdowns.

    You’ve been warned

    hmurchison had this to say on Apr 21, 2005 Posts: 145
  • Ha ha ha,

    I remember reading articles very similar to this back in the days when the 50MHz barrier was first breached…

    Which just goes to show that the more power/speed/hard disk space you have, the more stuff you throw at the machine to eat it all up again.

    In theory all modern machines (even real low-enders) have so much power we should be frightened… But we just soak it all up. (Remember when 256 colours seemed amazingly flash?) Now we run huge monitors in billions of colours with live transparency effects, expect realtime video streaming from the web, think nothing of importing and editing a 20GB home DV file while listening to internet radio and having the machine constantly monitor our email accounts and a couple of RSS feeds too…

    The more power we have the more things we find to do with it. 15 years ago relatively few people had a PC at home. Now the majority do I would guess - and printers, scanners, digital cameras/video cams, web cams, broadband, home networks, wireless tech, iPods, etc. are not the preserve of the tech geek - they’re on the supermarket shelves down at Tesco and Wal*Mart!

    More power, must…. have…. more…. power….

    Ten years from now we will look back and smile saying “do you remember when 3GHz was fast and we all used to use those funny little ADSL boxes to connect to the internet?”

    “I had one of the first ever dual cores you know… it seemed so fast back then”

    Yes when you find your 50GHz Quad Core Cell Based box a bit long in the tooth you’ll be dreaming of upgrading to that 250GHz 512bit 16 core grid chip machine….

    See all back here some time then…

    Serenak had this to say on Apr 21, 2005 Posts: 26
  • New technology in the computer sciences has been stagnant for a couple of decades now.  Chipsets are much faster, but the new speed has been offset by poorly written compilers, software and disk I/O.  Seymour Cray was the last great innovator.  Cray systems had compilers that generated highly efficient executables.  I/O was performed using solid state technology to offset the slowness of hard drive access.  Applications were not bloated fat binaries and DLLs that slow the system to a crawl.  All of this was created, albeit at a high cost at the time, by the early 80s.

    All of this should have reached the desktop by now.  However, today’s “systems” have too many bottlenecks in their architecture.  The worst is the hard drive.  Why is there not a very high speed connection using solid state storage with very little latency, attached to the system board at a much higher speed than is available with any hard drive system?  Texas Memory Systems still has to connect using a slow interface but is capable of so much more in speed.  The chips would be cheap and cool, while a disk backup would run in the background.

    Clearly, little technology has trickled down to the desktop when these PCs are using 40-year-old data storage technology.  Compilers have been getting worse at optimization, leading to bloated software that runs slowly, and slower still with dynamic link libraries that duplicate modules compiled using different generations of compilers and compiler options.  Java and its ilk are little more than a perpetual beta program that few people, if any, can get to perform at a reasonable speed.

    As long as desktops or even existing PC servers continue with the status quo, systems will take up more space and accomplish less than they should.  Today’s handhelds have more power than mainframes of twenty-five years ago but are little more than toys by comparison.

    jreinhart1 had this to say on Apr 21, 2005 Posts: 1
  • Remember when the prefix “giga,” meaning “a billion,” was esoteric geek knowledge?  Today, you are only a moderate geek if you know the next SI prefix, “tera” for “a trillion”.  Terabyte storage systems are already out there, but I think we are a long way from seeing RAM in terabytes (although 64-bit systems make it at least possible) and terahertz clock speeds!

    macFanDave had this to say on Apr 22, 2005 Posts: 3