
Chaos Manor Special Reports

Essays on diverse topics

Saturday, June 16, 2001


SMP AND THE FUTURE OF COMPUTING

 

This was an internal dialog at Chaos Manor that I believe is instructive; it's a bit hard to give it a title. The discussion involves multiple processors -- you may recall that one of Pournelle's Laws, going back to the 70's, is "One task, at least one CPU," since I believe hardware always wins over software in gaining processing speed -- and it also goes to the future of computing.

 

It all began when Eric Pobirs sent this message to an internal Chaos Manor bulletin board:

 

http://www6.tomshardware.com/releases/99q2/990609/computex-99-05.html

You could easily build a dual CPU 466 MHz NT system for under $1000.

 

Alex Pournelle answered,

The question is: How large is the market for speed, or for perceived speed? Will there be a large enough group of people who really want a (not really) 2X speed improvement, and who are willing to use NT?

Of course, if Intel stops obeying Moore's Observation, we'll see multi CPU machines become popular quickly... but I don't see that happening in the next 5 years at the very least.


And Eric replied:

 

 

Consider the way the costs work out. (Check out the workstation comparison in the latest issue of DV.) If your target is applications that are written for SMP, you could get much more cost-effective stations built from cheap Celerons than from a top-of-the-line P-III. The several-hundred-dollar premium for a 550 MHz P-III is enough to buy a couple of Celerons. Note also the physical dimensions of the Celeron in the Socket 370 form factor. The pair of CPUs fits easily into the space of a single Slot 1 connector without any additional bulk or the need for multiple fans.

If Win2K becomes the mainstream Windows, then there should be more applications supporting SMP and more consumer demand. One thing that should spark interest is the SMP support in Quake 3. SMP in Linux is now reasonably easy to install under the latest generation. Intel should be thoroughly enthused about pushing SMP into homes long before Moore's Law starts to sputter out. Contrary to popular belief, they do make a decent profit on Celerons, just not the massive margins the high end delivers. Throwing a pair of CPUs into a consumer box could go a long way toward making up the difference. On the other hand, AMD does lose money competing with the Celeron line and won't offer SMP support until the K7 becomes available in quantity. (Something Glaskowsky says could be far off.) Selling a configuration AMD cannot match would be a great way to raise the heat under the frying pan where AMD resides.

If Win2K ships this year, the next WinHEC will likely have some discussion of consumer apps even if a consumer-specific version doesn't arrive until much later. Computers have joined cars as a point of macho posturing at social gatherings. I think Win2K will take off into the home market without much effort on Microsoft's part beyond keeping the DirectX support up to date. Dual CPUs will replace dual carbs as a conversational gambit.

I think this could be a Next Big Thing quite soon if the right people spread the meme.

At this point enter Darnell Gadberry, who has a somewhat different view from the rest of us at Chaos Manor:

A Win32 application need only be written using threads to take advantage of SMP. Unfortunately, the Windows NT process scheduler is really stupid and wastes lots of time thrashing idle processes. Interestingly enough, there is great variation in the performance increase that BackOffice applications achieve on a multiprocessing box. SQL Server benefits greatly from running on an SMP box. By contrast, IIS performance improves by less than 10% with the addition of a second processor.

 

Remember, CPU throughput is just one factor in determining the overall performance of a machine. You have to look at memory bandwidth, I/O channel throughput, and task-switch overhead. Since the average business desktop loafs around at 25-40% of CPU capacity, it is unlikely that most people would see any substantive improvement by going to an SMP box.
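(For readers who want to see what "written using threads" means in practice, here is a minimal sketch in C using the standard Win32 calls CreateThread and WaitForMultipleObjects. The array-summing work is purely illustrative -- the only point is that once the work is split across threads, the NT scheduler is free to run those threads on separate processors.)

    #include <windows.h>
    #include <stdio.h>

    #define N 1000000

    static double data[N];

    typedef struct {
        int start;      /* first index this thread handles */
        int end;        /* one past the last index */
        double sum;     /* partial result written by the thread */
    } WorkRange;

    /* Each thread sums its half of the array; on a dual-CPU box NT can
       run the two threads on different processors at the same time. */
    static DWORD WINAPI SumRange(LPVOID arg)
    {
        WorkRange *w = (WorkRange *)arg;
        double s = 0.0;
        int i;
        for (i = w->start; i < w->end; i++)
            s += data[i];
        w->sum = s;
        return 0;
    }

    int main(void)
    {
        WorkRange halves[2];
        HANDLE threads[2];
        DWORD tid;
        int i, t;

        for (i = 0; i < N; i++)
            data[i] = 1.0;

        halves[0].start = 0;      halves[0].end = N / 2;  halves[0].sum = 0.0;
        halves[1].start = N / 2;  halves[1].end = N;      halves[1].sum = 0.0;

        /* Spin up one worker per half of the data. */
        for (t = 0; t < 2; t++)
            threads[t] = CreateThread(NULL, 0, SumRange, &halves[t], 0, &tid);

        /* Wait for both workers, then combine the partial sums. */
        WaitForMultipleObjects(2, threads, TRUE, INFINITE);
        printf("total = %.0f\n", halves[0].sum + halves[1].sum);

        for (t = 0; t < 2; t++)
            CloseHandle(threads[t]);
        return 0;
    }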

And Eric answered:

Yes, we're aware of threading and the wide variation in SMP benefits. You might note that I mentioned the need for Win2K (and Linux) to drive developers to make more threaded apps. The same can be said for a lot of PC features. For instance, if you're a Photoshop user the promise of the MMX instructions is realized, but in many cases the difference is only a tiny boost here and there. Even so, after the media deluge it soon became impossible to sell a PC that didn't have an MMX CPU, devaluing a lot of AMD and Cyrix inventory.

I think Intel could do the same thing with SMP in the gaming market. The Quake 3 engine will be widely licensed, making for a good base of major titles that know about dual processors. (Carmack has said he sees SMP for 3D games as a no-brainer compared to most apps because the decision on how to allocate processing is very simple: geometry setup gets its own CPU and everything else goes to the other. Benchmark heaven.) This same market is driving fast sales of state-of-the-art video boards despite the fact that almost no existing games can produce substantially better-looking output on those boards compared to their inexpensive predecessors.

If 3Dfx and NVIDIA can make this scam fly, why not the market titan Intel? If motherboard support became standard, the price point for adding a second Celeron would be no greater than choosing a $225 TNT2 Ultra over a $100 TNT. The trick is to target the right magazines. Multiprocessing has always been pitched to the MIS crowd. Time to whisper into the ears of the PC Games audience.
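(The split Carmack describes is simple enough to sketch. What follows is a rough, hypothetical frame loop in C with Win32 threads and events: one thread does nothing but geometry setup while the main thread handles everything else, with a handshake at each frame. DoGameLogic, SetupGeometry, and SubmitFrame are empty stand-ins for real engine work, not anything taken from Quake 3 itself.)

    #include <windows.h>

    /* Hypothetical engine work -- empty stand-ins, not real Quake code. */
    static void DoGameLogic(void)   { /* AI, input, sound, networking */ }
    static void SetupGeometry(void) { /* transform and light the scene */ }
    static void SubmitFrame(void)   { /* hand the finished frame to the card */ }

    static HANDLE frameWanted;          /* main thread wants the next frame */
    static HANDLE frameReady;           /* geometry for that frame is done */
    static volatile LONG running = 1;

    /* The second CPU does nothing but geometry setup, one frame at a time. */
    static DWORD WINAPI GeometryThread(LPVOID unused)
    {
        (void)unused;
        while (running) {
            WaitForSingleObject(frameWanted, INFINITE);
            if (!running)
                break;
            SetupGeometry();
            SetEvent(frameReady);
        }
        return 0;
    }

    int main(void)
    {
        HANDLE geom;
        DWORD tid;
        int frame;

        frameWanted = CreateEvent(NULL, FALSE, FALSE, NULL);  /* auto-reset */
        frameReady  = CreateEvent(NULL, FALSE, FALSE, NULL);
        geom = CreateThread(NULL, 0, GeometryThread, NULL, 0, &tid);

        for (frame = 0; frame < 1000; frame++) {
            SetEvent(frameWanted);   /* kick off geometry for this frame */
            DoGameLogic();           /* everything else stays on this CPU */
            WaitForSingleObject(frameReady, INFINITE);
            SubmitFrame();
        }

        running = 0;
        SetEvent(frameWanted);       /* wake the worker so it can exit */
        WaitForSingleObject(geom, INFINITE);
        CloseHandle(geom);
        CloseHandle(frameWanted);
        CloseHandle(frameReady);
        return 0;
    }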

And Alex added, in reply to Darnell:

But that begs the question I framed, doesn't it? If more compute-bound reasons to have fast CPUs can be found/invented/desired, then dual CPUs appeal commercially. What form this will take--natural language, streaming video, mind control--helps determine how high the power curve soars, at least in the volume production space.

And, as Eric mentioned, if dual-Celeron-type systems (I can but hope that AMD figures out how to make them pay) cost well less than equivalent mono-CPU ones, and W2K hits big, and there's a perceived need for speed--then dual-CPU systems will sell.

-- Alex

Prompting from Darnell:

Alex,

 

Let us not forget that we compu-nerd types are frequently influenced by an unseen and generally unnoticed reality distortion field. What you and I do with our desktop machines bears very little resemblance to what my father does with his Dell Dimension. I know very few people who have the actual (or even perceived) need for a dual 450 MHz processor-equipped desktop PC.

Doom fanatics - perhaps. Word 2000 users - I don't think so.

 

- darnell

Which finally got me into the act, with:

After being involved with these little machines for thirty years, I can say with some confidence that no one has ever had too much computing power for very long.

Just at the moment, hardware has jumped ahead of software, and aside from a few games there is no need for all the capacity in modern desktops. That will change.

The average user five years ago could not have conceived of the need for the equipment he has now. When Office 97 came out I denounced it as bloatware, but it has features I use every day now, and as to taking up disk space, the 300 megabytes it uses costs maybe $15.00 and the memory it hogs costs maybe twice that. Not worth worrying about.

We haven't yet got to the point where the home PC runs the house, but that will come.

There's just no such thing as an excess of CPU cycles: at least not for long.

Jerry

Darnell:

 

Please don't get me started on a rant about poorly written PC software. All things being equal, I would love to have a machine on my desktop that does 1.5 billion operations per second. Unfortunately, I fear that Word 2005 would require such a machine just to scroll text at an acceptable speed.

 

- darnell

And that got me to write:

Come now.

What are you trying to conserve? By expenditure of a lot of human effort you can in fact save a lot of computing cycles by making software more efficient; and what have you saved? More reliable is worth something. More efficient in the sense of saving cycles is not particularly worthwhile.

Sure, there is a larger customer base if software runs more efficiently and thus works on older machines; and that saves upgrading; and it's a decision that an MIS department with 20,000 machines to worry about has to give a lot of thought to. The software houses also have to give some thought to this.

Office 97 could probably be made to work on a 386 running Windows 3.1, but the effort required to do that would be very expensive, and beyond the praise of academically inclined computer scientists the reward for expending that effort would be just about nil.

In a command economy we could force software houses to spend the effort to make things "efficient" and conserve CPU cycles, just as the imbecilic power management "features" have been forced on us; but what would that do for us?

When hardware was expensive and improvements to hardware slow, IBM could make a lot of money with software systems that made efficient use of that hardware; and make money on the hardware too.

Today the hardware is commodity priced, and that trend will continue. When I denounced Office 97 as bloatware, 300 megabytes of disk space cost as much as Office 97 did. Now the cost of that disk space is trivial, and the "savings" to me from making Office 97 more "efficient" would not be noticed. Literally.

Office 2005 probably won't run very well on last year's equipment. So? It may not even run too well on today's, and it may choke up the average 2002 machine. And again, so?

The market will take care of gross inefficiencies. We will NEVER have the kind of elegance that most computer scientists want, because by the time that software is written, there will be an inefficient bloated program that does more and does it well enough. Thus it has been for 20 years and thus it will be for 20 more.

Moore's Law is the enemy of software elegance.

Prompting two replies:

Well said. I am willing to concede the point.

Next topic...

 

- darnell

and Eric, who will get the last word here:

What Jerry says is true for personal systems but Darnell has an important point when it comes to servers.

The bit about obsoleting hardware is rather overstated, though. There were HP P60 systems at Nexus that did a pretty adequate job of running Office 97. The biggest constraint came more from a lack of memory than from CPU power. I wouldn't want to be saddled with one of those units myself, but the users in this context had nothing to compare against. The P120 machines at Selby would have made them think they'd had the upgrade of a lifetime.

A big part of the reason mega-apps dominate, I believe, isn't just Moore's Law but also the psychology of consumers. A package like Office is relatively cheap these days. If it doesn't come with your machine chances are the combination of upgrade deals, rebates, and promotions will make it nearly free. After all, if everybody has a minimal proficiency from their home system this carries a lot of sway with corporate purchases, where the real money is. (The folks at Star Division have carried this to its pinnacle, where non-business use is flat-out free.)

This makes it very hard to make any money on a more tightly coded product that may serve the user better on their aging hardware. For instance, Yeah! Write (www.yeahwrite.com) is the 'Opera' of word processors for Win32, you might say. The download isn't much more than a megabyte, it's fast, it does most everything an average user could want, and it's only $29. The main guy behind Yeah! Write, Pete Petersen, is doing a decent little business but won't be rich anytime soon. Microsoft itself offers the Works package, which delivers considerably more than that ever-popular 10% of features most folks actually use, in a much less demanding package than Office. Even so, Works exists almost solely as a bundling item and is nearly non-existent at retail. Even though Works would often be the better choice, it's rare that Office doesn't get the nod.

The traditional reward stimulus needed for a company to produce tighter code in consumer apps just doesn't exist anymore. Back in the day, when anybody reading a computer mag had a certain geek level, you could tout an upgrade as having no new features but a smaller binary. When RAM was sold by the kilobyte and many had the experience of running short of memory just piddling around in BASIC, nobody had any difficulty appreciating why this was important. Nowadays though, the marketing people are afraid of sounding like they're offering less rather than more. The modern user isn't clued into the things that impress programmers.

All well and good for consumers. Faster CPUs and bigger drives, both cheaper with every passing month, absorb the brunt of new mega-apps faster than they can be coded. The same cannot be said for servers. Create a popular site and your usage levels can go from zero to tens of thousands a minute in less than a week. Nor can you necessarily just throw hardware at the problem. The costs are much higher, for one, and even if you have a blank check you'll soon run up against the limits of what is available in state-of-the-art hardware. From there you get into exotic massive multiprocessing platforms or ganging together a lot of servers. Either way, you're looking at big bucks and complexity. Run a big enough site and the problems soon begin to include real estate and the cost of keeping this assemblage from melting.

So, in this field tight code still counts for a lot. One reason many operators get annoyed at NT is its GUI orientation. Running admin tools in a friendly environment is a fine thing, but it's just a waste of RAM and CPU cycles during normal operation. The little waste-generating features here and there that cannot easily be turned off soon add up to the need to buy more hardware sooner than should have been necessary. If the drag on throughput means supporting 5% fewer customers in an hour (presuming you have that rarest of rarities: a viable business model), you're losing a good chunk of revenue to code that isn't serving your purpose.

That aside, I think cheap SMP sold on the macho-posturing tack could do big business.