[SGVLUG] Linux Sonoma (Centrino) Support

Dustin laurence at alice.caltech.edu
Fri Sep 23 17:57:59 PDT 2005


On Thu, 22 Sep 2005, Chris Smith wrote:

> >         me->CrossOffList("Sony Vaio");

Huh.  Should have been "this->".

> Me either. There are some Sonys that score very high on the function
> side of things, so don't rub them out entirely.

I'm probably either going to get one from a small white-box retailer, or 
just buy the parts and assemble it.  The latter seems to be viable now for 
a laptop, even though it didn't seem to be just a few years ago.

> > Recall in another life I was a physicist.  Floating point speed gives me a
> > warm feeling all over. :-)
> 
> So you must really love the Itanium2. ;-)

Mmm, I haven't paid enough attention to be sure, but very likely. :-)  It
ought to make a great basis for a Linux platform, since we can recompile
everything rather than being stuck with the slow x86 emulation (seems like
I heard Intel warmed up a bit to Linux over that very issue, trying to
drum up more support for Itanic).  DistroWatch doesn't list it as an
architecture choice, though, so I can't easily tell how many distros
actually support it.  It looks like at least Red Hat, SuSE, Debian, and
Gentoo do, so that's good enough.

The only caveat is that everything that matters is massively parallel
nowadays anyway, so I suspect it's bang per buck that counts rather than
performance per node.  Is Itanic2 good enough to compete with adding more
slower-but-cheaper nodes?
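
Back-of-the-envelope it's really aggregate throughput per dollar, something
like the toy comparison below -- the prices and GFLOPS numbers here are
completely made up, only the shape of the calculation matters:

    /* Toy cluster-economics comparison; every number here is invented. */
    #include <cstdio>

    int main() {
        double itanium2_gflops = 6.0,  itanium2_cost = 4000.0;  /* assumed */
        double cheap_gflops    = 3.0,  cheap_cost    = 1000.0;  /* assumed */
        double budget = 40000.0;                                /* assumed */

        /* For embarrassingly parallel work, buy whatever maximizes total
           throughput for the budget, not per-node speed. */
        std::printf("Itanium2 nodes: %.1f GFLOPS total\n",
                    budget / itanium2_cost * itanium2_gflops);
        std::printf("cheap nodes:    %.1f GFLOPS total\n",
                    budget / cheap_cost * cheap_gflops);
        return 0;
    }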

> Think of PPC is being like MIPS or Alpha, only their first version
> didn't work too well (IBM RT), so they kind of hacked back in some of
> the CISC-y instructions they knew and loved.

I vaguely remember "RT".

> > My understanding was that the cache was what killed the PPro in terms of
> > price, but it had some funky system where the cache was bonded onto the
> > CPU or some such weirdness that drove the cost up.
> 
> Well, it had a weird design where the cache was on its own die, but
> it was bonded to the chip early on, so a flaw in either die meant you
> had to scrap both.

Yes, that's what I was remembering.  I guess it was the manufacturing
process, not necessarily the cache per se.  I don't have any idea what
chip yields are considered OK, but it could just be that multiplying the
success rates (i.e. because the dies were bonded before testing, the
overall yield was the product of the CPU and cache yields) was enough to
cause trouble.
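
Just to put toy numbers on that (nothing below is a real Intel figure,
it's only there to show the shape of the problem):

    /* Hypothetical yields, purely to illustrate why bonding before
       testing hurts: one bad die scraps the whole package. */
    #include <cstdio>

    int main() {
        double cpu_yield   = 0.80;  /* assume 80% of CPU dies are good   */
        double cache_yield = 0.80;  /* assume 80% of cache dies are good */
        double combined    = cpu_yield * cache_yield;   /* 0.64 */
        std::printf("combined yield: %.0f%%\n", combined * 100.0);
        return 0;
    }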

> cache was 62 million. On top of that the cache was locked to the clock
> rate of the CPU core, which is more than a little crazy (in a lot of
> ways it was almost like Intel added in some slightly slower 64,000
> anonymous general purpose registers ;-).

That is something I hadn't heard before.

Mmmmmmm, 64k registers. :-)

I guess *that* would shut up the RISC bigots!

> Oh there are other things besides cache misses, but they do turn out
> to be a huge deal.

Not that I've ever been that sort of programmer, but I gather that to
zeroth order the old tricks don't matter so much anymore, because a cache
miss outweighs all the other issues that used to be important.  Except, I
suppose, pipeline stalls on something with as many instructions in flight
as a P4, but after those two I gather there isn't much extra performance
to be gotten from the tricks that used to matter.

Until you get to disk I/O, of course.
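
Here's a toy C++ illustration of the cache point (arbitrary sizes, nothing
real or tuned): summing the same array row-wise and then column-wise is
identical arithmetic, but the strided walk misses the cache on nearly
every access once the array outgrows it, and that usually swamps
everything else.

    /* Toy cache-miss demo: same arithmetic, different access order. */
    #include <cstdio>
    #include <ctime>
    #include <vector>

    int main() {
        const int n = 2048;                /* arbitrary size, ~32 MB of doubles */
        std::vector<double> a(n * n, 1.0);
        double s = 0.0;

        std::clock_t t0 = std::clock();
        for (int i = 0; i < n; ++i)        /* row order: sequential memory */
            for (int j = 0; j < n; ++j)
                s += a[i * n + j];

        std::clock_t t1 = std::clock();
        for (int j = 0; j < n; ++j)        /* column order: strided, cache-hostile */
            for (int i = 0; i < n; ++i)
                s += a[i * n + j];

        std::clock_t t2 = std::clock();
        std::printf("sum=%g  rows: %ld ticks  columns: %ld ticks\n",
                    s, (long)(t1 - t0), (long)(t2 - t1));
        return 0;
    }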

As long as CPU speeds keep outstripping main memory, I guess that trend
will continue, and I suppose caches will continue to get bigger.

Dustin


