Post #55,020
10/5/02 1:39:58 AM
|

Yeah, but they'll have more problems this time
because of marketing.
Even though the P-4 doesn't get as much work done per cycle, a 3 GHz P-4 sounds a lot more impressive than a 2 GHz Athlon (or an 800 MHz Itanic-2). So Intel has a built-in advantage when marketing to the clueless.
But when AMD has 64-bit (Hammer) Athlons priced at P-4 prices, even if 64 bits doesn't make them any faster than 32-bit machines, they will sound a lot better when AMD markets to the clueless.
I'm not convinced about how great Intel is. OK, I do give them credit for achievements (the first commercial DRAM, 1970; the first microprocessor, 1971). But frankly they got a lucky break with IBM (and rode it very well), and their real advantage now is marketing (ugh!, but I have to admit they do it very well) and process technology (which is helped by their size -- their capex probably accounts for a quarter of the whole semi biz) -- not great designers. In fact, IMNSHO, their latest designs seem crappy, e.g. Itanic, P-4, and XScale (another CPU that spins its wheels faster but doesn't get any more traction).
And, of course, they basically dropped out of a couple of growth areas, such as embedded (Intel 8051 architecture is very popular, but Intel's share is very low; this area is dominated by Motorola, Microchip, and others) and DSP (dominated by TI). Now they're trying to get back in, but based on their past I wouldn't spec them for anything embedded.
Tony
|
Post #55,070
10/5/02 4:33:18 PM
|

Another difference
There are a lot of people today using languages which do allocation and garbage collection for you. Many of these languages are written by people with a distinct Unix bent, who really like having a flat address space.
They may balk at having to mess up their internal data structures for that. If they don't accept it, and given that there is already a culture of acceptance for huge performance overheads, 64-bit might be more popular than Intel thinks.
Although thinking about it, and speaking out of my *ss, I wonder if languages with true garbage collection (like Java, Ruby and Perl 6, but not like Python or Perl 5) will have no problem adjusting. After all, the details of how you access data are already indirect (because of the garbage collection), so I can imagine people coming up with schemes where paging is all done behind the scenes and you are fine as long as you have less than 4 GB of objects.
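Something like the following is what I have in mind -- just a sketch in C, with every name invented, standing in for what a runtime would do internally. The point is only that if every access already goes through one extra hop, the runtime is free to shuffle (or page) the real storage behind your back:

/* Sketch only: a handle table standing in for a GC runtime's indirection. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    void  *addr;   /* where the object lives right now (NULL if paged out) */
    size_t size;
} Handle;

static Handle table[1024];   /* the runtime's handle table */

typedef size_t Ref;          /* what user code holds instead of a raw pointer */

static Ref gc_new(size_t size)
{
    static size_t next = 0;
    Ref r = next++;          /* no bounds checking -- it's a sketch */
    table[r].addr = malloc(size);
    table[r].size = size;
    return r;
}

/* Every access goes through the table, so a real runtime could fault a
   swapped-out object back in right here before returning the address. */
static void *gc_deref(Ref r)
{
    return table[r].addr;
}

int main(void)
{
    Ref s = gc_new(32);
    strcpy(gc_deref(s), "hello");

    /* The runtime can relocate the object; the Ref the program holds stays valid. */
    void *moved = malloc(32);
    memcpy(moved, gc_deref(s), 32);
    free(table[s].addr);
    table[s].addr = moved;

    printf("%s\n", (char *)gc_deref(s));
    return 0;
}

Whether the bookkeeping and the extra hop stay cheap enough is the part I have no feel for.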
Does anyone who actually knows anything about garbage collection techniques either way care to confirm or correct that guess?
Cheers, Ben
"Career politicians are inherently untrustworthy; if it spends its life buzzing around the outhouse, it\ufffds probably a fly." - [link|http://www.nationalinterest.org/issues/58/Mead.html|Walter Mead]
|
Post #55,096
10/5/02 11:31:38 PM
|

Actually, could be more
Actually, if you're willing to do the nasty memory management by hand, you can even get past the 4 GB limit. Both Linux and Windows give you a way to access PAE memory beyond 4 GB, but you have to keep track of what is where yourself. You still can't look at more than your address space's worth at one time, but by careful swapping of what's mapped into that address space you can access more memory overall.
The only applications I know of that use this right now are databases, but it should be possible in a system like Java, Smalltalk or Perl. With sufficient manipulation behind the scenes, any language where you can't get at a pointer directly should be able to take advantage of this without rewriting applications. I don't know if it would be worth the speed hit or not, though.
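Roughly the kind of game I mean, sketched in C as a sliding mmap() window over a file bigger than the 32-bit address space. The file name and sizes are made up, on Windows you'd reach for the AWE calls instead, and a real program would pick its windows far more cleverly:

/* Sketch only: never map more than WINDOW bytes at once, but walk a file
   much larger than a 32-bit process could map in one piece.  The kernel's
   PAE/highmem support keeps the rest cached above 4 GB. */
#define _FILE_OFFSET_BITS 64   /* 64-bit file offsets on a 32-bit build */
#include <stdio.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define WINDOW (64UL * 1024 * 1024)   /* look at 64 MB at a time */

int main(void)
{
    int fd = open("bigdata.bin", O_RDWR);   /* hypothetical 8 GB data file */
    off_t total = 8LL * 1024 * 1024 * 1024;
    off_t base;
    unsigned char *view = NULL;

    if (fd < 0) { perror("open"); return 1; }

    for (base = 0; base < total; base += WINDOW) {
        if (view)
            munmap(view, WINDOW);            /* hand back the old window */
        view = mmap(NULL, WINDOW, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, base);   /* slide the window forward */
        if (view == MAP_FAILED) { perror("mmap"); return 1; }

        view[0] ^= 1;                        /* touch something in this slice */
    }

    if (view)
        munmap(view, WINDOW);
    close(fd);
    return 0;
}

The ugly part is exactly what I said: the program, not the OS, has to remember which slice of the data is behind the window at any moment, which is probably why only databases bother.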
Without playing these sorts of games, the limit will be lower than 4 GB. Part of the address space is used by the OS and shared libraries. I think the latest versions of Linux will let you work with up to 3 GB, while Windows XP is limited to 2 GB.
Jay
|
Post #55,140
10/6/02 2:29:39 PM
|

Return of EMS/XMS anyone?
Ugh. I thought we all learned that lesson with the switch to a flat address space with 32-bit processors.
lister
|
Post #55,163
10/6/02 4:07:52 PM
|

Of course it *can* be done
The question is whether it makes sense to do it.
My thinking is that it doesn't make sense to do it when you have references and internal pointers that launch you to essentially random locations in memory. It might make more sense when the program has more opportunities to organize its memory usage to generate locality of reference.
There are another two trends that deserve consideration here. The first is whether Moore's law is getting hardware to the point where really bad hacks are OK because people don't mind the performance overhead. The opposing one is whether the ever-increasing relative cost of a cache miss will make it worthwhile, going forward, to try to automatically find and exploit locality of reference anyway. On the second note, take a look at [link|http://www.sourcejudy.com/|Judy arrays]. (The licensing got better. Perhaps I should look at them as well? :-)
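To make the cache-miss point concrete, here is a toy C comparison: summing the same numbers once by chasing list pointers scattered through the heap and once by scanning a packed array. I'd expect the packed scan to win on any recent box, but by how much depends on the machine and on how scattered the nodes really end up, so treat it as a sketch rather than a benchmark. Judy arrays are, roughly, an attempt to get the second kind of memory behaviour while still looking like a keyed structure from the outside.

/* Toy locality-of-reference comparison, not a careful benchmark. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 1000000

struct node { long value; struct node *next; };

int main(void)
{
    struct node *head = NULL, *p;
    long *packed = malloc(N * sizeof *packed);
    long i, sum1 = 0, sum2 = 0;
    clock_t t0, t1, t2;

    /* One allocation per node, so the list is spread around the heap;
       the array is one contiguous block holding the same values. */
    for (i = 0; i < N; i++) {
        struct node *n = malloc(sizeof *n);
        n->value = i;
        n->next = head;
        head = n;
        packed[i] = i;
    }

    t0 = clock();
    for (p = head; p; p = p->next)
        sum1 += p->value;                 /* pointer chase */
    t1 = clock();
    for (i = 0; i < N; i++)
        sum2 += packed[i];                /* sequential scan */
    t2 = clock();

    printf("list:  sum=%ld  ticks=%ld\n", sum1, (long)(t1 - t0));
    printf("array: sum=%ld  ticks=%ld\n", sum2, (long)(t2 - t1));
    return 0;
}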
Perhaps the time has come to reconsider the pervasive use of hashing algorithms?
Cheers, Ben
"Career politicians are inherently untrustworthy; if it spends its life buzzing around the outhouse, it\ufffds probably a fly." - [link|http://www.nationalinterest.org/issues/58/Mead.html|Walter Mead]
|
Post #55,202
10/6/02 8:42:49 PM
|

Whee!
My Atari 800 had 64K of RAM... 16K of main memory, then three 16K extra banks (~$100 apiece) that you could swap in by POKEing a particular address.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #55,099
10/5/02 11:43:09 PM
|

boy now I feel really old
you mean you no longer have to explicitly release the memory lock you engendered? I thought that you still have to clean up behind yourself. Need to get out of old think into new think, I guess.
get 256k manipulate 256k release 256k
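in C terms that drill looks about like this (sizes and strings made up):

/* get it, use it, give it back -- nothing cleans up behind you */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t size = 256 * 1024;               /* "get 256k" */
    char *buf = malloc(size);
    if (buf == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    memset(buf, 0, size);                   /* "manipulate 256k" */
    strcpy(buf, "work happens here");
    printf("%s\n", buf);

    free(buf);                              /* "release 256k" -- skip this and you leak */
    return 0;
}

the new think, as I understand it, is that the free() part happens for you.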
thanx, bill
will work for cash and other incentives [link|http://home.tampabay.rr.com/boxley/resume/Resume.html|skill set]
qui mori didicit servire dedidicit
|
Post #55,193
10/6/02 7:08:40 PM
|

You're kidding, right?
I haven't dealt with memory juggling in about 9 years. Went from 'C' to Perl and never looked back.
Well, not exactly. Sometimes you still need to code in a certain way to ensure efficient use of memory, and to make sure you don't have any leaks.
But you never dance on the stack.
|
Post #55,200
10/6/02 7:54:57 PM
|

I do perl scripting not perl programming :-)
and since I was a tadpole I was taught to explicitly get an address space and make sure the OS knew I was done with it when I was finished. Since scripting batches is quite different from programming, I never bothered to use bounded memory chunks. Remember, I program in self-defence cleaning up after programmers; I wouldn't call myself a programmer. thanx, bill
will work for cash and other incentives [link|http://home.tampabay.rr.com/boxley/resume/Resume.html|skill set]
qui mori didicit servire dedidicit
|