Post #139,485
2/3/04 12:19:54 AM
|
Followed a few of the newsletter links - usual pattern emerging.
Just found info that points to possible IBM vs MS schism on GRID directions ...
"Companies seek to marry Web Services & GRID" -- [link|http://www.gridcomputingplanet.com/news/article.php/3301191|http://www.gridcompu...ticle.php/3301191]
EXTRACT >> Access to these resources, she said, allows "just in time" procurement with many suppliers, systems outage detection and recovery, and grid-based workload balancing. For example, WS-Notification can automatically alert suppliers that they need to restock merchandise once inventory decreases. It can also be set up so that only the supplier with the best bid fills the order.
In basic respects, this idea mirrors that of WS-Eventing, a specification cooked up by Microsoft, BEA and TIBCO that describes the communication of events in a Web services architecture.
Because IBM usually works with Microsoft and BEA on Web services standards, Norsworthy was asked at the time why they chose not to participate. Norsworthy told internetnews.com that Big Blue was doing its own work in this space and declined to join the WS-Eventing effort because they had different priorities.
While such separation of Web services specifications is normally fodder for speculation about catastrophic schisms that could threaten to sunder the software community, Norsworthy Tuesday expressed confidence that the two specs would someday converge, and noted that TIBCO's presence on both specs is a good harbinger of this potential.
Ronald Schmelzer, senior analyst with XML and Web services research firm ZapThink, said the announcement answers the questions he originally had about why IBM declined to participate in a spec that could help them spread their Web services gospel and ultimately improve their WebSphere software platform, which is so vital to the Armonk, N.Y. company's success.
"It was conspicuous that IBM was absent from that announcement, and now we know why -- IBM was working on their own spec, WS-Notifications," Schmelzer told internetnews.com. "The difference between the two specs is the intended focus and technology. IBM found that their priorities were for allowing brokers to be present between the event publisher and subscriber, and in addition to support management of the notifications and end points.
Also, IBM wanted to support their Grid initiatives that required notifications to make it work. Simply put, these were not priorities for the Microsoft, et al. team, and as a result, IBM decided to come out with their own spec focusing on these priorities."
Meanwhile, the WS-Resource Framework, authored by The Globus Alliance, HP and IBM, describes how to utilize the related specifications to model the resources in the context of Web services.
<<
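The broker-in-the-middle design IBM cites as its main difference from WS-Eventing is essentially publish/subscribe with an intermediary that owns the subscriptions. A minimal sketch of the pattern (plain Python; the class and topic names are invented for illustration, not the actual WS-Notification API):

```python
# Minimal broker-mediated publish/subscribe -- the pattern WS-Notification
# adds over direct publisher-to-subscriber eventing. All names here are
# invented for illustration; this is not the WS-Notification API.

class Broker:
    def __init__(self):
        self.subscriptions = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.subscriptions.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # The publisher never sees the subscribers; the broker owns
        # delivery, so it can also manage or filter endpoints.
        for callback in self.subscriptions.get(topic, []):
            callback(message)

broker = Broker()
restock_alerts = []
broker.subscribe("inventory/low", restock_alerts.append)

# A stock monitor publishes; the supplier-side callback fires via the broker.
broker.publish("inventory/low", {"sku": "A-100", "qty": 3})
```

The point of the broker is the indirection: publisher and subscriber never see each other, so the intermediary can manage or re-route notifications and endpoints, which is what IBM says its Grid work needed.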
Doug
|
Post #139,486
2/3/04 12:24:27 AM
|
What is it?
Is this an effort at distributed computing? With 64 processor servers soon to be common, with terabytes of RAM, who needs it, whatever it is?
-drl
|
Post #139,489
2/3/04 12:42:29 AM
2/3/04 12:46:32 AM
|
As far as I can tell ...
It is a way of tying together heterogeneous computers to share resources.
The Web Services bit looks like a dynamic middleware that could make a big difference in how viable the whole concept is (vendors collaborating vs vendors at war).
It doesn't really seem to be new, just that the supporters of it seem to have garnered industry wide acceptance of the concept & an open standards base (WebSvcs) to now build on.
That's my interpretation thus far, based on the work we did back at IBM in the early 1990s. Today, I envisage someone like our company deciding at some stage (who knows when) to adopt a strategy that says all new fixed workstations can become part of a GRID & that this GRID computing facility then becomes part of the company's IT infrastructure & gets factored into the IT capacity for daily work.
It could be a strategy to get everyone thinking in terms of shared computing resources in such a way that services companies can then offer to farm the 'GRID' & eventually have IT shops use terminals to do their IT but rent the GRID. It seems to fit in with the notion of an era of 'Services Oriented Architecture', which can be reinterpreted as the golden opportunity for the big players to get customers onto fixed-flow annuity revenue (the dream of MS & IBM & Oracle etc: etc:).
There is also IBM's repeated catch-phrase 'Computing on-demand' which seems to support the above notion.
As mentioned in another post, this concept of splitting work, using parallel compilers & divided workloads, has been working well for over 10 years at many Universities & certainly at IBM research facilities.
My opinion thus far is it is an IT industry vendor initiative aimed at changing the computing model such that they shift to being GRID Service Providers. In the long term, the perhaps anticipated death knell of IT shops as we know them today. Hmmmmmmmmm!!!
Doug Marker
Edited by dmarker
Feb. 3, 2004, 12:46:32 AM EST
|
Post #139,492
2/3/04 12:52:30 AM
|
Re: As far as I can tell ...
I used to think a lot about this. In the end it's pointless because the processors are so much faster than the network, and what you gain in parallelism you lose in latency.
Massive clustering of multi-processor machines is IMO "where it's at" for this problem. Imagine 64 clustered 64 processor machines connected by Gb ethernet. You could compute the weather in a small room.
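The latency-vs-parallelism trade-off above can be put in rough numbers. A back-of-envelope model (all figures invented for illustration):

```python
# Back-of-envelope: when does spreading work over a slow network stop
# paying? All figures are invented for illustration: each node does
# 1e9 ops/sec, and each coordination round trip costs 100 ms.

OPS_PER_SEC = 1e9
RTT_SEC = 0.100

def runtime(total_ops, nodes, round_trips):
    compute = total_ops / (nodes * OPS_PER_SEC)  # assume perfect speedup
    network = round_trips * RTT_SEC              # serial latency overhead
    return compute + network

# A 10-second job split 100 ways, but needing 200 round trips to coordinate:
one_node = runtime(1e10, nodes=1, round_trips=0)
grid = runtime(1e10, nodes=100, round_trips=200)
# Even with perfect parallelism the distributed version loses: the
# accumulated latency (20 s) dwarfs the parallel compute (0.1 s).
```

Under these assumed numbers the distributed run is twice as slow as the single machine, which is exactly the "what you gain in parallelism you lose in latency" point.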
The idea of "agents" roaming around to where they need to operate is more interesting. Think of a "super" search engine that creates a process that migrates to where a database is kept, and then comes back when satisfied. What this could do to privacy is scary. The machine itself need not run the query, so it could be a very pedestrian machine. This could work for any idiom that involved "form filling".
-drl
|
Post #139,511
2/3/04 2:46:03 AM
|
Re: As far as I can tell ...
One paper I read this week pointed out that fibre comms capacity was doubling every 9 months while hardware power was still doubling every 18 months, & that at this rate the balance will swing heavily toward using comms to boost the GRID concept.
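Taking those two doubling periods at face value, the swing compounds quickly. A quick sanity check of the arithmetic (the 9- and 18-month figures are from the paper; the 6-year horizon is just an example):

```python
# The paper's figures: fibre bandwidth doubles every 9 months, hardware
# power every 18. The ratio of bandwidth to compute therefore itself
# doubles every 18 months. The 6-year horizon below is just an example.

def growth(months, doubling_period_months):
    return 2 ** (months / doubling_period_months)

months = 6 * 12
bandwidth_gain = growth(months, 9)     # 2**8 = 256x over six years
compute_gain = growth(months, 18)      # 2**4 = 16x over six years
ratio = bandwidth_gain / compute_gain  # a 16x relative swing toward comms
```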
Doug
|
Post #139,526
2/3/04 6:45:54 AM
|
They used to say that about SMP
Memory far away was slower. Cray, then Sun fixed that, so NUMA was not really a requirement.
I'm aware of (but my ride is about to show up so I'm not going to start researching) a "network" technology that is fast enough to use for remote memory access. Faster than Myrinet, which is the cluster gold standard. I'll track it down for you later. The killer is the price, of course, which I think is about $5,000 per port on an 8 port switch. But it'll come down, just like GB ethernet did.
So the CPUs will get faster again, but we will cross the threshold of fast enough, and we will go into a NUMA-like architecture for local and remote CPUs with a fake SMP-like cluster.
|
Post #139,629
2/3/04 4:09:16 PM
|
Re: They used to say that about SMP
Computers are already limited by the speed of light and finite size of circuit boards and even discrete components. Now tell me how latency over long connections is going to be a simple problem.
-drl
|
Post #139,631
2/3/04 4:33:06 PM
|
Perhaps I am missing some history, but...
My understanding is that the reason that you want NUMA is that the SMP strategy simply does not scale. The more CPUs you add, the more time each CPU spends waiting on the rest. Pretty soon you hit diminishing returns.
You can improve that by going to finer-grained locks (more locks, each held for a shorter time), making each CPU hog somewhat less of everyone else's time.
This adds overhead, but pushes off when you get diminishing returns. You still hit a wall though.
NUMA is still scaling well with a few thousand CPUs. You don't hear of people using more than 64 CPUs very often with SMP because you are wasting the other CPUs.
My further understanding is that SMP is the more widely used because it is easier to program to, and (particularly with Moore's law improving the CPUs) very few people have CPU needs beyond what SMP can provide.
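The diminishing-returns argument above is roughly Amdahl's law, with the serialized (lock-held) region as the sequential fraction. A quick sketch, assuming an illustrative 2% of each CPU's work happens under a global lock:

```python
# Amdahl's-law view of SMP lock contention. The 2% serial fraction is
# an assumed, illustrative figure, not a measurement.

def speedup(cpus, serial_fraction=0.02):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cpus)

# Speedup saturates near 1 / serial_fraction (here, 50x) regardless of
# CPU count; finer-grained locks shrink the serial fraction, pushing
# the wall out without ever removing it.
few = speedup(4)      # well under 4x
many = speedup(1024)  # still under 50x
```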
Cheers, Ben
PS Seconding what Ross said, as your machine spreads out and chips speed up, relativistic latency becomes an ever-growing issue. Sure, throughput can be scaled as far as you are willing to pay for. But Einstein ain't so cheap to buy off.
"good ideas and bad code build communities, the other three combinations do not" - [link|http://archives.real-time.com/pipermail/cocoon-devel/2000-October/003023.html|Stefano Mazzocchi]
|
Post #139,641
2/3/04 5:38:26 PM
|
I dug up these docs that cover a range of technologies
In particular they offer various opinions on hardware chip advances vs network growth & network speed improvements.
The main point, I guess, is that if network speeds do advance dramatically, then SMP will be equalled or bettered by clusters (clusters assume homogeneous computing) and then GRIDs (heterogeneous computing).
The case for GRID is that by the time all the interfaces & tools & standards are set, GRID will become the dominant computing model. IBM are taking this view & have announced they will GRID enable *all* their platforms.
Doug Marker
........................................
Moore's, Metcalfe's & Gilder's laws (Gilder: Bandwidth rises three times faster than computer power). [link|http://www.findarticles.com/cf_dls/m0BRZ/12_22/98977161/p1/article.jhtml|http://www.findartic.../p1/article.jhtml]
Grid computing & Moore's law [link|http://gridcafe.web.cern.ch/gridcafe/Gridhistory/moore.html|http://gridcafe.web....istory/moore.html]
Moore's law and processor chips [link|http://www.wired.com/news/technology/0,1282,50672,00.html|http://www.wired.com...282,50672,00.html]
Wi-Fi & Moore's law [link|http://www.ciol.com/content/news/2003/103061003.asp|http://www.ciol.com/...003/103061003.asp]
Moore on Moore's law [link|http://news.com.com/2100-1001-203750.html?legacy=cnet|http://news.com.com/....html?legacy=cnet]
Metcalfe's law & Networking (1998 - Jim Barksdale) [link|http://wp.netscape.com/columns/mainthing/it.html|http://wp.netscape.c...mainthing/it.html]
|
Post #139,654
2/3/04 8:06:41 PM
|
Check out Infiniband
[link|http://www.computerworld.com/hardwaretopics/hardware/server/story/0,10801,89037,00.html?f=x76|http://www.computerw...037,00.html?f=x76]
Low latency data movement faster than any "regular" CPU can read it right now.
I foresee a mixture of faked SMP and NUMA based on Infiniband. It'll give the single system image for ease of programming. Clusters will pick up on the next step.
For small data, high CPU partitioned compute tasks, Grids are the most cost-effective.
But corporate programmers are lazy. They take a single system model, throw a few CPUs at it, and it seems to work. They don't have the budget or the expertise to test real scaling. They release it, it becomes business critical, and the performance tanks. Right now the only easy fix is SMP.
I think we will hit a price sweet spot where 4-8 CPU boards are cheap and the next step becomes prohibitive compared to clustering. Mix in infiniband connections and you have nice building block scalability.
|
Post #139,662
2/3/04 9:21:53 PM
|
Re: Check out Infiniband - Tks had not seen it before
In the mid 90s I did some presentations on ATM & how it was likely to provide the needed backbone bandwidth for the Internet to grow. An ISP in Singapore grabbed hold of me after one show & set about explaining to me that as good as ATM was, it would lose out to Ethernet tcp/ip wholly because ATM required replacing what was already working (even if tcp/ip was not super efficient).
He turned out to be right. Am not sure yet if Infiniband fits into this category (will read up on it a bit more).
Tks for the link.
Doug Marker
|
Post #139,675
2/3/04 11:33:15 PM
|
Apples and Oranges
ATM was for carriers who needed the small frame with the QOS for voice. It was way too expensive for the average company to use, and the expenses were ongoing. There were comparable-speed alternatives at the next level down that most connections used, that were cheaper, and nobody cared about the latency for IP.
Infiniband is not that much more expensive than GB ethernet was 2 years ago (if that), while allowing for many times the throughput. Once you buy it, you gain the speed and you are not paying ongoing costs (unlike ATM). Once in, nobody is going to sell you on a cheaper alternative. It shows huge expandability based on current tech, just by adding wires.
Can't compare the two.
While you can ride TCP/IP over it, that is a huge waste. The native protocol is MUCH faster. This is not a network technology, this is a bus extender which is faster than all current buses. I think the only things that compare are the memory crossbars in current SMP boxes. And as a bus extender, you can then build real SMP via building blocks. Or really fast NUMA when the SMP locks get to be too much overhead.
|
Post #139,674
2/3/04 11:22:30 PM
2/3/04 11:35:50 PM
|
I found this diag on IBM site
[link|http://www-106.ibm.com/developerworks/grid/library/gr-heritage/|http://www-106.ibm.c...rary/gr-heritage/]
Halfway down is the diag that positions network perf etc: in relation to benefits of GRID
This introduction to GRID compares GRID with Clustering, CORBA & Peer-2-Peer. It handles the comparison quite well, as the writer knows what he is talking about & seems to hit all the key points. Re Corba, for example, he highlights the incompatibility of Corba with the web (no exploitation of http & no use of web end-point identities). Web Services builds on the best of Corba by solving the shortcomings just mentioned, thus taking full advantage of the web, and *best-of-all* introduces the concept of dynamic late binding between interfaces, something Corba can't do.
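That "dynamic late binding" point, resolving a service's interface at call time instead of compiling against a fixed stub, can be sketched in toy form (the registry and service name below are invented for the example; real Web Services would resolve endpoints via service descriptions rather than an in-memory dict):

```python
# Toy illustration of late binding: the client looks up the service at
# call time from a registry, rather than compiling against a fixed IDL
# stub as with classic CORBA. Registry and service name are invented.

registry = {}

def register(name, endpoint):
    registry[name] = endpoint

def invoke(name, *args):
    # Binding happens here, at invocation time; the client needs no
    # compile-time knowledge of the implementation behind the name.
    return registry[name](*args)

register("quote-service", lambda sku: {"sku": sku, "price": 9.99})
result = invoke("quote-service", "A-100")
```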
Doug M
Edited by dmarker
Feb. 3, 2004, 11:35:50 PM EST
|