Post #156,981
5/25/04 9:09:06 PM
5/25/04 9:11:42 PM
|
OO takes credit for sunrises even
That sounds like a good approach. And now that you have them in separate files, you can associate the file with the appropriate customer(s); maybe even make that file an attribute or property of each customer. Then you can collapse your (possibly huge) IF statement into a one-liner!
Congratulations. We've just reinvented OO. Nope, Lisp, which predates OO by about 7 years. (And some say that lambda calculus did it first.) I would generally suggest such a solution, but Scott suggested that we avoid things like "eval" because logging in is too sensitive. In other parts of the system, after login, I may be more likely to suggest it. However, in practice the one-to-one association between subroutines and entity instances tends to dissipate over time for most things, in my observation. The coupling between nouns and actions in the real world is rather loose. If your observations are different, so be it.
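To illustrate the Lisp-ish "code in collections" idea without eval, here is a rough sketch (Python-style, with invented names; not anybody's production code): keep references to the routines themselves in an ordinary lookup structure, keyed by a value stored per customer.

    # Sketch: a dispatch table mapping a strategy name (stored with each
    # customer) to a plain function. All names here are hypothetical.
    def login_plain(user):
        print("plain login for", user)

    def login_encrypted(user):
        print("encrypted login for", user)

    LOGIN_STRATEGIES = {
        'plain': login_plain,
        'encrypted': login_encrypted,
    }

    def do_login(customer, user):
        # customer['loginStrategy'] would come from the customer record
        LOGIN_STRATEGIES[customer['loginStrategy']](user)

    do_login({'loginStrategy': 'plain'}, 'alice')

The huge IF statement collapses to a single lookup either way; the argument is only over where the mapping lives.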
________________ oop.ismad.com
Edited by tablizer
May 25, 2004, 09:11:42 PM EDT
|
Post #156,984
5/25/04 9:12:57 PM
|
Scheme and the Lambda Calculus
In many respects, Scheme (a member of the Lisp family) is just the Lambda Calculus with about a dozen special forms stacked on top of it. Of course, the special forms are what make it an "interesting" language.
|
Post #156,985
5/25/04 9:13:22 PM
|
Re: OO takes credit for sunrises even
ALL parts of the system are performance sensitive when your database server is a $500K box and it's near capacity. Sloppy, inefficient code like what you are proposing is deadly.
Since the OO code provides the same benefits automatically without the performance problems, which is objectively better?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,001
5/26/04 1:24:08 AM
5/26/04 1:30:09 AM
|
No way Jose -- Gotta go to DB anyhow
ALL parts of the system are performance sensitive when your database server is a $500K box and it's near capacity. Sloppy, inefficient code like what you are proposing is deadly.

A user logs in only once or twice a day on average. I am assuming that we don't have to keep re-authenticating them for every task or event. Assuming they don't time out, one login a day is good enough. Besides, we have to retrieve the customer attributes record anyway, even under OO. In most uses of table-driven strategy patterns, we have to retrieve the containing record anyhow. It is no extra burden on the DB.

I'll tell you what. We will use OO for the login, but for any other need for the strategy pattern beyond that, we will retrieve the function name from the record and Eval it. In practice I don't see strategy needed that often. I mostly see it in table-driven menu systems, which I have seen at least four other independent developers put to use before they ever met me: the McCosker AS/400 accounting system, NovaQuest's FoxPro system, a stint at that e-learning company, and Ken's trending system. I am not the only Toppie around. (Perhaps table-oriented companies tend to hire me, though.) I will give you OO for the login if the rest is free to do p/r. Deal? OO has its niche areas. They just ain't large.

I can't believe I allowed you to drag me this far without telling you where to put your Eval ban. You must have worked pretty hard to dig up a reason to avoid Eval. I gotta give you a B+ for cleverness, but you still flunk the general OO evidence test. At best you move the possible uses of OO from about 3% to 5% of total project. Maybe for other uses of TOP, DB performance is an issue, but not for the strategy pattern. If tables are "bad" because they are slow, so be it. Like I keep saying, 15 years ago slowness was often given as a reason not to use OO. NEXT!
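For the record, the Eval version is a one-liner once the record is in hand. A rough Python sketch (column and function names invented; a careful system would whitelist which names may be evaluated, especially anywhere near login):

    # Sketch: dispatch on a function name stored as an ordinary column
    # in the already-retrieved record. Hypothetical names throughout.
    def login_plain(user):
        print("plain login for", user)

    def login_encrypted(user):
        print("encrypted login for", user)

    ALLOWED = {'login_plain', 'login_encrypted'}

    def dispatch_login(customer_row, user):
        name = customer_row['loginStrategy']   # just another attribute
        if name not in ALLOWED:                # guard eval against bad data
            raise ValueError("unknown strategy: " + name)
        eval(name)(user)                       # look up the function by name

    dispatch_login({'loginStrategy': 'login_encrypted'}, 'alice')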
________________ oop.ismad.com
Edited by tablizer
May 26, 2004, 01:30:09 AM EDT
|
Post #157,014
5/26/04 8:11:48 AM
|
Er...
Apparently you missed the word "all". There's no room in our system for any slow code. I'm not talking about just logins. Eval ain't gonna happen, pal. At best you move the possible uses of OO from about 3% to 5% of total project. Uh, no. Like I said before, your eval "trick" is just a manual substitute for OO programming. More inefficient and higher maintenance costs. If tables are "bad" because they are slow, so be it. Hey, feel free. You wanted objective proof, there it is.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,051
5/26/04 12:40:01 PM
|
"Use OO because OO is faster" is the best you can do?
Uh, no. Like I said before, your eval "trick" is just a manual substitute for OO programming. More inefficient and higher maintenance costs. It is *not* higher maintenance costs. Your reasoning seems to be based on the decision about whether config info is put in files or tables. If we already have some or most config info in tables, then the additional effort of putting yet more in there is no more than putting it in a file. You are simply pro-file and anti-table. And what is with this "manual" talk? The function/script name is just another attribute like any other attribute one puts in tables. The OO approach is often to create a class that mirrors the DB entity, putting the behavioral stuff in the class while the attributes still come from the DB. This schema mirroring is "manual" too. I will concede that TOP techniques are often not as fast as in-language polymorphism. If one uses OOP for speed, that is little different than using assembler for speed because high-level languages would be too slow. By your logic we should all be using assembler. And, it is not necessarily "I/O". Due to caching, many DB queries never even have to hit disk. The overhead is because the application EXE is one "system" and the database another. Something stored/kept in the same language that uses it is usually going to be faster than something stored/kept in a different system, for example.
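What I mean by schema mirroring, as a sketch (Python-ish, with hypothetical column names):

    # Sketch: an OO class that mirrors the customer table's schema.
    # The attributes still originate in the DB; the class re-declares them.
    class Customer:
        def __init__(self, row):
            self.id = row['id']                    # mirrors column "id"
            self.name = row['name']                # mirrors column "name"
            self.login_strategy = row['loginStrategy']

        def login(self, user):
            # behavior lives here; the data still comes from the table
            ...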
________________ oop.ismad.com
|
Post #157,058
5/26/04 1:12:56 PM
|
Serious question
You are simply pro-file and anti-table. You keep saying you don't like OO, but all your comparisons seem to be comparing tables to files. Is your main beef with files vs tables? Because it looks to me like you and Scott are having two different discussions.
===
Implicitly condoning stupidity since 2001.
|
Post #157,084
5/26/04 4:08:44 PM
|
Interrelated
You keep saying you don't like OO, but all your comparisons seem to be comparing tables to files. Is your main beef with files vs tables? Because it looks to me like you and Scott are having two different discussions. They are interrelated, because people tend to use OO to compensate for limits of hierarchical file systems, but I prefer databases for such. And, to communicate info between the program and the database, files are sometimes needed (or used to speed things up), because compilers and interpreters are better integrated with file systems than with database systems. For example, "include" commands in programs are adapted to grab code from files, but not directly from databases. There is pro-file bigotry out there. Scott's argument seems to be that OO and file-centricity currently work well together and that is why one should go with them instead of table-centric approaches. It is kind of a QWERTY argument: standards protect themselves because they create mini-industries and habits around such standards, even if they have problems. My argument is that even though conventions limit their power, table-centric approaches are still superior, or at least not clearly inferior.
________________ oop.ismad.com
|
Post #157,093
5/26/04 4:54:00 PM
|
Re: Interrelated
table-centric approaches are still superior, or at least not clearly inferior. You've demonstrated no superiority whatsoever. The only thing you've demonstrated is possible code maintenance parity (by using control tables and poorly imitating polymorphism), which has quite a few deficiencies, including poorer performance and migration maintenance.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,095
5/26/04 5:00:33 PM
|
Re: "Use OO because OO is faster" is the best you can do?
If we already have some or most config info in tables, then the additional effort of putting yet more in there is no more than putting it in a file. You are simply pro-file and anti-table. Incorrect. The table config information is more difficult to migrate between environments, and doesn't support revision control. And what is with this "manual" talk? The function/script name is just another attribute like any other attribute one puts in tables. It's what you do with it that's manual. You have to create your own jump table using eval, whereas OO techniques get that built in for free. I will concede that TOP techniques are often not as fast as in-language polymorphism. If one uses OOP for speed, that is little different than using assembler for speed because high-level languages would be too slow. By your logic we should all be using assembler. If, as you say, table techniques and OO techniques are equally fast for development, then we should prefer the technique that performs better at runtime: OO. Assembler requires vastly greater development time, and as such is not a contender. You're being daft again. And, it is not necessarily "I/O". Due to caching, many DB queries never even have to hit disk. Ah, I see. You believe this to be true because you've never worked on a large system. Juggling IO requirements is a constant battle. Ask any DBA for a large system. The whole world isn't XBase, happiness, and light. The overhead is because the application EXE is one "system" and the database another. Something stored/kept in the same language that uses it is usually going to be faster than something stored/kept in a different system, for example. Not when the "EXE" is a stored procedure running in the same process, as in the parm table example I posted. The overhead WAS IO. This was PROVEN by analysis.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,236
5/27/04 1:42:29 PM
|
Table != Disk
Incorrect. The table config information is more difficult to migrate between environments, and doesn't support revision control. Taken up in another message. I showed how to add revision tracking. It's what you do with it that's manual. You have to create your own jump table using eval, whereas OO techniques get that built in for free. I don't have to create no "jump table". I use the existing table from the DB. In the example of your 600 parameters, assuming it is in a dictionary array, we just do this:

    eval(clientDict['loginStrategy'])

Ah, I see. You believe this to be true because you've never worked on a large system. Juggling IO requirements is a constant battle. Ask any DBA for a large system. Even some OO'ers have complained that OR mappers slow things down. Maybe those OO'ers are just not as smart as you, and so have speed problems. Not when the "EXE" is a stored procedure running in the same process, as in the parm table example I posted. The overhead WAS IO. This was PROVEN by analysis. I heard it was possible to cache *entire* Oracle tables in RAM so that there is little or no disk I/O. Maybe there is an Oracle bug or your DBA is dumb. I will agree that sometimes caching and other techniques don't work as we expect, and we have to resort to hacky shit like converting tables into giant case lists and the like. But just because an approach creates a problem for situation A, does not necessarily mean we banish it from the face of the earth. If I find a specific performance bug in Java does that mean all of OO is rotten?
________________ oop.ismad.com
|
Post #157,238
5/27/04 1:51:37 PM
|
Re: OR mappers slowing things down
Even some OO'ers have complained that OR mappers slow things down. Maybe those OO'ers are just not as smart as you, and so have speed problems. You've repeated this line of argument several times now, so I suppose it's worth asking the question: "slow things down" relative to what? First point. A wrapper, whether it's OR or PR (procedural relational) adds an extra layer of abstraction, with some costs usually involved (though there can be gains with things like caching as well). In terms of these OOP people you keep referring to, what they are saying is that they would rather have an OO-database (persistence mechanism of some sort). They are not comparing it to your ideal language and saying that OR-mapping is slower than that ideal. They are simply saying that an OO-database is to be preferred to OR-mapping. And, yes, if you have that luxury, it will be much faster.
|
Post #157,246
5/27/04 2:38:09 PM
|
Re: OR mappers slowing things down
First point. A wrapper, whether it's OR or PR (procedural relational) adds an extra layer of abstraction, with some costs usually involved That is what I keep saying about relational and tables: higher abstraction may cost some in performance. In terms of these OOP people you keep referring to, what they are saying is that they would rather have an OO-database Some do, but as I understood it, many recommend more direct access to the RDBMS or creating a lite-layered custom OR-mapper that fits that particular app. By the way, what do you think is the reason for poor OODBMS sales?
________________ oop.ismad.com
|
Post #157,248
5/27/04 2:51:27 PM
|
Re: OR mappers slowing things down
That is what I keep saying about relational and tables: higher abstraction may cost some in performance. Note the fact that you elided the last part of the comment (some gains wrt caching, and there are some other optimizations possible as well, such that you may actually get better performance). Note also that procedural abstractions have the same concerns as OO ones, in terms of creating a procedural abstraction layer. Procedural languages are no more related to the relational calculus than OO languages are. Some do, but as I understood it, many recommend more direct access to the RDBMS or creating a lite-layered custom OR-mapper that fits that particular app. Note that OO languages are just as capable of using raw SQL commands as procedural ones. By the way, what do you think is the reason for poor OODBMS sales? Probably because of all the hype generated by the table programmers.
|
Post #157,261
5/27/04 3:32:44 PM
|
Re: OR mappers slowing things down
and there are some other optimizations possible as well, such that you may actually get better performance May be true of TOP techniques also. Why Scott's "600" table couldn't cache in RAM more effectively, who knows. Maybe they just couldn't find the right Oracle tweak. Procedural languages are no more related to the relational calculus than OO languages are. It is just that OO and databases tend to fight over territory. In p/r, the "noun attribute model" is mostly in the DB, not code structures; but in OO you have classes that tend to mirror the noun attribute model, fighting with the DB over that duty. Note that OO languages are just as capable of using raw SQL commands as procedural ones. True, but it starts to look rather procedural in design if you do that. Probably because of all the hype generated by the table programmers. I wish it were true. Fight fire with fire :-)
________________ oop.ismad.com
|
Post #157,266
5/27/04 3:52:22 PM
|
Procedural abstraction
In p/r, the "noun attribute model" is mostly in the DB, not code structures; but in OO you have classes that tend to mirror the noun attribute model, fighting with the DB over that duty. You use these words as if they actually mean something. I'm thinking you don't quite understand the concept of abstraction - you seem to use it as if it's a "bad" word. The point of abstraction is to hide implementation details at a lower level of code such that the code built on top of that abstraction need not worry about it. Specifically, if you abstract away how, when, and where a method (or procedure or function) goes about its business of doing a request, then you are halfway to abstraction. Now build a procedural model that doesn't care about how data is stored (any number of database vendors, or persistence, or text files, or ...). You soon find that building an abstraction in the procedural code is just as hard (if not harder, since you limit your toolbox). The fact that you assume that Procedural and Relational go hand in hand means that you miss the obvious fact that you are tightly coupled to a specific modus operandi. Now if what you want is to not build an abstraction of the storage mechanism, I'd say that most OO languages are more than happy to oblige. After all, OO languages are a superset of procedural ones, since they always have the ability to stuff all the code into a single static method.
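To make this concrete, here is a bare-bones sketch of such a storage seam (Python, with invented names): callers depend on one contract and don't care whether a customer comes from a text file, a database, or a test stub.

    # Sketch: the storage mechanism hides behind load_customer's contract.
    # Swapping implementations does not touch any calling code.
    def make_file_store(path):
        def load_customer(cid):
            with open(path) as f:
                for line in f:
                    id_, name = line.rstrip("\n").split("\t")
                    if id_ == cid:
                        return {"id": id_, "name": name}
            return None
        return load_customer

    def make_memory_store(rows):
        def load_customer(cid):
            return rows.get(cid)
        return load_customer

    # Pick a backend; callers only ever see load_customer.
    load_customer = make_memory_store({"42": {"id": "42", "name": "Acme"}})
    print(load_customer("42"))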
|
Post #157,286
5/27/04 4:59:40 PM
|
My abstraction can beat up your abstraction
Relational is about much more than JUST "storage". That is what OO'ers don't get. They use it JUST for storage, but then end up reinventing all the other stuff in their OO app anyhow. They have to reinvent it because OO does not provide enough power out-of-the-box. To add it requires reinventing a (navigational) database. Relational provides a fairly standardized way to manage state and noun attributes that OO lacks. Everybody ends up doing it so differently. Plus, OO often hard-wires access paths into the design.
If I wanted to be able to easily swap database engines, then I could just use lowest-common-denominator SQL. But why don't I do this? because I want to use the rest of the DB features also. BTW, SQL is an interface, not an implementation. Ponder that. The only way OO systems get out of vendor lock is to have a translation layer. There is no reason an equally-powerful (and maybe equally flawed) intermediate query language could not be built for procedural. The fact that it does not exist likely means the need for it is not as great as OO'ers claim. Plus, the OO frameworks tend to be language-locked. Thus the choice is DB vendor lock or language lock so far at this stage in the swap wars. Pick your poison. If you can clearly demonstrate that OO is higher abstraction without fuzzy zen talk, be my guest.
________________ oop.ismad.com
|
Post #157,296
5/27/04 5:23:52 PM
|
re: Relational is more than storage (new thread)
Created as new thread #157295 titled [link|/forums/render/content/show?contentid=157295|re: Relational is more than storage]
|
Post #157,302
5/27/04 5:44:43 PM
|
It wasn't a caching issue
It was CPU spinning due to searching through the table for data.
Though I admit to being puzzled as to why IF/THEN written in PL/SQL would beat a hash lookup from adding the right index to a table.
Cheers, Ben
To deny the indirect purchaser, who in this case is the ultimate purchaser, the right to seek relief from unlawful conduct, would essentially remove the word consumer from the Consumer Protection Act - [link|http://www.techworld.com/opsys/news/index.cfm?NewsID=1246&Page=1&pagePos=20|Nebraska Supreme Court]
|
Post #157,311
5/27/04 6:36:08 PM
|
No file IO
In memory always beats file IO.
The algorithm produced a hard-coded binary search in IF/THENs. :-)
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,314
5/27/04 6:42:57 PM
|
I understand how it worked
It just surprises me that binary search in PL/SQL beats an index lookup.
After all index lookups can be implemented many ways, including binary search or a hash lookup. Personally with 2000 things I'd expect a properly coded hash lookup to beat a binary search.
Oh well. Optimization often has little surprises like that for obscure implementation reasons.
Cheers, Ben
To deny the indirect purchaser, who in this case is the ultimate purchaser, the right to seek relief from unlawful conduct, would essentially remove the word consumer from the Consumer Protection Act - [link|http://www.techworld.com/opsys/news/index.cfm?NewsID=1246&Page=1&pagePos=20|Nebraska Supreme Court]
|
Post #157,317
5/27/04 6:51:28 PM
|
Index lookup in code, or table index?
Table index requires file IO.
If you're talking about using string keyed hashes in the programming language, keep in mind that this is v8 PL/SQL. There ain't no sich beastie. Integer index only.
9i has associative arrays, but there are still some deficiencies to them.
Even if we had decent hashes, since the connection state is blown away between pages there's no place to keep the hash without it being recreated every time. Persistence in this situation requires that the data be represented by code.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,323
5/27/04 7:14:44 PM
|
That requirement would shock me
Table index requires file IO.
Why? Oracle should be smart enough to cache a frequently used table in RAM. If you change it, then you have to hit disk. But if you don't, then there is no reason in the world for them to be so stupid, and I don't think that they are stupid.
Furthermore your problem was that the query was spinning lots of CPU. Slowness from hitting I/O won't show up on your CPU usage statistics.
If I were going to guess the cause of the problem, I'd bet on low-level implementation details. An index lookup is fast. But before Oracle gets there, the execution path has to include getting a latch (that's a kind of lock), looking for the cached query plan for the current query, finding it there (let's ignore the parse route, since most of the time the common query has a parse in cache), releasing the latch, interpreting that plan, realizing that the plan says to do an index lookup, locating the appropriate index, realizing that it is in cache, doing the index lookup, looking for the appropriate row, finding it in cache, reading it, and returning it. I've probably missed something that it does. You'll note that several of these steps involve string comparisons that are going to take CPU time.
That's the overhead which I think makes it possible to beat an index lookup using straight PL/SQL.
Cheers, Ben
To deny the indirect purchaser, who in this case is the ultimate purchaser, the right to seek relief from unlawful conduct, would essentially remove the word consumer from the Consumer Protection Act - [link|http://www.techworld.com/opsys/news/index.cfm?NewsID=1246&Page=1&pagePos=20|Nebraska Supreme Court]
|
Post #157,332
5/27/04 10:10:43 PM
|
Re: That requirement would shock me
Why? Oracle should be smart enough to cache a frequently used table in RAM. And if they're all frequently used? :-) Furthermore your problem was that the query was spinning lots of CPU. If I gave that impression, it was in error. Performance was decreased, but that doesn't necessarily mean more CPU. Basically the question you ask the profiling tool is "how much time is being spent doing foo?" Whether that time is spent doing IO or spinning the CPU doesn't matter. It's still time spent. And if the time spent is 15% of the overall time spent across the system, then it's a good candidate for optimization. I'll have to talk to the DBA on Tuesday to get the particulars. The developer who rewrote it was a little fuzzy on why it was so slow in the first place (this was two years ago). He just remembered that it had something to do with file IO, and that pinning the table made no difference.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,334
5/27/04 10:33:28 PM
|
Question about Oracle tables
Is it possible to define a memory-based partition that holds selectable tables? This isn't a question about the particular problem/solution you are talking about, but I've always thought that allowing the programmer to set up memory-based tables might be a useful optimization technique for certain lookup tables that you know are used frequently.
|
Post #157,341
5/27/04 11:06:22 PM
|
You can pin them in memory.
Assuming you have enough memory. As I said, I'm going to have to take it up with the DBA as to why that wasn't sufficient.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #158,339
6/3/04 10:56:28 PM
|
Oracle tables pinned in memory.
As it turns out, I was wrong all around, and right for the wrong reasons.
The actual problem was lock contention. The table was pinned, but Oracle places micro locks for reads, effectively serializing reads on single blocks. The CPU was churned by grabbing and releasing locks on the parm data repeatedly. Since we make extensive use of that data (as I indicated, this is Bryce's dream architecture), the lock management became a significant consumer of CPU. Why Oracle needed to lock read-only data I neglected to find out. This is also a significantly dumbed-down version of the explanation I was given. :-P
Also I was misremembering the %cpu being used. The actual figure was MUCH higher. The DBA estimates that we would have been maxed out at 25% of our current capacity had the change not been made.
An interesting comment he made: Oracle considers the heavy use of a single parm table such as we were doing to be an application design flaw. :-)
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,338
5/27/04 11:04:10 PM
|
You did give the impression that CPU was the issue
In your description at [link|http://z.iwethey.org/forums/render/content/show?contentid=157019|http://z.iwethey.org...?contentid=157019] it said 15% of CPU time was spent on this query. I've been working from the assumption that this was the problem that needed solving.
If that is wrong, then reasonable theories to explain what didn't happen are, of course, superfluous.
Cheers, Ben
To deny the indirect purchaser, who in this case is the ultimate purchaser, the right to seek relief from unlawful conduct, would essentially remove the word consumer from the Consumer Protection Act - [link|http://www.techworld.com/opsys/news/index.cfm?NewsID=1246&Page=1&pagePos=20|Nebraska Supreme Court]
|
Post #157,340
5/27/04 11:05:24 PM
|
Whoops, my mistake.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,250
5/27/04 2:56:53 PM
|
Re: Table != Disk
I don't have to create no "jump table". I use the existing table from the DB. In the example of your 600 parameters, assuming it is in a dictionary array, we just do this:

    eval(clientDict['loginStrategy'])

Otherwise known as a "jump table". Here's a list of entry points; for this client, jump to this code to do the work. Thank you for the verification of the technique. Ah, I see. You believe this to be true because you've never worked on a large system. Juggling IO requirements is a constant battle. Ask any DBA for a large system. Even some OO'ers have complained that OR mappers slow things down. Maybe those OO'ers are just not as smart as you, and so have speed problems.
Which has nothing to do with what you quoted. We're talking about database engines. Juggling IO issues is a constant concern on this scale. Cached values are constantly being thrown out because so much data is moving through the system, causing hits to disk and file IO. Our DBA has a PhD in database management. I think he's probably slightly more versed in the particulars than you. I heard it was possible to cache *entire* Oracle tables in RAM so that there is little or no disk I/O. Maybe there is an Oracle bug or your DBA is dumb. See above. He has a PhD. You simply don't understand the issues involved. As I said, ask any DBA for a large system. I will agree that sometimes caching and other techniques don't work as we expect, and we have to resort to hacky shit like converting tables into giant case lists and the like. But just because an approach creates a problem for situation A, does not necessarily mean we banish it from the face of the earth. If I find a specific performance bug in Java does that mean all of OO is rotten? This is your main technique. Used on these scales it causes performance problems. Or do you have evidence to the contrary? And given that it causes performance problems, do you have any other suggestions?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,273
5/27/04 4:14:42 PM
|
This is an area in which I'm proud of Dejavu
The default Unit Server class has a "cache lifetime" value you can set; if a unit is not requested by client code within that period, it gets flushed out of the cache. Deployers can set a sweeper process to run every 5 minutes, every hour, every day, whatever they find is best--AND, can do that either at app startup with a config file, or just do it on the fly (OK, I haven't written the "on the fly" part yet, but it wouldn't be hard).
But the cool part IMO is that you don't have to use the default Server class or its default components. For example, I have a BurnedRecaller that, on the first request (even if it's filtered), loads _all_ objects of that Unit class into the cache and keeps them there. You could just as easily make one that does no caching at all.
In other words, I tried to make testing and then using different cache strategies monkey-easy.
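For the flavor of it, a toy version of the lifetime-plus-sweeper idea (plain Python; this is not Dejavu's actual API, just the shape of it):

    import time

    # Toy cache: entries expire after 'lifetime' seconds of disuse unless a
    # pluggable keep() policy pins them. Names invented for illustration.
    class TTLCache:
        def __init__(self, lifetime, keep=None):
            self.lifetime = lifetime
            self.keep = keep or (lambda key: False)  # default: expire normally
            self._data = {}  # key -> (value, last_access_time)

        def put(self, key, value):
            self._data[key] = (value, time.time())

        def get(self, key):
            value, _ = self._data[key]
            self._data[key] = (value, time.time())   # touch on access
            return value

        def sweep(self):
            # Call this from a periodic sweeper process or thread.
            now = time.time()
            for key in list(self._data):
                _, last = self._data[key]
                if now - last > self.lifetime and not self.keep(key):
                    del self._data[key]

    # A "keep everything" strategy, in the spirit of the BurnedRecaller:
    pinned = TTLCache(lifetime=300, keep=lambda key: True)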
|
Post #157,281
5/27/04 4:53:04 PM
|
Nifty.
Hibernate is pretty flexible with caching as well. There's even one caching strategy that clusters across machines.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,019
5/26/04 8:35:37 AM
|
Another little anecdote for you:
Here we have the exact kind of table you're talking about, for holding configuration information that affects program flow. We call them "parms". They're used everywhere:

    IF pkg_getparm.get(client, 'SOME-PARM', 'A') = 'B' THEN
        do_the_b_thing
    END IF;

'get' is just a simple DML call that selects from the parm table. If the parm for that client isn't found, it selects again for the default value. If no default is found, then that 3rd argument is used. Then people noticed that we spent a sizable portion of our time just getting parms from the parm table. Like about 15% of the system's CPU time. This is a lot on a system that is constantly bumping up against capacity because of added features. So one of the PL/SQL guys wrote a Perl script that reads the parm table and constructs a stored procedure that uses IF/THEN/ELSE logic in a binary search pattern to contain all the parms (about 30,000 or so). This works much faster since we don't have any IO now; it's just PL/SQL code running. The cost goes down to about 2%. And these guys are a lot more experienced at writing and tuning Oracle code than you and I are. Now, considering that the cost on the database server for doing this in an OO fashion is pretty much 0%, I'd have to say that you're full of crap. There is a very noticeable hit from using parms in this fashion. And since 1) it adds no value (you still have to use text files to manage the config information in order to do code promotions) and 2) it's still slower and 3) you're just emulating polymorphism anyway and 4) now you have the extra development burden of maintaining config files, a script, and dealing with running the script every time you just want to change a lousy parm...
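The generator is roughly this shape (a Python sketch of the idea, not the actual Perl script; the real one reads the parm table and also handles the client/default fallback):

    # Sketch: compile a sorted list of (parm_name, value) pairs into a
    # hard-coded binary search as nested IF/THEN/ELSE, so lookups run as
    # pure code with no table IO. The per-client dimension is omitted.
    def gen(parms, indent="  "):
        if len(parms) == 1:
            name, value = parms[0]
            return (indent + "IF p_name = '%s' THEN RETURN '%s'; "
                             "ELSE RETURN p_default; END IF;" % (name, value))
        mid = len(parms) // 2
        pivot = parms[mid][0]
        return (indent + "IF p_name < '%s' THEN\n" % pivot
                + gen(parms[:mid], indent + "  ") + "\n"
                + indent + "ELSE\n"
                + gen(parms[mid:], indent + "  ") + "\n"
                + indent + "END IF;")

    parms = sorted([("CRYPT-METHOD", "B"), ("HOME-PAGE", "/index"),
                    ("TIMESTAMP-WINDOW", "30")])
    print("FUNCTION get_parm(p_name VARCHAR2, p_default VARCHAR2)\n"
          "RETURN VARCHAR2 IS BEGIN\n" + gen(parms) + "\nEND;")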
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,050
5/26/04 12:27:41 PM
|
OT: Scott, can we please do something about long lines?
The post that I replied to causes the lines in every post of the thread to need to be scrolled left to right to read them. I know that the post has a <pre> or some such HTML goobers on it (and all of Jake123's posts have similar formatting which causes any thread that he replies to to do that same horrible thing).
My suggestion: Any posts that do not have <pre> or <tt> or <code> or any other goobers that would dick up the otherwise excellent formatting your engine provides should wrap to the browser border. Any posts that do contain such goobers should also wrap to the browser border, except for those lines that are goobered, which should render as formatted.
I have absolutely no idea how difficult this is, but boy... would it aid the usability of these fora (without making jake123 reformat his .sig!)...
thanx-
jb4 shrub·bish (Am., from shrub + rubbish, after the derisive name for America's 43rd president; 2003) n. 1. a form of nonsensical political doubletalk wherein the speaker attempts to defend the indefensible by lying, obfuscation, or otherwise misstating the facts; GIBBERISH. 2. any of a collection of utterances from America's putative 43rd president. cf. BULLSHIT
|
Post #157,055
5/26/04 1:04:43 PM
|
Dang, this was SUPPOSED to go into the Suggestions forum
So...Is there any way to get it there besides cutting and pasting it?
jb4 shrub·bish (Am., from shrub + rubbish, after the derisive name for America's 43rd president; 2003) n. 1. a form of nonsensical political doubletalk wherein the speaker attempts to defend the indefensible by lying, obfuscation, or otherwise misstating the facts; GIBBERISH. 2. any of a collection of utterances from America's putative 43rd president. cf. BULLSHIT
|
Post #157,079
5/26/04 3:45:01 PM
|
The other way, besides cut and paste, is to re-type it :-)
|
Post #157,200
5/27/04 10:37:13 AM
|
There isn't enough time in the world...
Oh! the Carnage!
Oh! the Humanity...!
;-)
jb4 shrub·bish (Am., from shrub + rubbish, after the derisive name for America's 43rd president; 2003) n. 1. a form of nonsensical political doubletalk wherein the speaker attempts to defend the indefensible by lying, obfuscation, or otherwise misstating the facts; GIBBERISH. 2. any of a collection of utterances from America's putative 43rd president. cf. BULLSHIT
|
Post #157,217
5/27/04 12:43:31 PM
|
HTH: As with Perl, There's More Than One Way To Do It
|
Post #157,069
5/26/04 3:08:03 PM
|
Perhaps one might play with CSS clip and overflow...?
|
Post #157,097
5/26/04 5:03:23 PM
|
Not that I'm aware of.
Unless Mr. Brewer's suggestion has legs.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,021
5/26/04 8:45:08 AM
|
Retrieving the customer record anyway:
Besides, we have to retrieve the customer attributes record anyway, even under OO. Actually, the OO code never has to. There's nothing in the client record related to logins. The only tables hit are the user and account tables, and then only at the end when it's time to save the data. Now for the procedural code, let's say you put your "strategy pattern" parms in the customer record. All 600 of them. Does this seem like a good way to do that? All the normal client stuff plus 600 columns used for flow control? No? Then you need a parm table too, which is an additional hit over and above hitting the client table. See my anecdote above.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,048
5/26/04 12:19:22 PM
|
question
No? Then you need a parm table too, which is an additional hit over and above hitting the client table. Why would we need a separate table for them?
________________ oop.ismad.com
|
Post #157,096
5/26/04 5:01:17 PM
|
Re: question
So you think it's a good idea to have 600+ columns in a single table?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,098
5/26/04 5:06:03 PM
|
The other method is to have one column....
...that could be a free form field with 600 different purposes.
|
Post #157,099
5/26/04 5:09:50 PM
|
Re: The other method is to have one column....
Unless you have to support all 600 different purposes at once.
Ah! Let's use a comma-delimited field and parse it every time we need a value! :-)
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,101
5/26/04 5:22:00 PM
|
That's the spirit!!!
|
Post #157,111
5/26/04 7:09:21 PM
|
I would have to look at the nature of the data
________________ oop.ismad.com
|
Post #157,116
5/26/04 7:41:22 PM
|
Are you kidding??
Bryce, you claim that you do this kind of thing ALL THE TIME. You should be able to roll this off the top of your head!
Here's the situation:
You have 600 "control points" or whatever you want to call them that you make decisions at with a control table. Organized by client.
You said that we'd just need a single hit to the client table to get all the parameters. This implies that you need 600+ columns on that table. Stop weaseling. True or false?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,162
5/27/04 1:43:07 AM
|
suggestion 3
You have 600 "control points" or whatever you want to call them that you make decisions at with a control table. Organized by client. You said that we'd just need a single hit to the client table to get all the parameters. This implies that you need 600+ columns on that table. Stop weaseling. True or false? I originally did not know you had 600. Anyhow, if you have a table like the one I described in the "flintstone" message, then why not load it into a dictionary array upon login if you want to avoid querying each record?
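Something like this (a rough Python sketch; the schema names are invented, and the bind-variable style varies by driver):

    # Sketch: one query at login, then every parm lookup afterward is a
    # plain dictionary hit. 'cursor' is assumed to be an open DB cursor.
    def load_parms(cursor, client_id):
        cursor.execute(
            "SELECT parm_name, parm_value FROM parms WHERE client_id = :1",
            (client_id,))
        return dict(cursor.fetchall())

    # parms = load_parms(cursor, 42)
    # if parms.get('CRYPT-METHOD', 'A') == 'B': ...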
________________ oop.ismad.com
|
Post #157,174
5/27/04 8:55:03 AM
|
Re: suggestion 3
OK, so now you're saying your original design won't work. Progress.
So, we have a client table, and some parm table to be loaded at login.
What if it's a stateless environment? You have 1200 connections pooled between all the web users, and when a new page is requested global state is cleared from the last user to use that connection. All of the work is done in the database in stored procedures, so you don't have a place to keep cached stuff like that.
Now what do you do?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,228
5/27/04 1:23:42 PM
|
bottleneck black box
OK, so now you're saying your original design won't work. I did not say that. YOU claim it is too slow. I have to take your word for it on faith. It is possible you are just blowing smoke. Maybe if OO or some other goofy practice did not bloat up the REST of the system, we would have more margin for TOP practices. I cannot study your system to see where other bottlenecks may be. What if it's a stateless environment? You have 1200 connections pooled between all the web users, and when a new page is requested global state is cleared from the last user to use that connection. All of the work is done in the database in stored procedures, so you don't have a place to keep cached stuff like that. If you use OO or an OR mapper instead, then *something* has to store the state between submits. Some web languages allow one to store dictionary arrays in session variables and some don't (out of the box). According to my documentation, ASP can store dictionary arrays as session variables, but I have never tried it myself. One can perhaps serialize/unserialize the array as a string and session it that way, but I don't know if that will add too much processing to your fragile system.
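To illustrate the serialize/unserialize idea (a rough Python sketch; 'session' stands in for whatever string-valued store the web language provides):

    import json

    # Sketch: flatten the parm dictionary to a string so it can ride in a
    # session variable, then rebuild it on the next request.
    def save_parms(session, parms):
        session['parms'] = json.dumps(parms)

    def restore_parms(session):
        return json.loads(session.get('parms', '{}'))

    session = {}   # stand-in for a real session object
    save_parms(session, {'CRYPT-METHOD': 'B', 'TIMEOUT': '30'})
    print(restore_parms(session))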
________________ oop.ismad.com
|
Post #157,231
5/27/04 1:30:59 PM
|
Storing Dictionary as Session variables in ASP
Them dictionary thingies you refer to are called "Objects". Let's see:

    Set MyDictionary = CreateObject("Scripting.Dictionary")

Behold the power of OOP!!! Seriously though, you have to be careful about the amount of data you stuff into Session objects (in ASP and on other platforms as well). Memory is a valued commodity on a web server, and if you eat too much of it up with Session vars, it has to start swapping them to and from disk. And then there's the question of distributed web processing, where the servicing of the web page may be distributed among several web servers. The Session vars have to be able to pass to the servicing web server, which may not be the one that instantiated the Session. Passing around large objects between servers can degrade performance.
|
Post #157,239
5/27/04 1:52:52 PM
|
It has gotta go *somewhere*
Them dictionary thingies you refer to are called "Objects". That is true. ASP does not have built-in dictionary arrays. They chose to implement them as an API instead, which means we can't use convenient array syntax. Note that one can implement them using the "handle" API approach also. Thus, we don't need OOP to do the same. Seriously though, you have to be careful about the amount of data you stuff into Session objects (in ASP and on other platforms as well). Memory is a valued commodity on a web server, and if you eat too much of it up with Session vars, it has to start swapping them to and from disk. That is why I would keep that 600 thingy in tables if possible and let the DB handle RAM caching. But if Scott caches it in RAM, then it is in RAM. It is either in RAM or in tables or in files. Scott's approach seems to be using RAM also. It will have the same problems caused by being in RAM as a sessioned array would. Interesting material: [link|http://www.c2.com/cgi/wiki?ProgrammingWithoutRamDiskDichotomy|http://www.c2.com/cg...tRamDiskDichotomy]
________________ oop.ismad.com
|
Post #157,241
5/27/04 2:08:10 PM
|
ASP is OO
Perhaps not done well enuf, but OO none-the-less.

    Response.Write("I'm an OO method")
    Session("I'm_an_OO_session_variable")
    Request("I'm_an_OO_request_variable")

VBScript is brain dead when it comes to constructing objects, but you ain't gonna get very far with ASP without objects. Perhaps they "could have", "should have" done it differently - but they didn't - and my guess is that they had a lot of "objective" reasons why they chose the path they chose.
|
Post #157,243
5/27/04 2:28:21 PM
|
re: ASP is OO
Response.Write("I'm an OO method") The first time I saw that, I thought, "oh shit. They borrowed the Java anti-Demeter dot bloat for print()". VBScript is brain dead when it comes to constructing objects, but you ain't gonna get very far with ASP without objects. Do you mean that one has to use existing OOP API's in order to talk to MS services, or that one must create their *own* classes in order to implement maintainable biz logic? and my guess is that had a lot of "objective" reasons why they chose the path they chose. Microsoft objective? Ha ha. Actually, they tend to copy what a competitor is selling well at a given time. MS is not known to love OO. They were slow to fix the inheritance in VB, for example.
________________ oop.ismad.com
|
Post #157,249
5/27/04 2:55:24 PM
|
ASP = COM
The first time I saw that, I thought, "oh shit. They borrowed the Java anti-Demeter dot bloat for print()". It's called COM (component OBJECT model). Do you mean that one has to use existing OOP APIs in order to talk to MS services, or that one must create their *own* classes in order to implement maintainable biz logic? Meaning classes are second-class (as opposed to first class) in VBScript. Note that they are still useful and used quite a bit in VBScript. Microsoft objective? Ha ha. Actually, they tend to copy what a competitor is selling well at a given time. MS is not known to love OO. They were slow to fix the inheritance in VB, for example. And I thought you were keen on MS, seeing as how Longhorn is trying to use SQLServer for the File System.
|
Post #157,258
5/27/04 3:18:32 PM
|
re: ASP = COM
It's called COM (component OBJECT model). I meant the syntax, not how it is implemented. Hmmm. I wonder how closely the ChiliSoft ASP clone sticks to the COM model? Meaning classes are second-class (as opposed to first class) in VBScript. What is the difference between second-class classes and first-class classes? Nah. maybe I don't wanna know. And I thought you were keen on MS, seeing as how Longhorn is trying to use SQLServer for the File System. MS does some things well, and some poorly. I will praise them for good stuff, and cuss them for stupid stuff. For example, I like the case-insensitivity in their tools. But their default of "smart quotes" in Word really sucks.
________________ oop.ismad.com
|
Post #157,262
5/27/04 3:33:09 PM
|
ChiliSoft ASP
I've not done more than play with it, but the ChiliSoft ASP works pretty good. Biggest problem is how well it deals with custom COM components written in VB and C++. It does provide a COM-like container, but it works only so far. If you stick with the standard five ASP objects (Application, Response, Session, Request, Server) and the four standard VBScript objects (Err, Dictionary, FileSystemObject, TextStream), then you won't have too many problems. Anyhow, the way Chilisoft implements ASP is by using OO programming techniques. But then, somehow I know you knew that I would say that. What is the difference between second-class classes and first-class classes? Nah. maybe I don't wanna know. Generally speaking, it's the ability of the language to add libraries to itself, and not have the distinction between those libraries you wrote vs. the standard libraries that come with the environment. MS does some things well, and some poorly. I will praise them for good stuff, and cuss them for stupid stuff. For example, I like the case-insensitivity in their tools. But their default of "smart quotes" in Word really sucks. So when they agree with you - they are being rational. But when they make a design decision you disagree with - they are being irrational.
|
Post #157,275
5/27/04 4:19:21 PM
|
Interesting terminology
What is the difference between second-class classes and first-class classes? Nah. maybe I don't wanna know. Generally speaking, it's the ability of the language to add libraries to itself, and not have the distinction between those libraries you wrote vs. the standard libraries that come with the environment.
I would have thought: first-class classes are themselves objects which can be passed around. Second-class classes are not objects. Both can be used to create objects, but only one is itself an object. Or something.
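In Python terms, for instance (a quick sketch):

    # With first-class classes, the class itself is an ordinary value you
    # can pass around and store, not just a template for making objects.
    class A:
        def hello(self):
            return "A"

    class B:
        def hello(self):
            return "B"

    def make(cls):                      # a class arrives as a plain argument
        return cls()

    registry = {"a": A, "b": B}         # classes stored in a dictionary
    print(make(registry["a"]).hello())  # prints: A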
|
Post #157,279
5/27/04 4:32:41 PM
|
You're probably correct convention-wise
(Had a link I was gonna post on the subject matter of "first-classness" have to do with first class messages, but the site is unresponsive at the moment).
Anyhow, from my standpoint, I do think that the ability to build libraries from the language should count for something (should probably invent a new term like VBScript is Adjective and/or Adverb based - not Noun or Verb based).
|
Post #157,259
5/27/04 3:30:05 PM
|
Re: bottleneck black box
OK, so now you're saying your original design won't work. I did not say that.
Yes, you did. Your initial design was [link|/forums/render/content/show?contentid=156950|put all the "features" in the client table]:

    // select login strategy
    customer = query('select * from customer where id=...')
    strat = customer.loginStrategy
    if strat='A' then
        ....
    elseif strat='B' then
    ...etc...
So I asked, what if you have 600 "features"? At which point you said to use a different table. So the original design won't work, correct? As a matter of fact, 600 was just a number I pulled out of thin air. Checking the code (with grep, natch), we have about 4000 instances of parm-based decisions being made, and 10,000 instances of "if client = foo" decisions being made.

Maybe if OO or some other goofy practice did not bloat up the REST of the system, we would have more margin for TOP practices. Again, 1 million lines of PL/SQL code. 99.3% of the system is OO, and that's the bridge login. And any margin is going to go towards adding more clients to the system and doing useful work, not supporting poorly performing, unnecessary practices.

If you use OO or an OR mapper instead, then *something* has to store the state between submits. We're not talking about OO. We're talking about doing everything in the database. Web request comes into Apache, mod_plsql determines that a particular URL maps to a particular PL/SQL package, and the rest is ALL database code. This is a stateless environment. Since you can't cache (the connections have DBMS_SESSION.RESET_PACKAGE called on them between pages), the parm table becomes a performance bottleneck.

And the system is hardly fragile. We have 400K pieces of inventory. Half a million users. 10 million users if you include representatives (look around: 1 out of 30 people you know uses our system in some way). We process a good 20% of all the transactions in our market. A billion dollars changes hands through our system every day. There's just no room for performance-sucking crap that doesn't add any value. And in fact, the whole procedural hairball doesn't scale as well as it needs to, so we're moving away from that now.

And given your utter lack of experience in this arena, nothing you would be able to tell us after looking at the code is going to help, especially since you're not proposing anything we aren't already doing, albeit on a much larger scale than you've ever contemplated.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,265
5/27/04 3:47:54 PM
|
how they relate
I am confused about how the config params relate to the login strategy, other than containing the strategy name as one of the params.
________________ oop.ismad.com
|
Post #157,271
5/27/04 4:11:02 PM
|
Re: how they relate
Login params are configuration values, just like all the other 4000 configuration values. Each is a value used to determine what to do at a branch point: which decryption method do I use? Which parsing method? What's the timestamp window? What's the home page? This is classic control table technique.
So your suggestion now is to just store bridge login parm values in the client table? Where do you draw the line? Why not the client's account control parms, or their routing parms?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #156,986
5/25/04 9:30:28 PM
|
I don't understand why you mention "eval"
|
Post #157,002
5/26/04 1:27:26 AM
|
re: I don't understand why you mention "eval"
See: [link|http://www.geocities.com/tablizer/prpats.htm|http://www.geocities...blizer/prpats.htm]
________________ oop.ismad.com
|
Post #157,044
5/26/04 10:57:29 AM
|
I get it now.
Emphases mine: Some OO fans say that putting expressions or code into tables is "doing OO without OOP". Rather than get caught up in chicken-or-egg terminology battles, let's just say that I prefer tables because of their 2-D nature as opposed to the 1-D nature of OO code. Placing code in collections pre-dates the birth of OOP (Simula-67) by roughly a decade, at least. OOP gets credit for using classes to do such, not collections. LISP pioneered many aspects of treating code and data in a similar fashion. Even without Eval or Execute, the p/r version is still better than the OO version in my opinion. I will grant that the OO approach offers a bit more potential compile-time checking, but not much else. (Perhaps sacrificing run-time changes/additions in the process.) Further, it seems more logical to use the same collections management approach for both program code AND data. Otherwise, you are duplicating effort (special IDE's), duplicating schema information, and increasing the learning curve. Collection handling should be factored into a single methodology regardless of whether it is code or data. LISP came closer to the right idea by treating code like data instead of data like code, which is what OO does wrong.
Got it. Subjectivity wins again. I think from now on you're going to have to work very hard to convince anyone here that you don't prefer OO, albeit a twisted version.
|
Post #157,053
5/26/04 12:43:59 PM
|
OO is just a (bad) reinvention of 60's databases with
...behavioral dispatching tacked on.
________________ oop.ismad.com
|
Post #157,057
5/26/04 1:10:43 PM
|
No. You are a proponent of OO programming.
I just didn't see that you were so exotic in your methodologies.
Come on Bryce don't make me taunt you, causing Ben to hate me MORE!
You are a strong OO proponent.
-- [link|mailto:greg@gregfolkert.net|greg], [link|http://www.iwethey.org/ed_curry|REMEMBER ED CURRY!] @ iwethey
Give a man a match, he'll be warm for a minute. Set him on fire, he'll be warm for the rest of his life!
|
Post #157,085
5/26/04 4:17:57 PM
|
The question that launched a thousand arguments
I keep saying that OO reinvents databases in app code. Thus, are OO programs really databases or are databases with code really OO? This issue came up before, but nobody could agree on a definition of OO in order to settle it once and for all.
The biggest separation between OO and TOP is that OO wants to use code (text) to make "records" (aka "objects/classes") and use pointer-hopping to navigate relationships (aka "navigational database") rather than relational algebra. Reduce dependence on text-code and pointer-based navigation, and TOP and OO would not be that much different.
________________ oop.ismad.com
|
Post #157,086
5/26/04 4:21:36 PM
|
That explains a lot
Thus, are OO programs really databases or are databases with code really OO? Do you really believe this question has an answer?
|