Post #156,829
5/25/04 9:53:22 AM
|
Sounds like another...
... half-assed concession to me. If by chance changes kept falling along a clean "client grain", then I could make separate client-specific files also. Maybe the build system would supply a default file that has some of the common options (using IF statements). Those clients that don't fit the standard can have their own supplied. Thus, we wouldn't have to change the default very often. If there are common exception patterns, then maybe one can later build those into the default libraries in a next major release. It's right there. You said, "what if", and I showed you the change. Come up with another one, if you want. I don't care. The funny thing is, though, when presented with a real example like this you retreat into generalities. Fix the example given, if you know a better way. But the problem is with the coding approach you use, not the "grouping". Another thing you don't understand: there ISN'T a "grouping approach" with that code. It's divided along every line of change. Getting rid of the case and if statements is what does it. Who knows, maybe you found the 1-in-10 situation where changes actually occur on polymorphic or GOF fault lines and danced around the office bragging about it to your PL/SQL colleague. If that's the vision that gets you through the day, then so be it. In my experience, it's always like this. OO'ers conveniently remember the fits and conveniently forget the non-fits, or throw yet more indirection at the problem with fat layers and tangled combos of GOF patterns. More generalities. Show me the fat layers and tangled combos of patterns. Or is this like the "bloated" O-R mapping code...? What's the sound of one weasel clapping? Time for you to go work on conceding the [link|/forums/render/content/show?contentid=156741|resource files] point now.
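A minimal sketch of that default-plus-override scheme, in Python for illustration (the conf directory and per-client JSON files are hypothetical, not anything from the actual build system):

    import json
    from pathlib import Path

    def load_options(client: str, conf_dir: str = "conf") -> dict:
        """Use the client's own options file if one is supplied, else the default."""
        specific = Path(conf_dir) / f"{client}.json"
        default = Path(conf_dir) / "default.json"
        chosen = specific if specific.exists() else default
        return json.loads(chosen.read_text())

Clients that fit the standard never get a file of their own, so the default rarely changes.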
Regards,
-scott anderson
"Do you hear that, Mr. Anderson...? That... is the sound of inevitability..."
|
Post #156,911
5/25/04 4:46:02 PM
|
Text is linear
It's right there. You said, "what if", and I showed you the change. Come up with another one, if you want. I don't care. The funny thing is, though, when presented with a real example like this you retreat into generalities. I was not there and don't know your software and environment. Your anecdotes are not sufficient and I am not going to take your word for it. Not that you outright lie, but you do see the world through OO-colored classes, polluted by years of OO polymorphic change pattern doctrine. BTW, it sounds like you have too many different "levels" or places to put a change depending on whether it is a client difference, user difference, etc. At least a new hire knows IF statements, but possibly not your funky framework. And, you have not described the flaws in the revised approach I laid out. (I know of a possible one; let's see if you can identify it.) Another thing you don't understand: there ISN'T a "grouping approach" with that code. It's divided along every line of change. That is impossible in text code. It is possible with a database, because the grouping is whatever you make your viewing query, but text is linear. It is like asking for a detailed sales report that is sorted (major) by both product category and region. It cannot be done. You would have to change the laws of physics to pull it off. OO generally dictates that you group methods by (inside a) class, for example.
________________ oop.ismad.com
|
Post #156,913
5/25/04 4:59:33 PM
|
Re: Text is linear
Not that you outright lie, but you do see the world through OO-colored classes, polluted by years of OO polymorphic change pattern doctrine. Ah, I see. So my 15 years of database development don't factor into this at all. Got it. When I'm using OO I can only think OO. I'll keep that in mind. Again: 7000 lines vs. 8000 lines. Code that no longer requires branches when we change things. Looks pretty cut and dried to me. BTW, it sounds like you have too many different "levels" or places to put a change depending on whether it is a client difference, user difference, etc. At least a new hire knows IF statements, but possibly not your funky framework. Time to come up to speed on the new system: about 30 minutes. Time to come up to speed on the old system: two weeks. You were saying? And you are apparently using "too many" as a euphemism for "the right amount", since the maintenance cost has improved drastically. And, you have not described the flaws in the revised approach I laid out. "revised approach"? "laid out"? What approach? I saw a bunch of generalities and hand waving. Write some code like I did. That is impossible in text code. Baloney. Need to make a user change? Visit the BridgeUser file. Need to make a parsing change? Visit the parser file. Why is this so difficult for you to understand? Want to see all the places that "getUserId" is called? Use grep, or pull it up in Eclipse, or Emacs, or one of dozens of class-aware editors. You're inventing problems again.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #156,950
5/25/04 7:23:17 PM
5/25/04 7:25:43 PM
|
Delt-A-Matic
Ah, I see. So my 15 years of database development don't factor into this at all. I think you are just too used to the bulky "big-iron" DBs like Oracle. Also OO doctrine books have clouded your thinking. "revised approach"? "laid out"? What approach? I saw a bunch of generalities and hand waving. Write some code like I did. Okay, let's revisit that login config issue of yours. Generally there will be a fairly fixed set of commonly-used strategies. The default code may look something like:

    // select login strategy
    customer = query("select * from customer where id = ...")
    strat = customer.loginStrategy
    if strat = 'A' then
        ...
    elseif strat = 'B' then
        ...etc...

However, some customers will have one-off differences, and we don't want to alter our default code base to supply yet another elseif or other custom fiddle. We can handle those by making a "delta" directory that replaces only the files which differ for a given customer. Example (DOS):

    xcopy \defaultSource\*.* \customerX\*.* /s /y
    xcopy \customerX_delta\*.* \customerX\*.* /s /y

If enough customers want a given feature, then maybe we can later integrate it into the default for the next major release or something. (Ideally the granularity would be at the subroutine level instead of the file level, but that may have to wait until RDBMSes replace file systems and IDEs are upgraded to work with them.) Baloney. Need to make a user change? Visit the BridgeUser file. Need to make a parsing change? Visit the parser file. That is not what I meant. Maybe later I will create an example. Want to see all the places that "getUserId" is called? Use grep. That is using a "text query" to create a temporary view. By the way, you keep saying that I am "inventing" or exaggerating problems. However, I can't believe that you guys had so much trouble adding a simple if or case statement without crashing the empire. I agree that long lists of case or elseif statements are a yellow alert as far as code design goes (and I usually replace them with tables if possible), but were they really that big of a problem?
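A minimal sketch of the delta-directory overlay in Python, for those without xcopy handy (the directory names are carried over from the example above; clearing the destination first is just one way to get a clean base):

    import shutil
    from pathlib import Path

    def build_customer_tree(default_src: str, delta_dir: str, dest: str) -> None:
        """Copy the default code base, then overlay the customer's delta files."""
        dest_path = Path(dest)
        if dest_path.exists():
            shutil.rmtree(dest_path)          # start from a clean slate
        shutil.copytree(default_src, dest_path)
        # Any file present in the delta directory replaces its default twin.
        for delta_file in Path(delta_dir).rglob("*"):
            if delta_file.is_file():
                target = dest_path / delta_file.relative_to(delta_dir)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(delta_file, target)

    build_customer_tree("defaultSource", "customerX_delta", "customerX")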
________________ oop.ismad.com
Edited by tablizer
May 25, 2004, 07:25:43 PM EDT
|
Post #156,953
5/25/04 7:36:15 PM
|
Re: Delt-A-Matic
I think you are just too used to the bulky "big-iron" DBs like Oracle. And who uses XBase for web applications? Also OO doctrine books have clouded your thinking. I read very few programming books.

    if strat = 'A' then
        ...
    elseif strat = 'B' then
        ...etc...

Programmer #1 changes strategy A. Programmer #2 changes strategy B. Programmer #2 needs to promote to testing, but #1 doesn't. Branch. Want to see all the places that "getUserId" is called? Use grep. That is using a "text query" to create a temporary view.
Nice selective quoting. If you don't like "text queries" then use a code editor. By the way, you keep saying that I am "inventing" or exaggerating problems. However, I can't believe that you guys had so much trouble adding a simple if or case statement without crashing the empire. That's because your experience is lacking. I agree that long lists of case or elseif statements are a yellow alert as far as code design (and I usually replace them with tables if possible), but were they really that big of a problem? Actual branch in some IF/THEN/ELSE PL/SQL code: 1.405.1.36.1.0.1.3 sanderson 15 Sep 2003 14:39:58 (RELNO_1_405_1_36_1_0_1_3) This is a daily occurrence in a large code base with multiple programmers and that kind of programming. Re: replacing them with tables: now you have a performance hit, external influences on code that can be difficult to track down ("what does the BRIDGE-ORIG-URL do again? Anyone remember?"), and a data maintenance problem.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #156,959
5/25/04 8:15:51 PM
|
problem is treed-files, not procedural
And who uses XBase for web applications? My point is that one learns that DBs don't have to be bulky and bureaucratic (for at least some things). BTW, a company called PlugSys used to make Active Xbase Pages, but it folded. They started too late in the web game. An archived page: [link|http://web.archive.org/web/20030624210131/www.plugsys.com/products/|http://web.archive.o...sys.com/products/] Programmer #1 changes strategy A. Programmer #2 changes strategy B. Programmer #2 needs to promote to testing, but #1 doesn't. Branch. Why would one be subject to testing but not the other? However, if this is a constant problem, then make each strategy a separate routine, put them in separate files, and manage them that way. The skeleton IF statements that call the routines shouldn't change that often. An RDBMS-based file system would make it easier to manage code at the subroutine level, BTW. Many of your problems are caused not by a lack of OO, but by archaic techniques such as hierarchical file systems. OO allows one to build a network-like, internal database-like structure to move beyond the file system tree, but network DBs have their own problems. The real solution is to move up the ladder yet more, toward relational. Re: replacing them with tables: now you have a performance hit, external influences on code that can be difficult to track down ("what does the BRIDGE-ORIG-URL do again? Anyone remember?"), and a data maintenance problem. Any "key" such as "BRIDGE-ORIG-URL" should have a description column associated with its entity table. All the info you need would be at your SQL fingertips if you simply build good, normalized schemas. Grep is a sequential toy in comparison.
________________ oop.ismad.com
|
Post #156,961
5/25/04 8:19:41 PM
5/25/04 8:26:03 PM
|
Re: problem is treed-files, not procedural
Then you concede that "treed files" are bad. Stop suggesting them as a solution, then. My point is that one learns that DBs don't have to be bulky and bureaucratic (for at least some things). I use Oracle. I also use PostgreSQL. I don't care what XBase does. Why would one be subject to testing but not the other? One has to go to testing now, and the other isn't ready. However, if this is a constant problem, then make each strategy a separate routine, put them in separate files, and manage them that way. The skeleton IF statements that call the routines shouldn't change that often. Hand waving. Code. An RDBMS-based file system would make it easier to manage code at the subroutine level, BTW. Hand waving. Any "key" such as "BRIDGE-ORIG-URL" should have a description column associated with its entity table. All the info you need would be at your SQL fingertips if you simply build good, normalized schemas. Grep is a sequential toy in comparison. Selective response. Performance issues and data maintenance issues. You're starting to churn here, Bryce.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
Edited by admin
May 25, 2004, 08:26:03 PM EDT
|
Post #156,979
5/25/04 9:02:21 PM
|
Re: problem is treed-files, not procedural
Then you concede that "treed files" are bad. Stop suggesting them as a solution, then. Because existing compilers/interpreters are based on files and not tables, I don't really have a choice. The Delta Technique would be better in an RDB, but the interpreter/compiler cannot use it directly from an RDB. So I go with files instead. In a generation or two that problem might go away. One has to go to testing now, and the other isn't ready. That kind of problem can happen with any module/class/file. Unless your development management granularity is a single character (which is not practical), there is no magic escape from that. Sometimes two or more developers need to modify the same unit for different purposes/needs. For example, two people may be working on two different methods in the same class. If it keeps happening to one particular unit, then the natural solution is to divide such units into smaller units, which was one of my suggestions. Hand waving. Code.

    // select login strategy
    customer = queryRec("select * from customers where id = ...")
    strat = customer.loginStrategy
    if strat = 'A' then
        loginA(...)
    elseif strat = 'B' then
        loginB(...)
    ...etc...
    #include loginA.prg   // code for loginA routine
    #include loginB.prg   // code for loginB routine

Selective response. Performance issues and data maintenance issues. I will agree that the DB-centric solution would probably have performance problems with today's technology. (Remember, OO used to be rejected by some because it was slower.) However, how are "data maintenance" issues inherently more evil or worse than file maintenance issues?
________________ oop.ismad.com
|
Post #156,982
5/25/04 9:11:02 PM
|
Re: problem is treed-files, not procedural
Because existing compilers/interpreters are based on files and not tables, I don't really have a choice. Hand waving. Come back when it actually happens. That kind of problem can happen with any module/class/file. Correct. It happens much more frequently with procedural code of the sort we have been discussing than with OO code. (Objectively better, anyone?) I'll note that we've not had a single collision since the rewrite, too.

    // select login strategy
    customer = queryRec("select * from customers where id = ...")
    strat = customer.loginStrategy
    if strat = 'A' then
        loginA(...)
    elseif strat = 'B' then
        loginB(...)
    ...etc...
    #include loginA.prg   // code for loginA routine
    #include loginB.prg   // code for loginB routine

Performance bottleneck. Database access is an order of magnitude slower (if not more) for logic like this. And my my my, doesn't that look a lot like manual polymorphism. I thought polymorphism was bad, Bryce? I will agree that the DB-centric solution would probably have performance problems with today's technology. Concession noted, since we live today, and not 20 years from now. However, how are "data maintenance" issues inherently more evil or worse than file maintenance issues? How do you promote tabular configuration data between environments?
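For contrast, a rough sketch of the "automatic" version that polymorphism buys you, in Python (LoginA/LoginB and the credential checks are made up; the point is only that the stored strategy code selects an object, and no if/elseif chain gets edited per customer):

    from abc import ABC, abstractmethod

    class LoginStrategy(ABC):
        @abstractmethod
        def login(self, credentials: dict) -> bool: ...

    class LoginA(LoginStrategy):
        def login(self, credentials: dict) -> bool:
            return "token" in credentials        # stand-in for strategy-A checks

    class LoginB(LoginStrategy):
        def login(self, credentials: dict) -> bool:
            return "password" in credentials     # stand-in for strategy-B checks

    # Registry mapping the strategy code stored on the customer record to a
    # strategy object. Adding a strategy means adding a class and one entry
    # here, not editing a dispatch chain scattered through the code base.
    STRATEGIES: dict[str, LoginStrategy] = {"A": LoginA(), "B": LoginB()}

    def do_login(strategy_code: str, credentials: dict) -> bool:
        return STRATEGIES[strategy_code].login(credentials)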
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,003
5/26/04 1:39:55 AM
|
Re: problem is treed-files, not procedural
Performance bottleneck. Database access is an order of magnitude slower (if not more) for logic like this. See the "Jose" message. How do you promote tabular configuration data between environments? Generally there is an OS script (or several) that is run to prepare stuff. That script can run programs or stored procedures to prepare the data also. (I don't hear the word "promote" used that often for execution preparation. Is that Java-shop lingo?)
________________ oop.ismad.com
|
Post #157,015
5/26/04 8:17:33 AM
|
Re: problem is treed-files, not procedural
Generally there is an OS script (or several) that is run to prepare stuff. That script can run programs or stored procedures to prepare the data also. Ah... finally... text files. Thank you. So all of your garbage about the database being better for managing configuration information has been crap, because in the end you have to load it from text files in the first place. (I don't hear the word "promote" used that often for execution preparation. Is that Java-shop lingo?) Promote means moving the code from one environment (dev) to the next (test, QA, prod). And get it through your head: this is an Oracle shop, not a Java shop.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,054
5/26/04 12:49:28 PM
|
misunderstanding
Ah... finally... text files. Thank you. So all of your garbage about the database being better for managing configuration information has been crap, because in the end you have to load it from text files in the first place. I did not say anything about putting data in text files there. The script I talked about would create and copy relevant database info from, say, the development DB environment to the testing DB environment, etc. Promote means moving the code from one environment (dev) to the next (test, QA, prod). I usually hear that called "migration". And get it through your head: this is an Oracle shop, not a Java shop. I bet you are trying to change that.
________________ oop.ismad.com
|
Post #157,064
5/26/04 2:38:55 PM
|
Re: misunderstanding
I did not say anything about putting data in text files there. The script I talked about would create and copy relevant database info from, say, the development DB environment to the testing DB environment, etc. And when your development database is rebuilt every night? How do you prime it in the first place? How do you track revisions? How do you migrate one change and not another? I bet you are trying to change that. No, there are an awful lot of things that belong in the database. Code isn't one of them.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,066
5/26/04 2:43:12 PM
|
Well... I pack up my development database and...
...take it home with me every night. :-)
|
Post #157,071
5/26/04 3:30:05 PM
|
How do you prime it in the first place?
Usually, using a siphon, FIFO or primer pump. DUH.
-- [link|mailto:greg@gregfolkert.net|greg], [link|http://www.iwethey.org/ed_curry|REMEMBER ED CURRY!] @ iwethey
Give a man a match, he'll be warm for a minute. Set him on fire, he'll be warm for the rest of his life!
|
Post #157,078
5/26/04 3:41:04 PM
|
I prefer Kilz myself.
|
Post #157,082
5/26/04 3:51:37 PM
|
That works well...
For the crayon markup he does... nice suggestion.
-- [link|mailto:greg@gregfolkert.net|greg], [link|http://www.iwethey.org/ed_curry|REMEMBER ED CURRY!] @ iwethey
Give a man a match, he'll be warm for a minute. Set him on fire, he'll be warm for the rest of his life!
|
Post #157,083
5/26/04 3:54:38 PM
|
The file system *is* a database
And when your development database is rebuilt every night? How do you prime it in the first place? How do you track revisions? How do you migrate one change and not another? There are many different approaches. I would need a specific use-case to suggest something. I am sure many large firms have faced the issues of schema and config data management. Most places I am familiar with had too many manual processes in place and seemed strangely uninterested in automating such stuff, perhaps out of fear of job loss. But there may be tools out there to help with such. I agree that management-by-file tools are further along and more available. Schemas and such are tables themselves, so it is possible to perform operations on those tables just like any other; however, companies like Oracle seem to make it more difficult than it has to be, it seems to me. there are an awful lot of things that belong in the database. Code isn't one of them. Like I said, the file system *is* a database, just not a relational one. They are currently treated differently by different tools, but I expect/hope that will eventually change. I see no logical reason to distinguish between a file system and a database system. The current conventions are archaic.
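A tiny illustration of the "schemas are tables themselves" point, using SQLite from Python (SQLite's sqlite_master catalog plays the role Oracle's data dictionary views play; the clientConfig table is just an example):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("create table clientConfig (clientRef, paramID, paramValue)")

    # The schema itself is queryable as ordinary rows.
    for name, sql in conn.execute(
            "select name, sql from sqlite_master where type = 'table'"):
        print(name, "->", sql)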
________________ oop.ismad.com
|
Post #157,090
5/26/04 4:48:29 PM
|
Re: The file system *is* a database
Here's the use case: You have a development database. It's rebuilt every night. There's a parm table that is used to make decisions during execution. How do you:

1) Get the data into the dev database in the first place after each rebuild?
2) Track revisions between different changes? Someone changes one parm, someone else changes another. Now you need to migrate the one change but not the other.
3) Create a new dev instance on a developer's PC so they can work locally?

Basically you're telling me that there are no tools that you're aware of to do this, and that it's pretty much a manual process unless you want to spend a lot of time automating it. Most places I am familiar with had too many manual processes in place and seemed strangely uninterested in automating such stuff, perhaps out of fear of job loss. Oddly enough, we've completely automated this process here... using the proper tools for the job: files. I agree that management-by-file tools are further along and more available. So parm tables:

1) Execute more slowly at runtime.
2) Require more maintenance.
3) Require manual tracking of revisions.
4) Require manual migration procedures.
5) If automated, require completely different migration procedures than everything else in the build.

Or I can use OO methods and get both automatic migration tools and maintenance benefits as I have described elsewhere. Concession noted. The current conventions are archaic. The current conventions work. The new conventions you are describing have no tangible benefit other than "Bryce likes them more", and plenty of deficiencies.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,113
5/26/04 7:27:42 PM
|
over-the-phone brain surgery
I would have to study the nature of the business to make any recommendations. I don't work there. Maybe there is a grand table way to clean it all up, but I cannot offer such knowing only bits and pieces. It is like doing brain surgery over the phone. If you want more info on database versioning tools, go look it up yourself. Or stay with your comfy, archaic files. Basically you're telling me that there are no tools that you're aware of to do this, and that it's pretty much a manual process unless you want to spend a lot of time automating it. I don't know that it would take a lot of time to automate it. Tables are usually easy to work with. Maybe you just need more nimble languages or something. Oddly enough, we've completely automated this process here... using the proper tools for the job: files. By (your) definition. The current conventions work. So do assembler and gotos. The new conventions you are describing have no tangible benefit other than "Bryce likes them more", and plenty of deficiencies. They are only "deficiencies" because you like to work with files instead of tables. Don't tell me about "liking them more". Relational tables are a more powerful concept than files. Even with file-bigoted tools they still usually lead. The limits of files and directories cause headaches in many of the projects I deal with. Maybe files work faster at your place because they are like assembler: primitive and annoying, but fast.
________________ oop.ismad.com
|
Post #157,115
5/26/04 7:39:07 PM
|
Re: over-the-phone brain surgery
I would have to study the nature of the business to make any recommendations. Hand waving. Make something up. It's very simple. How would you do it? You said that you've been at places where it was a manual copy. That's it? Nothing better? If not, then your way loses. I don't know that it would take a lot of time to automate it. So you've never worked some place that has requirements like this. You just slam the control table data right into production. If this is true, then no wonder you think this is an acceptable way to do things. So do assembler and gotos. Nope. Assembler and GOTOs take considerably more development effort. Try again. They are only "deficiencies" because you like to work with files instead of tables. So the performance problems are there because I like to work with files? And manual database migration strategies suck because I like to work with files? Are you serious? Relational tables are a more powerful concept than files. Even with file-bigoted tools they still usually lead. "Usually" as in "a project Bryce worked on once". They're more powerful for DATA, not CODE. The limits of files and directories cause headaches in many of the projects I deal with. Oh, really? Care to back that up? What headaches? Maybe files work faster at your place because they are like assembler: primitive and annoying, but fast. Hardly primitive. I can sit down with a blank system, type one command (bexvm build -d SID) and create an entire, running copy of our system: loaded config data, compiled code, everything. Unattended. This is primitive? You must be using a different dictionary than I do.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,161
5/27/04 1:29:08 AM
|
a single command does not like databases?
Make something up. It's very simple. How would you do it? For migration? Just copy it over. As far as revision tracking, let's assume your config table looks something like this:

    Table: clientConfig
    -------------------
    clientRef
    paramID
    paramValue

You can set a trigger (or triggers) such that every time a record is added, changed, or deleted, the change is copied to a log that looks something like:

    Table: clientConfigLog
    ----------------------
    clientRef
    paramID
    paramValue
    changeType    // add, change, or delete
    changedWhen
    who           // login ID of changer (if available)

With this info, one can recreate any time period and study all changes. (I hear some RDBMSes have such "delta-log" features already built in.) So you've never worked some place that has requirements like this. There are a lot of domains and situations that I have never encountered. I just know that tables are more useful and flexible than files most of the time. If you found an exception, so be it. Nothing is 100% always the best solution. Nope. Assembler and GOTOs take considerably more development effort. So does dealing with Flintstonian file systems. They're more powerful for DATA, not CODE. And config info is data. Oh, really? Care to back that up? What headaches? I can't easily query the file system to find or view files how I want. Yeah, I know, if I learned grep and other file utils well I could probably eventually do the same, but why learn two query languages? Plus, file systems don't have indexes on attributes outside of the tree. Thus, they have to do a sequential search for many operations. I have heard multiple times from developers how they wish they could query the file system using SQL and/or add extra attributes to files or directories to mark stuff for various purposes. I have seen companies jump through hoops because they couldn't add custom file attributes. They have to keep a separate list (or lists) of file info. Hardly primitive. I can sit down with a blank system, type one command (bexvm build -d SID) and create an entire, running copy of our system: loaded config data, compiled code, everything. Unattended. This is primitive? How does tabling info preclude the use of a single command to initiate everything?
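A minimal working sketch of that delta-log arrangement, using SQLite triggers from Python (the table and column names follow the example above; a real version would also capture the old value, and fill "who" from session info, which SQLite has no built-in way to know):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        create table clientConfig (clientRef, paramID, paramValue);
        create table clientConfigLog (
            clientRef, paramID, paramValue,
            changeType,                               -- add, change, or delete
            changedWhen default current_timestamp
        );
        create trigger cfg_add after insert on clientConfig begin
            insert into clientConfigLog (clientRef, paramID, paramValue, changeType)
            values (new.clientRef, new.paramID, new.paramValue, 'add');
        end;
        create trigger cfg_change after update on clientConfig begin
            insert into clientConfigLog (clientRef, paramID, paramValue, changeType)
            values (new.clientRef, new.paramID, new.paramValue, 'change');
        end;
        create trigger cfg_delete after delete on clientConfig begin
            insert into clientConfigLog (clientRef, paramID, paramValue, changeType)
            values (old.clientRef, old.paramID, old.paramValue, 'delete');
        end;
    """)
    conn.execute("insert into clientConfig values ('acme', 'loginStrategy', 'A')")
    conn.execute("update clientConfig set paramValue = 'B' where clientRef = 'acme'")
    print(conn.execute("select * from clientConfigLog").fetchall())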
________________ oop.ismad.com
|
Post #157,180
5/27/04 9:16:09 AM
|
Re: a single command does not like databases?
There are a lot of domains and situations that I have never encountered. I just know that tables are more useful and flexible than files most of the time. Do you see how these two statements are incompatible? You don't know that tables are more useful for situations you've never encountered. So does dealing with Flintstonian file systems. Which you haven't shown. So far you've shown that using tables for config information requires more overhead. And config info is data. Thou sayest. I can't easily query the file system to find or view files how I want. Not this again. Concession granted. You have a mental block against grep. Sad state to be in, but I'll give you that. Now the amazing thing is, every company I've worked with hasn't had these "headaches". Given the vast range of tools available today for working with source code, it just isn't a concern. How does tabling info preclude the use of a single command to initiate everything? How is using a single command "primitive"?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,219
5/27/04 12:46:38 PM
|
Re: single command
Do you see how these two statements are incompatible? You don't know that tables are more useful for situations you've never encountered. When you encounter a new application to build or enhance, do you automatically use OO? If not, what *do* you automatically use? You have to have *some* default, otherwise you do nothing. Not this again. Concession granted. You have a mental block against grep. Sad state to be in, but I'll give you that. Now the amazing thing is, every company I've worked with hasn't had these "headaches". Given the vast range of tools available today for working with source code, it just isn't a concern. Sure, if you get used to ANY primitive tool, you will eventually learn to love the bomb. How is using a single command "primitive"? I did NOT say that. Commands are orthogonal to OO and files, BTW.
________________ oop.ismad.com
|
Post #157,240
5/27/04 1:55:32 PM
|
Naw... you are thinking...
Commands are orthogonal to OO and files, BTW Naw... HEXAGONAL, that way they are cursed[1] before you use them. Therefore no chance they can be cursed additionally. [1] curses using.
-- [link|mailto:greg@gregfolkert.net|greg], [link|http://www.iwethey.org/ed_curry|REMEMBER ED CURRY!] @ iwethey
Give a man a match, he'll be warm for a minute. Set him on fire, he'll be warm for the rest of his life!
|
Post #157,247
5/27/04 2:46:30 PM
|
Re: single command
When you encounter a new application to build or enhance, do you automatically use OO? If not, what *do* you automatically use? You have to have *some* default, otherwise you do nothing. No, I do not have a default. I look at the requirements and choose amongst several tools, which for me include RDBMSes, OO programming, and so on. Bryce, I work at a place that is your wet dream: 1 million lines of procedural RDBMS code, using control tables, parms, and IF/THEN blocks. I have direct experience with this. You have none. Yet you feel qualified to wave your hands and scream "tables are better!" when I have direct empirical evidence that they are not. There are 30 PL/SQL developers here who prefer to manage their configuration information in FILES. Every single one of them has TOAD on their computers and uses it daily. They don't want the overhead, and they understand the issues. You do not, and you've got your fingers in your ears while I'm trying to demonstrate this to you. Perhaps on a small scale you may find tables more convenient. Fine. I do not, these people do not, and your techniques simply do not scale. And quite frankly, if I have to draw my programmers from a pool that is mostly people who work on smaller apps, I'm going to look for people who know how to deal with files, because that's what works best at this scale. And all the hand-waving in the world won't change that. Sure, if you get used to ANY primitive tool, you will eventually learn to love the bomb. Perhaps if I were working with "primitive" tools, I would. But I'm not. Given that there exist no tools for doing revisions of database tables, and there are for files, which is the primitive tool? I did NOT say that. Commands are orthogonal to OO and files, BTW. What you said was "How does tabling info preclude the use of a single command to initiate everything?" Which is not what you were asked. You were asked: Hardly primitive. I can sit down with a blank system, type one command (bexvm build -d SID) and create an entire, running copy of our system: loaded config data, compiled code, everything. Unattended. This is primitive? To which your response made no sense at all, so I reiterated. Answer the question: is that primitive, yes or no? No weaseling, no hand-waving. Answers other than yes or no will be ignored and the question posed again. So does dealing with Flintstonian file systems. Which you haven't shown. So far you've shown that using tables for config information requires more overhead.
No response, so concession noted.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,182
5/27/04 9:16:20 AM
|
Database migrations (new thread)
Created as new thread #157181 titled [link|/forums/render/content/show?contentid=157181|Database migrations]
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,280
5/27/04 4:52:26 PM
|
Nit: some code does belong in a database. (sp's)
bcnu, Mikem
If you can read this, you are not the President.
|
Post #156,964
5/25/04 8:40:29 PM
|
Exactly.
Programmer #1 changes strategy A. Programmer #2 changes strategy B. Programmer #2 needs to promote to testing, but #1 doesn't. Branch. ...However, if this is a constant problem, then make each strategy a separate routine, put them in separate files, and manage them that way. The skeleton IF statements that call the routines shouldn't change that often.
That sounds like a good approach. And now that you have them in separate files, you can associate the file with the appropriate customer(s); maybe even make that file an attribute or property of each customer. Then you can collapse your (possibly huge) IF statement into a one-liner! Congratulations. We've just reinvented OO.
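A sketch of that one-liner in Python, with the strategy routine attached directly to a hypothetical Customer object (the two placeholder strategies stand in for whatever the per-customer files contain):

    from dataclasses import dataclass
    from typing import Callable

    def login_a(credentials: dict) -> bool:
        return "token" in credentials        # placeholder logic

    def login_b(credentials: dict) -> bool:
        return "password" in credentials     # placeholder logic

    @dataclass
    class Customer:
        name: str
        login_strategy: Callable[[dict], bool] = login_a

    # The (possibly huge) IF statement collapses to one line:
    def do_login(customer: Customer, credentials: dict) -> bool:
        return customer.login_strategy(credentials)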
|
Post #156,965
5/25/04 8:42:41 PM
|
Amazing, isn't it?
How Bryce's solutions grow OO attributes as they progress in response to criticism, eh?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #156,981
5/25/04 9:09:06 PM
5/25/04 9:11:42 PM
|
OO takes credit for sunrises even
That sounds like a good approach. And now that you have them in separate files, you can associate the file with the appropriate customer(s); maybe even make that file an attribute or property of each customer. Then you can collapse your (possibly huge) IF statement into a one-liner!
Congratulations. We've just reinvented OO. Nope, Lisp, which predates OO by about 7 years. (And some say that the Lambda calculus did it first.) I would generally suggest such a solution, but Scott suggested that we avoid things like "eval" because logging in is too sensitive. In other parts of the system, after login, I may be more likely to suggest it. However, in practice the one-to-one association between subroutines and entity instances tends to dissipate over time for most things, in my observation. The coupling between nouns and actions in the real world is rather loose. If your observations are different, so be it.
________________ oop.ismad.com
Edited by tablizer
May 25, 2004, 09:11:42 PM EDT
|
Post #156,984
5/25/04 9:12:57 PM
|
Scheme and the Lambda Calculus
In many respects, Scheme (a member of the Lisp family) is just the Lambda Calculus with about a dozen special forms stacked on top of it. Of course, the special forms are what make it an "interesting" language.
|
Post #156,985
5/25/04 9:13:22 PM
|
Re: OO takes credit for sunrises even
ALL parts of the system are performance sensitive when your database server is a $500K box and it's near capacity. Sloppy, inefficient code like what you are proposing is deadly.
Since the OO code provides the same benefits automatically without the performance problems, which is objectively better?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,001
5/26/04 1:24:08 AM
5/26/04 1:30:09 AM
|
No way Jose -- Gotta go to DB anyhow
ALL parts of the system are performance sensitive when your database server is a $500K box and it's near capacity. Sloppy, inefficient code like what you are proposing is deadly. A user logs in only once or twice a day on average. I am assuming that we don't have to keep authenticating them for every task or event. Assuming they don't time out, one login a day is good enough. Besides, we have to retrieve the customer attributes record anyway, even under OO. In most uses of table-driven strategy patterns, we have to retrieve the containing record anyhow. It is no extra burden on the DB. I'll tell you what. We will use OO for the login, but for any other need for the strategy pattern beyond that we will retrieve the function name from the record and Eval it. In practice I don't see strategy needed that often. I mostly see it in table-driven menu systems, which I have seen at least four other independent developers put to use before they ever met me: the McCosker AS/400 accounting system, NovaQuest's FoxPro system, a stint at that e-learning company, and Ken's trending system. I am not the only Toppie around. (Perhaps table-oriented companies tend to hire me, though.) I will give you OO for the login if the rest is free to do p/r. Deal? OO has its niche areas. They just ain't large. I can't believe I allowed you to drag me this far without telling you where to put your Eval ban. You must have worked pretty hard to dig up a reason to avoid Eval. I gotta give you a B+ for cleverness, but you still flunk the general OO evidence test. At best you move the possible uses of OO from about 3% to 5% of total project. Maybe for other uses of TOP, DB performance is an issue, but not for the strategy pattern. If tables are "bad" because they are slow, so be it. Like I keep saying, 15 years ago slowness was often given as a reason not to use OO. NEXT!
________________ oop.ismad.com
Edited by tablizer
May 26, 2004, 01:30:09 AM EDT
|
Post #157,014
5/26/04 8:11:48 AM
|
Er...
Apparently you missed the word "all". There's no room in our system for any slow code. I'm not talking about just logins. Eval ain't gonna happen, pal. At best you move the possible uses of OO from about 3% to 5% of total project. Uh, no. Like I said before, your eval "trick" is just a manual substitute for OO programming. More inefficient and higher maintenance costs. If tables are "bad" because they are slow, so be it. Hey, feel free. You wanted objective proof, there it is.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,051
5/26/04 12:40:01 PM
|
"Use OO because OO is faster" is the best you can do?
Uh, no. Like I said before, your eval "trick" is just a manual substitute for OO programming. More inefficient and higher maintenance costs. It does *not* have higher maintenance costs. Your reasoning seems to be based on the decision about whether config info is put in files or tables. If we already have some or most config info in tables, then the additional effort of putting yet more in there is no more than putting it in a file. You are simply pro-file and anti-table. And what is with this "manual" talk? The function/script name is just another attribute like any other attribute one puts in tables. The OO approach is often to create a class that mirrors the DB entity, putting the behavioral stuff in the class while the attributes still come from the DB. That schema mirroring is what's too "manual". I will concede that TOP techniques are often not as fast as in-language polymorphism. If one uses OOP for speed, that is little different than using assembler for speed because high-level languages would be too slow. By your logic we should all be using assembler. And it is not necessarily "I/O". Due to caching, many DB queries never even have to hit disk. The overhead is because the application EXE is one "system" and the database another. Something stored/kept in the same language as the code that uses it is usually going to be faster than something stored/kept in a different language, for example.
________________ oop.ismad.com
|
Post #157,058
5/26/04 1:12:56 PM
|
Serious question
You are simply pro-file and anti-table. You keep saying you don't like OO, but all your comparisons seem to be comparing tables to files. Is your main beef with files vs tables? Because it looks to me like you and Scott are having two different discussions.
===
Implicitly condoning stupidity since 2001.
|
Post #157,084
5/26/04 4:08:44 PM
|
Interrelated
You keep saying you don't like OO, but all your comparisons seem to be comparing tables to files. Is your main beef with files vs tables? Because it looks to me like you and Scott are having two different discussions. They are interrelated, because people tend to use OO to compensate for the limits of hierarchical file systems, but I prefer databases for such. And sometimes files are needed to communicate info between the program and the database (or to speed things up), because compilers and interpreters are better integrated with file systems than with database systems. For example, "include" commands in programs are adapted to grab code from files, but not directly from databases. There is pro-file bigotry out there. Scott's argument seems to be that OO and file-centricity currently work well together, and that is why one should go with them instead of table-centric approaches. It is kind of a QWERTY argument: standards protect themselves because they create mini-industries and habits around such standards, even if they have problems. My argument is that even though conventions limit their power, table-centric approaches are still superior, or at least not clearly inferior.
________________ oop.ismad.com
|
Post #157,093
5/26/04 4:54:00 PM
|
Re: Interrelated
table-centric approaches are still superior, or at least not clearly inferior. You've demonstrated no superiority whatsoever. The only thing you've demonstrated is possible code maintenance parity (by using control tables and poorly imitating polymorphism), which has quite a few deficiencies, including poorer performance and migration maintenance.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,095
5/26/04 5:00:33 PM
|
Re: "Use OO because OO is faster" is the best you can do?
If we already have some or most config info in tables, then the additional effort of putting yet more in there is no more than putting it in a file. You are simply pro-file and anti-table. Incorrect. The table config information is more difficult to migrate between environments, and doesn't support revision control. And what is with this "manual" talk? The function/script name is just another attribute like any other attribute one puts in tables. It's what you do with it that's manual. You have to create your own jump table using eval, whereas OO techniques get that built in for free. I will concede that TOP techniques are often not as fast as in-language polymorphism. If one uses OOP for speed, that is little different than using assembler for speed because high-level languages would be too slow. By your logic we should all be using assembler. If, as you say, table techniques and OO techniques are equally fast for development, then we should prefer the technique that performs better at runtime: OO. Assembler requires vastly greater development time, and as such is not a contender. You're being daft again. And it is not necessarily "I/O". Due to caching, many DB queries never even have to hit disk. Ah, I see. You believe this to be true because you've never worked on a large system. Juggling IO requirements is a constant battle. Ask any DBA for a large system. The whole world isn't XBase, happiness, and light. The overhead is because the application EXE is one "system" and the database another. Something stored/kept in the same language as the code that uses it is usually going to be faster than something stored/kept in a different language, for example. Not when the "EXE" is a stored procedure running in the same process, as in the parm table example I posted. The overhead WAS IO. This was PROVEN by analysis.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,236
5/27/04 1:42:29 PM
|
Table != Disk
Incorrect. The table config information is more difficult to migrate between environments, and doesn't support revision control. Taken up in another message. I showed how to add revision tracking. It's what you do with it that's manual. You have to create your own jump table using eval, whereas OO techniques get that built in for free. I don't have to create no "jump table". I use the existing table from the DB. In the example of your 600 parameters, assuming it is in a dictionary array, we just do this:

    eval(clientDict['loginStrategy'])

Ah, I see. You believe this to be true because you've never worked on a large system. Juggling IO requirements is a constant battle. Ask any DBA for a large system. Even some OO'ers have complained that OR mappers slow things down. Maybe those OO'ers are just not as smart as you, and so have speed problems. Not when the "EXE" is a stored procedure running in the same process, as in the parm table example I posted. The overhead WAS IO. This was PROVEN by analysis. I heard it was possible to cache *entire* Oracle tables in RAM so that there is little or no disk I/O. Maybe there is an Oracle bug, or your DBA is dumb. I will agree that sometimes caching and other techniques don't work as we expect, and we have to resort to hacky shit like converting tables into giant case lists and the like. But just because an approach creates a problem for situation A does not necessarily mean we banish it from the face of the earth. If I find a specific performance bug in Java, does that mean all of OO is rotten?
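For what it's worth, the same lookup can be done in Python without eval by mapping the stored name through a whitelist dict of routines (clientDict here pretends to be a row fetched from the config table):

    def login_a(): print("strategy A")       # placeholder routines
    def login_b(): print("strategy B")

    # Whitelist of routines that a config value is allowed to name.
    ROUTINES = {"login_a": login_a, "login_b": login_b}

    clientDict = {"loginStrategy": "login_b"}    # as if fetched from clientConfig

    ROUTINES[clientDict["loginStrategy"]]()      # dispatch, no eval needed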
________________ oop.ismad.com
|
Post #157,238
5/27/04 1:51:37 PM
|
Re: OR mappers slowing things down
Even some OO'ers have complained that OR mappers slow things down. Maybe those OO'ers are just not as smart as you, and so have speed problems. You've repeated this line of argument several times now, so I suppose it's worth asking the question: "slow things down" relative to what? First point: a wrapper, whether it's OR or PR (procedural-relational), adds an extra layer of abstraction, with some costs usually involved (though there can be gains with things like caching as well). As for these OOP people you keep referring to, what they are saying is that they would rather have an OO database (a persistence mechanism of some sort). They are not comparing OR-mapping to your kind of language and saying it is slower than that ideal. They are simply saying that an OO database is to be preferred to OR-mapping. And, yes, if you have that luxury, it will be much faster.
|
Post #157,246
5/27/04 2:38:09 PM
|
Re: OR mappers slowing things down
First point: a wrapper, whether it's OR or PR (procedural-relational), adds an extra layer of abstraction, with some costs usually involved That is what I keep saying about relational and tables: higher abstraction may cost some in performance. As for these OOP people you keep referring to, what they are saying is that they would rather have an OO database Some do, but as I understood it, many recommend more direct access to the RDBMS, or creating a lightweight custom OR-mapper that fits that particular app. By the way, what do you think is the reason for poor OODBMS sales?
________________ oop.ismad.com
|
Post #157,248
5/27/04 2:51:27 PM
|
Re: OR mappers slowing things down
That is what I keep saying about relational and tables: higher abstraction may cost some in performance. Notes the fact that you elided the last part of the comment (some gains w.r.t. caching, and there are some other optimizations possible as well, such that you may actually get better performance). Notes also that procedural abstractions have the same concerns as OO ones, in terms of creating a procedural abstraction layer. Procedural languages are no more related to the relational calculus than OO languages are. Some do, but as I understood it, many recommend more direct access to the RDBMS, or creating a lightweight custom OR-mapper that fits that particular app. Notes that OO languages are just as capable of using raw SQL commands as procedural ones. By the way, what do you think is the reason for poor OODBMS sales? Probably because of all the hype generated by the table programmers.
|
Post #157,261
5/27/04 3:32:44 PM
|
Re: OR mappers slowing things down
and there are some other optimizations possible as well, such that you may actually get better performance May be true of TOP techniques also. Why Scott's "600" table couldn't cache in RAM more effectively, who knows. Maybe they just couldn't find the right Oracle tweak. Procedural languages are no more related to the relational calculus than OO languages are. It is just that OO and databases tend to fight over territory. In p/r, the "noun attribute model" is mostly in the DB, not in code structures; but in OO you have classes that tend to mirror the noun attribute model, fighting with the DB over that duty. Notes that OO languages are just as capable of using raw SQL commands as procedural ones. True, but it starts to look rather procedural in design if you do that. Probably because of all the hype generated by the table programmers. I wish it were true. Fight fire with fire :-)
________________ oop.ismad.com
|
Post #157,266
5/27/04 3:52:22 PM
|
Procedural abstraction
In p/r, the "noun attribute model" is mostly in the DB, not code structures; but in OO you have classes that tend to mirror the noun attribute model, fighting with the DB over that duty. You use these words as if they actually mean something? I'm thinking you don't quite understand the concept of abstraction - you seem to use it as if it's a "bad" word. The point of abstraction is to hide implementation details at a lower level of code such that the code built upon top of that abstraction need not worry about it. Specifically, if you try to abstract the fact that you don't care how, when and where a method (or procedure or function) goes about it's business of doing a request, then you are half way there to abstraction. Now build a procedural model that doesn't care about how data is stored (any number of database vendors or persistance or text files or ....). You soon find that building an abstraction in the Procedural code is just as hard (if not harder since you limit your toolbox). The fact that you assume that Procedural and Relational go hand in hand mean that you miss the obvious fact that you are tightly coupled to a specific modus operandi. Now if what you want is to not build an abstraction of the storage mechanism, I'd say that most OO languages are more than happy to oblige. After all, OO languages are a superset of procedural ones since they always have the ability to stuff all the code into a single static method.
|
Post #157,286
5/27/04 4:59:40 PM
|
My abstraction can beat up your abstraction
Relational is about much more than JUST "storage". That is what OO'ers don't get. They use it JUST for storage, but then end up reinventing all the other stuff in their OO app anyhow. They have to reinvent it because OO does not provide enough power out-of-the-box. To add it requires reinventing a (navigational) database. Relational provides a fairly standardized way to manage state and noun attributes that OO lacks. Everybody ends up doing it so differently. Plus, OO often hard-wires access paths into the design.
If I wanted to be able to easily swap database engines, then I could just use lowest-common-denominator SQL. But why don't I do this? Because I want to use the rest of the DB features also. BTW, SQL is an interface, not an implementation. Ponder that. The only way OO systems get out of vendor lock is to have a translation layer. There is no reason an equally powerful (and maybe equally flawed) intermediate query language could not be built for procedural. The fact that it does not exist likely means the need for it is not as great as OO'ers claim. Plus, the OO frameworks tend to be language-locked. Thus the choice so far, at this stage in the swap wars, is DB vendor lock or language lock. Pick your poison. If you can clearly demonstrate that OO is higher abstraction, without fuzzy zen talk, be my guest.
________________ oop.ismad.com
|
Post #157,296
5/27/04 5:23:52 PM
|
re: Relational is more than storage (new thread)
Created as new thread #157295 titled [link|/forums/render/content/show?contentid=157295|re: Relational is more than storage]
|
Post #157,302
5/27/04 5:44:43 PM
|
It wasn't a caching issue
It was CPU spinning due to searching through the table for data.
Though I admit to being puzzled as to why IF/THEN written in PL/SQL would beat a hash lookup from adding the right index to a table.
Cheers, Ben
To deny the indirect purchaser, who in this case is the ultimate purchaser, the right to seek relief from unlawful conduct, would essentially remove the word consumer from the Consumer Protection Act - [link|http://www.techworld.com/opsys/news/index.cfm?NewsID=1246&Page=1&pagePos=20|Nebraska Supreme Court]
|
Post #157,311
5/27/04 6:36:08 PM
|
No file IO
In memory always beats file IO.
The algorithm produced a hard-coded binary search in IF/THENs. :-)
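Out of curiosity, a toy sketch of that kind of generator in Python (the parm names and the PL/SQL-ish output are invented; the real generator wasn't described): it emits nested IF/ELSE text that binary-searches a sorted list of key/value pairs:

    def gen_lookup(parms: list, indent: str = "") -> str:
        """Emit nested IF/ELSE text that binary-searches sorted (key, value) pairs."""
        if len(parms) == 1:
            key, value = parms[0]
            return f"{indent}ret := '{value}';  -- {key}\n"
        mid = len(parms) // 2
        pivot = parms[mid][0]
        return (f"{indent}IF key < '{pivot}' THEN\n"
                + gen_lookup(parms[:mid], indent + "  ")
                + f"{indent}ELSE\n"
                + gen_lookup(parms[mid:], indent + "  ")
                + f"{indent}END IF;\n")

    print(gen_lookup(sorted([("alpha", "1"), ("bravo", "2"), ("charlie", "3")])))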
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,314
5/27/04 6:42:57 PM
|
I understand how it worked
It just surprises me that binary search in PL/SQL beats an index lookup.
After all index lookups can be implemented many ways, including binary search or a hash lookup. Personally with 2000 things I'd expect a properly coded hash lookup to beat a binary search.
Oh well. Optimization often has little surprises like that for obscure implementation reasons.
Cheers, Ben
To deny the indirect purchaser, who in this case is the ultimate purchaser, the right to seek relief from unlawful conduct, would essentially remove the word consumer from the Consumer Protection Act - [link|http://www.techworld.com/opsys/news/index.cfm?NewsID=1246&Page=1&pagePos=20|Nebraska Supreme Court]
|
Post #157,317
5/27/04 6:51:28 PM
|
Index lookup in code, or table index?
Table index requires file IO.
If you're talking about using string keyed hashes in the programming language, keep in mind that this is v8 PL/SQL. There ain't no sich beastie. Integer index only.
9i has associative arrays, but there are still some deficiencies to them.
Even if we had decent hashes, since the connection state is blown away between pages there's no place to keep the hash without it being recreated every time. Persistence in this situation requires that the data be represented by code.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,323
5/27/04 7:14:44 PM
|
That requirement would shock me
Table index requires file IO.
Why? Oracle should be smart enough to cache a frequently used table in RAM. If you change it, then you have to hit disk. But if you don't, then there is no reason in the world for them to be so stupid, and I don't think that they are stupid.
Furthermore your problem was that the query was spinning lots of CPU. Slowness from hitting I/O won't show up on your CPU usage statistics.
If I was going to guess the cause of the problem, I'm going to bet on low-level implementation details. An index lookup is fast. But before Oracle gets there the execution path has to include getting a latch (that's a kind of lock), look for the cached query plan for the current query, find it there (let's ignore the parse route since most of the time the common query has a parse in cache), release the latch, interpret that plan, realize that the plan says to do an index lookup, locate the appropriate index, realize that it is in cache, do the index lookup, look for the appropriate row, find it in cache, read it and return it. I've probably missed something that it does. You'll note that several of these steps involve string comparisons that are going to take CPU time.
That's the overhead which I think makes it possible to beat an index lookup using straight PL/SQL.
Cheers, Ben
To deny the indirect purchaser, who in this case is the ultimate purchaser, the right to seek relief from unlawful conduct, would essentially remove the word consumer from the Consumer Protection Act - [link|http://www.techworld.com/opsys/news/index.cfm?NewsID=1246&Page=1&pagePos=20|Nebraska Supreme Court]
|
Post #157,332
5/27/04 10:10:43 PM
|
Re: That requirement would shock me
Why? Oracle should be smart enough to cache a frequently used table in RAM. And if they're all frequently used? :-) Furthermore your problem was that the query was spinning lots of CPU. If I gave that impression, it was in error. Performance was decreased, but that doesn't necessarily mean more CPU. Basically the question you ask the profiling tool is "how much time is being spent doing foo?" Whether that time is spent doing IO or spinning the CPU doesn't matter. It's still time spent. And if the time spent is 15% of the overall time spent across the system, then it's a good candidate for optimization. I'll have to talk to the DBA on Tuesday to get the particulars. The developer who rewrote it was a little fuzzy on why it was so slow in the first place (this was two years ago). He just remembered that it had something to do with file IO, and that pinning the table made no difference.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,334
5/27/04 10:33:28 PM
|
Question about Oracle tables
Is it possible to define a memory-based partition that holds selectable tables? Not that this is a question about the particular problem/solution you're discussing, but I've always thought that allowing the programmer to set up memory-based tables for certain lookup tables that you know are used frequently might be a useful optimization technique.
|
Post #157,341
5/27/04 11:06:22 PM
|
You can pin them in memory.
Assuming you have enough memory. As I said, I'm going to have to take it up with the DBA as to why that wasn't sufficient.
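For reference, the usual knobs look something like this (hypothetical table name; as noted, whether they actually help is another matter):

-- keep the table's blocks at the most-recently-used end of the buffer cache
ALTER TABLE parm CACHE;

-- or, if a KEEP buffer pool has been configured, dedicate cache space to it
ALTER TABLE parm STORAGE (BUFFER_POOL KEEP);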
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #158,339
6/3/04 10:56:28 PM
|
Oracle tables pinned in memory.
As it turns out, I was wrong all around, and right for the wrong reasons.
The actual problem was lock contention. The table was pinned, but Oracle places micro locks for reads, effectively serializing reads on single blocks. The CPU was churned by grabbing and releasing locks on the parm data repeatedly. Since we make extensive use of that data (as I indicated, this is Bryce's dream architecture), the lock management became a significant consumer of CPU. Why Oracle needed to lock read-only data I neglected to find out. This is also a significantly dumbed-down version of the explanation I was given. :-P
Also, I misremembered the %CPU being used. The actual figure was MUCH higher. The DBA estimates that we would have been maxed out at 25% of our current capacity had the change not been made.
An interesting comment he made: Oracle considers the heavy use of a single parm table such as we were doing to be an application design flaw. :-)
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,338
5/27/04 11:04:10 PM
|
You did give the impression that CPU was the issue
In your description at [link|http://z.iwethey.org/forums/render/content/show?contentid=157019|http://z.iwethey.org...?contentid=157019] it said 15% of CPU time was spent on this query. I've been working from the assumption that this was the problem that needed solving.
If that is wrong, then reasonable theories to explain what didn't happen are, of course, superfluous.
Cheers, Ben
To deny the indirect purchaser, who in this case is the ultimate purchaser, the right to seek relief from unlawful conduct, would essentially remove the word consumer from the Consumer Protection Act - [link|http://www.techworld.com/opsys/news/index.cfm?NewsID=1246&Page=1&pagePos=20|Nebraska Supreme Court]
|
Post #157,340
5/27/04 11:05:24 PM
|
Whoops, my mistake.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,250
5/27/04 2:56:53 PM
|
Re: Table != Disk
I don't have to create no "jump table". I use the existing table from the DB. In the example of your 600 parameters, assuming they are in a dictionary array, we just do this:
eval(clientDict['loginStrategy']) Otherwise known as a "jump table". Here's a list of entry points; for this client, jump to this code to do the work. Thank you for the verification of the technique. Ah, I see. You believe this to be true because you've never worked on a large system. Juggling IO requirements is a constant battle. Ask any DBA for a large system. Even some OO'ers have complained that OR mappers slow things down. Maybe those OO'ers are just not as smart as you, and so have speed problems.
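For the record, in PL/SQL the "jump table" dispatch would look something like this - a sketch with invented table and procedure names, assuming native dynamic SQL (8i or later):

DECLARE
  strat    VARCHAR2(30);
  v_client NUMBER := 42;  -- hypothetical client id
BEGIN
  -- fetch the strategy name stored as data
  SELECT login_strategy INTO strat FROM client WHERE id = v_client;
  -- the "eval": jump to whichever procedure the data names
  EXECUTE IMMEDIATE 'BEGIN login_' || strat || '(:1); END;' USING v_client;
END;
/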
Which has nothing to do with what you quoted. We're talking about database engines. Juggling IO issues is a constant concern on this scale. Cached values are constantly being thrown out because so much data is moving through the system, causing hits to disk and file IO. Our DBA has a PhD in database management. I think he's probably slightly more versed in the particulars than you. I heard it was possible to cache *entire* Oracle tables in RAM so that there is little or no disk I/O. Maybe there is an Oracle bug or your DBA is dumb. See above. He has a PhD. You simply don't understand the issues involved. As I said, ask any DBA for a large system. I will agree that sometimes caching and other techniques don't work as we expect, and we have to resort to hacky shit like converting tables into giant case lists and the like. But just because an approach creates a problem for situation A does not necessarily mean we banish it from the face of the earth. If I find a specific performance bug in Java, does that mean all of OO is rotten? This is your main technique. Used on these scales it causes performance problems. Or do you have evidence to the contrary? And given that it causes performance problems, do you have any other suggestions?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,273
5/27/04 4:14:42 PM
|
This is an area in which I'm proud of Dejavu
The default Unit Server class has a "cache lifetime" value you can set; if a unit is not requested by client code within that period, it gets flushed out of the cache. Deployers can set a sweeper process to run every 5 minutes, every hour, every day, whatever they find is best--AND, can do that either at app startup with a config file, or just do it on the fly (OK, I haven't written the "on the fly" part yet, but it wouldn't be hard).
But the cool part IMO is that you don't have to use the default Server class or its default components. For example, I have a BurnedRecaller that, on the first request (even if it's filtered), loads _all_ objects of that Unit class into the cache and keeps them there. You could just as easily make one that does no caching at all.
In other words, I tried to make testing and then using different cache strategies monkey-easy.
|
Post #157,281
5/27/04 4:53:04 PM
|
Nifty.
Hibernate is pretty flexible with caching as well. There's even one caching strategy that clusters across machines.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,019
5/26/04 8:35:37 AM
|
Another little anecdote for you:
Here we have the exact kind of table you're talking about, for holding configuration information that affects program flow. We call them "parms". They're used everywhere:

IF pkg_getparm.get(client, 'SOME-PARM', 'A') = 'B' THEN
  do_the_b_thing;
END IF;

'get' is just a simple DML call that selects from the parm table. If the parm for that client isn't found, it selects again for the default value. If no default is found, then that 3rd argument is used. Then people noticed that we spent a sizable portion of our time just getting parms from the parm table. Like about 15% of the system's CPU time. This is a lot on a system that is constantly bumping up against capacity because of added features. So one of the PL/SQL guys wrote a Perl script that reads the parm table and constructs a stored procedure that uses IF/THEN/ELSE logic in a binary search pattern to contain all the parms (about 30,000 or so). This works much faster since we don't have any IO now; it's just PL/SQL code running. The cost goes down to about 2%. And these guys are a lot more experienced at writing and tuning Oracle code than you and I are. Now, considering that the cost on the database server for doing this in an OO fashion is pretty much 0%, I'd have to say that you're full of crap. There is a very noticeable hit from using parms in this fashion. And since 1) it adds no value (you still have to use text files to manage the config information in order to do code promotions), 2) it's still slower, 3) you're just emulating polymorphism anyway, and 4) now you have the extra development burden of maintaining config files, a script, and dealing with running the script every time you just want to change a lousy parm...
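To give a flavor of the generated code's shape, here's a sketch with a handful of invented parms standing in for the real 30,000:

CREATE OR REPLACE FUNCTION get_parm (p_key IN VARCHAR2)
RETURN VARCHAR2 IS
BEGIN
  -- generated IF/ELSE tree arranged as a binary search on the key:
  -- each comparison halves the remaining range, so 30,000 parms cost
  -- only about 15 string comparisons and zero IO
  IF p_key < 'PARM-C' THEN
    IF p_key = 'PARM-A' THEN RETURN 'on';
    ELSIF p_key = 'PARM-B' THEN RETURN 'off';
    END IF;
  ELSE
    IF p_key = 'PARM-C' THEN RETURN '30';
    ELSIF p_key = 'PARM-D' THEN RETURN 'B';
    END IF;
  END IF;
  RETURN NULL;  -- unknown parm; the caller falls back to its default
END;
/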
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,050
5/26/04 12:27:41 PM
|
OT: Scott, can we please do something about long lines?
The post that I replied to causes the lines in every post of the thread to need to be scrolled left to right to read them. I know that the post has a <pre> or some such HTML goobers on it (and all of Jake123's posts have similar formatting, which causes the same horrible thing in any thread he replies to).
My suggestion: Any posts that do not have <pre> or <tt> or <code> or any other goobers that would dick up the otherwise excellent formatting your engine provides should wrap to the browser border. Any posts that do contain such goobers should also wrap to the browser border, except for those lines that are goobered, which should render as formatted.
I have absolutely no idea how difficult this is, but boy... would it aid the usability of these fora (without making jake123 reformat his .sig!)...
thanx-
jb4 shrub·bish (Am., from shrub + rubbish, after the derisive name for America's 43rd president; 2003) n. 1. a form of nonsensical political doubletalk wherein the speaker attempts to defend the indefensible by lying, obfuscation, or otherwise misstating the facts; GIBBERISH. 2. any of a collection of utterances from America's putative 43rd president. cf. BULLSHIT
|
Post #157,055
5/26/04 1:04:43 PM
|
Dang, this was SUPPOSED to go into the Suggestions forum
So...Is there any way to get it there besides cutting and pasting it?
jb4 shrub·bish (Am., from shrub + rubbish, after the derisive name for America's 43rd president; 2003) n. 1. a form of nonsensical political doubletalk wherein the speaker attempts to defend the indefensible by lying, obfuscation, or otherwise misstating the facts; GIBBERISH. 2. any of a collection of utterances from America's putative 43rd president. cf. BULLSHIT
|
Post #157,079
5/26/04 3:45:01 PM
|
The other way, besides cut and paste, is to re-type it :-)
|
Post #157,200
5/27/04 10:37:13 AM
|
There isn't enough time in the world...
Oh! the Carnage!
Oh! the Humanity...!
;-)
jb4 shrub·bish (Am., from shrub + rubbish, after the derisive name for America's 43rd president; 2003) n. 1. a form of nonsensical political doubletalk wherein the speaker attempts to defend the indefensible by lying, obfuscation, or otherwise misstating the facts; GIBBERISH. 2. any of a collection of utterances from America's putative 43rd president. cf. BULLSHIT
|
Post #157,217
5/27/04 12:43:31 PM
|
HTH: As with Perl, There's More Than One Way To Do It
|
Post #157,069
5/26/04 3:08:03 PM
|
Perhaps one might play with CSS clip and overflow...?
|
Post #157,097
5/26/04 5:03:23 PM
|
Not that I'm aware of.
Unless Mr. Brewer's suggestion has legs.
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,021
5/26/04 8:45:08 AM
|
Retrieving the customer record anyway:
Besides, we have to retrieve the customer attributes record anyway, even under OO. Actually, the OO code never has to. There's nothing in the client record related to logins. The only tables hit are the user and account tables, and then only at the end when it's time to save the data. Now for the procedural code, let's say you put your "strategy pattern" parms in the customer record. All 600 of them. Does this seem like a good way to do that? All the normal client stuff plus 600 columns used for flow control? No? Then you need a parm table too, which is an additional hit over and above hitting the client table. See my anecdote above.
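For concreteness, the classic shape of such a parm table (column names invented, but this is the pattern my anecdote's pkg_getparm reads):

CREATE TABLE parm (
  client_id  VARCHAR2(10),            -- NULL here marks the default row
  parm_name  VARCHAR2(30) NOT NULL,
  parm_value VARCHAR2(100)
);

The 'get' call then tries the client-specific row, falls back to the NULL-client default row, and finally to the caller's third argument.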
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,048
5/26/04 12:19:22 PM
|
question
No? Then you need a parm table too, which is an additional hit over and above hitting the client table. Why would we need a separate table for them?
________________ oop.ismad.com
|
Post #157,096
5/26/04 5:01:17 PM
|
Re: question
So you think it's a good idea to have 600+ columns in a single table?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,098
5/26/04 5:06:03 PM
|
The other method is to have one column....
...that could be a free-form field with 600 different purposes.
|
Post #157,099
5/26/04 5:09:50 PM
|
Re: The other method is to have one column....
Unless you have to support all 600 different purposes at once.
Ah! Let's use a comma-delimited field and parse it every time we need a value! :-)
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,101
5/26/04 5:22:00 PM
|
That's the spirit!!!
|
Post #157,111
5/26/04 7:09:21 PM
|
I would have to look at the nature of the data
________________ oop.ismad.com
|
Post #157,116
5/26/04 7:41:22 PM
|
Are you kidding??
Bryce, you claim that you do this kind of thing ALL THE TIME. You should be able to roll this off the top of your head!
Here's the situation:
You have 600 "control points" or whatever you want to call them that you make decisions at with a control table. Organized by client.
You said that we'd just need a single hit to the client table to get all the parameters. This implies that you need 600+ columns on that table. Stop weaseling. True or false?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,162
5/27/04 1:43:07 AM
|
suggestion 3
You have 600 "control points" or whatever you want to call them that you make decisions at with a control table. Organized by client. You said that we'd just need a single hit to the client table to get all the parameters. This implies that you need 600+ columns on that table. Stop weaseling. True or false? I originally did not know you had 600. Anyhow, if you have a table like the one I described in the "flintstone" message, then why not load it into a dictionary array upon login if you want to avoid querying each record?
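Something like this sketch (invented names; string-keyed arrays need 9i or later, and this sidesteps the question of whether the cache survives between pages):

CREATE OR REPLACE PACKAGE parm_cache IS
  PROCEDURE load (p_client IN VARCHAR2);
  FUNCTION  get  (p_name   IN VARCHAR2) RETURN VARCHAR2;
END parm_cache;
/
CREATE OR REPLACE PACKAGE BODY parm_cache IS
  TYPE parm_tab IS TABLE OF VARCHAR2(100) INDEX BY VARCHAR2(30);
  parms parm_tab;  -- package-global: lives as long as the session state does

  PROCEDURE load (p_client IN VARCHAR2) IS
  BEGIN
    FOR r IN (SELECT parm_name, parm_value
                FROM parm
               WHERE client_id = p_client) LOOP
      parms(r.parm_name) := r.parm_value;  -- one query at login, RAM after that
    END LOOP;
  END load;

  FUNCTION get (p_name IN VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN parms(p_name);  -- raises NO_DATA_FOUND for an unknown parm
  END get;
END parm_cache;
/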
________________ oop.ismad.com
|
Post #157,174
5/27/04 8:55:03 AM
|
Re: suggestion 3
OK, so now you're saying your original design won't work. Progress.
So, we have a client table, and some parm table to be loaded at login.
What if it's a stateless environment? You have 1200 connections pooled between all the web users, and when a new page is requested global state is cleared from the last user to use that connection. All of the work is done in the database in stored procedures, so you don't have a place to keep cached stuff like that.
Now what do you do?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,228
5/27/04 1:23:42 PM
|
bottleneck black box
OK, so now you're saying your original design won't work. I did not say that. YOU claim it is too slow. I have to take your word for it on faith. It is possible you are just blowing smoke. Maybe if OO or some other goofy practice did not bloat up the REST of the system, we would have more margin for TOP practices. I cannot study your system to see where other bottlenecks may be. What if it's a stateless environment? You have 1200 connections pooled between all the web users, and when a new page is requested global state is cleared from the last user to use that connection. All of the work is done in the database in stored procedures, so you don't have a place to keep cached stuff like that. If you use OO or an OR mapper instead, then *something* has to store the state between submits. Some web languages allow one to store dictionary arrays in session variables and some don't (out of the box). According to my documentation, ASP can store dictionary arrays as session variables, but I have never tried it myself. One can perhaps serialize/unserialize the array as a string and session it that way, but I don't know if that will add too much processing to your fragile system.
________________ oop.ismad.com
|
Post #157,231
5/27/04 1:30:59 PM
|
Storing Dictionary as Session variables in ASP
Them dictionary thingies you refer to are called "Objects". Let's see: Set MyDictionary = CreateObject("Scripting.Dictionary") Behold the power of OOP!!! Seriously though, you have to be careful about the amount of data you stuff in the Session Objects (in ASP and other platforms as well). Memory is a valued commodity on a web server, and if you eat too much of it up with Session vars, the server has to start swapping them to and from disk. And then there's the question of distributed web processing, where servicing the web page may be distributed among several web servers. The Session vars have to be able to pass to the servicing web server, which may not be the one that instantiated the Session. Passing around large objects between servers can degrade performance.
|
Post #157,239
5/27/04 1:52:52 PM
|
It has gotta go *somewhere*
Them dictionary thingies you refer to are called "Objects". That is true. ASP does not have built-in dictionary arrays. They chose to implement them as an API instead, which means we can't use convenient array syntax. Note that one can implement them using the "handle" API approach also; thus, we don't need OOP to do the same. Seriously though, you have to be careful about the amount of data you stuff in the Session Objects (in ASP and other platforms as well). Memory is a valued commodity on a web server, and if you eat too much of it up with Session vars, it has to start swapping them to and from disk. That is why I would keep that 600 thingy in tables if possible and let the DB handle RAM caching. But if Scott caches it in RAM, then it is in RAM. It is either in RAM or in tables or in files. Scott's approach seems to be using RAM also; it will have the same problems caused by being in RAM as a sessioned array. Interesting material: [link|http://www.c2.com/cgi/wiki?ProgrammingWithoutRamDiskDichotomy|http://www.c2.com/cg...tRamDiskDichotomy]
________________ oop.ismad.com
|
Post #157,241
5/27/04 2:08:10 PM
|
ASP is OO
Perhaps not done well enuf, but OO nonetheless. Response.Write("I'm an OO method") Session("I'm_an_OO_session_variable") Request("I'm_an_OO_request_variable") VBScript is brain dead when it comes to constructing objects, but you ain't gonna get very far with ASP without objects. Perhaps they "could have", "should have" done it differently - but they didn't - and my guess is that they had a lot of "objective" reasons why they chose the path they chose.
|
Post #157,243
5/27/04 2:28:21 PM
|
re: ASP is OO
Response.Write("I'm an OO method") The first time I saw that, I thought, "oh shit. They borrowed the Java anti-Demeter dot bloat for print()". VBScript is brain dead when it comes to constructing objects, but you ain't gonna get very far with ASP without objects. Do you mean that one has to use existing OOP API's in order to talk to MS services, or that one must create their *own* classes in order to implement maintainable biz logic? and my guess is that had a lot of "objective" reasons why they chose the path they chose. Microsoft objective? Ha ha. Actually, they tend to copy what a competitor is selling well at a given time. MS is not known to love OO. They were slow to fix the inheritance in VB, for example.
________________ oop.ismad.com
|
Post #157,249
5/27/04 2:55:24 PM
|
ASP = COM
The first time I saw that, I thought, "oh shit. They borrowed the Java anti-Demeter dot bloat for print()". It's called COM (component OBJECT model). Do you mean that one has to use existing OOP API's in order to talk to MS services, or that one must create their *own* classes in order to implement maintainable biz logic? Meaning classes are second-class (as opposed to first-class) in VBScript. Note that they are still useful and used quite a bit in VBScript. Microsoft objective? Ha ha. Actually, they tend to copy what a competitor is selling well at a given time. MS is not known to love OO. They were slow to fix the inheritance in VB, for example. And I thought you were keen on MS, seeing as how Longhorn is trying to use SQLServer for the File System.
|
Post #157,258
5/27/04 3:18:32 PM
|
re: ASP = COM
It's called COM (component OBJECT model). I meant the syntax, not how it is implemented. Hmmm. I wonder how closely the ChiliSoft ASP clone sticks to the COM model? Meaning classes are second-class (as opposed to first class) in VBScript. What is the difference between second-class classes and first-class classes? Nah. maybe I don't wanna know. And I thought you were keen on MS, seeing as how Longhorn is trying to use SQLServer for the File System. MS does some things well, and some poorly. I will praise them for good stuff, and cuss them for stupid stuff. For example, I like the case-insensitivity in their tools. But their default of "smart quotes" in Word really sucks.
________________ oop.ismad.com
|
Post #157,262
5/27/04 3:33:09 PM
|
ChiliSoft ASP
I've not done more than play with it, but the ChiliSoft ASP works pretty good. Biggest problem is how well it deals with custom COM components written in VB and C++. It does provide a COM-like container, but it works only so far. If you stick with the standard five ASP objects (Application, Response, Session, Request, Server) and the four standard VBScript objects (Err, Dictionary, FileSystemObject, TextStream), then you won't have too many problems. Anyhow, the way Chilisoft implements ASP is by using OO programming techniques. But then, somehow I know you knew that I would say that. What is the difference between second-class classes and first-class classes? Nah. maybe I don't wanna know. Generally speaking, it's the ability of the language to add libraries to itself, and not have the distinction between those libraries you wrote vs. the standard libraries that come with the environment. MS does some things well, and some poorly. I will praise them for good stuff, and cuss them for stupid stuff. For example, I like the case-insensitivity in their tools. But their default of "smart quotes" in Word really sucks. So when they agree with you - they are being rational. But when they make a design decision you disagree with - they are being irrational.
|
Post #157,275
5/27/04 4:19:21 PM
|
Interesting terminology
What is the difference between second-class classes and first-class classes? Nah. maybe I don't wanna know. Generally speaking, it's the ability of the language to add libraries to itself, and not have the distinction between those libraries you wrote vs. the standard libraries that come with the environment.
I would have thought: first-class classes are themselves objects which can be passed around. Second-class classes are not objects. Both can be used to create objects, but only one is itself an object. Or something.
|
Post #157,279
5/27/04 4:32:41 PM
|
You're probably correct convention-wise
(Had a link I was gonna post on the subject of "first-classness" having to do with first-class messages, but the site is unresponsive at the moment.)
Anyhow, from my standpoint, I do think that the ability to build libraries from the language should count for something (should probably invent a new term like VBScript is Adjective and/or Adverb based - not Noun or Verb based).
|
Post #157,259
5/27/04 3:30:05 PM
|
Re: bottleneck black box
OK, so now you're saying your original design won't work. I did not say that.
Yes, you did. Your initial design was [link|/forums/render/content/show?contentid=156950|put all the "features" in the client table]":

// select login strategy
customer = query('select * from customer where id=...)
strat = customer.loginStrategy
if strat='A' then
....
elseif strat='B' then
...etc...
So I asked, what if you have 600 "features"? At which point you said use a different table. So the original design won't work, correct? As a matter of fact, 600 was just a number I pulled out of thin air. Checking the code (with grep, natch), we have about 4000 instances of parm-based decisions being made, and 10,000 instances of "if client = foo" decisions being made.

Maybe if OO or some other goofy practice did not bloat up the REST of the system, we would have more margin for TOP practices. Again, 1 million lines of PL/SQL code. Only 0.7% of the system is OO, and that's the bridge login. And any margin is going to go towards adding more clients to the system and doing useful work, not supporting poorly performing, unnecessary practices.

If you use OO or an OR mapper instead, then *something* has to store the state between submits. We're not talking about OO. We're talking about doing everything in the database. A web request comes into Apache, mod_plsql determines that a particular URL maps to a particular PL/SQL package, and the rest is ALL database code. This is a stateless environment. Since you can't cache (the connections have DBMS_SESSION.RESET_PACKAGE called on them between pages), the parm table becomes a performance bottleneck.

And the system is hardly fragile. We have 400K pieces of inventory. Half a million users; 10 million if you include representatives (look around: 1 out of 30 people you know uses our system in some way). We process a good 20% of all the transactions in our market. A billion dollars changes hands through our system every day. There's just no room for performance-sucking crap that doesn't add any value. And in fact, the whole procedural hairball doesn't scale as well as it needs to, so we're moving away from that now. And given your utter lack of experience in this arena, nothing you would be able to tell us after looking at the code is going to help, especially since you're not proposing anything we aren't already doing, albeit on a much larger scale than you've ever contemplated.
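To make that constraint concrete, a sketch using the hypothetical parm_cache package from upthread:

-- page 1: some request warms the cache on a pooled connection
BEGIN
  parm_cache.load('CLIENT1');
END;
/

-- between pages, the pool resets session state:
BEGIN
  DBMS_SESSION.RESET_PACKAGE;  -- wipes all package globals, cache included
END;
/

-- page 2 may be a different user on the same connection: the array is gone,
-- so "load once at login" degenerates into a reload on every page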
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #157,265
5/27/04 3:47:54 PM
|
how they relate
I am confused about how the config params relate to the login strategy, other than containing the strategy name as one of the params.
________________ oop.ismad.com
|
Post #157,271
5/27/04 4:11:02 PM
|
Re: how they relate
Login params are a configuration value. Just like all the other 4000 configuration values. It's a value used to determine what to do at a branch point: which decryption method do I use? which parsing method? what's the timestamp window? what's the home page? This is classic control table technique.
So your suggestion now is to just store bridge login parm values in the client table? Where do you draw the line? Why not the client's account control parms, or their routing parms?
Regards,
-scott anderson
"Welcome to Rivendell, Mr. Anderson..."
|
Post #156,986
5/25/04 9:30:28 PM
|
I don't understand why you mention "eval"
|
Post #157,002
5/26/04 1:27:26 AM
|
re: I don't understand why you mention "eval"
See: [link|http://www.geocities.com/tablizer/prpats.htm|http://www.geocities...blizer/prpats.htm]
________________ oop.ismad.com
|
Post #157,044
5/26/04 10:57:29 AM
|
I get it now.
Emphases mine: Some OO fans say that putting expressions or code into tables is "doing OO without OOP". Rather than get caught up in chicken-or-egg terminology battles, let's just say that I prefer tables because of their 2-D nature as opposed to the 1-D nature of OO code. Placing code in collections pre-dates the birth of OOP (Simula-67) by roughly a decade, at least. OOP gets credit for using classes to do such, not collections. LISP pioneered many aspects of treating code and data in a similar fashion. Even without Eval or Execute, the p/r version is still better than the OO version in my opinion. I will grant that the OO approach offers a bit more potential compile-time checking, but not much else. (Perhaps sacrificing run-time changes/additions in the process.) Further, it seems more logical to use the same collections management approach for both program code AND data. Otherwise, you are duplicating effort (special IDE's), duplicating schema information, and increasing the learning curve. Collection handling should be factored into a single methodology regardless of whether it is code or data. LISP came closer to the right idea by treating code like data instead of data like code, which is what OO does wrong.
Got it. Subjectivity wins again. I think from now on you're going to have to work very hard to convince anyone here that you don't prefer OO, albeit a twisted version.
|
Post #157,053
5/26/04 12:43:59 PM
|
OO is just a (bad) reinvention of 60's databases with
...behavioral dispatching tacked on.
________________ oop.ismad.com
|
Post #157,057
5/26/04 1:10:43 PM
|
No. You are a proponent of OO programming.
I just didn't see that you were so exotic in your methodologies.
Come on Bryce don't make me taunt you, causing Ben to hate me MORE!
You are a strong OO proponent.
-- [link|mailto:greg@gregfolkert.net|greg], [link|http://www.iwethey.org/ed_curry|REMEMBER ED CURRY!] @ iwethey
Give a man a match, he'll be warm for a minute. Set him on fire, he'll be warm for the rest of his life!
|
Post #157,085
5/26/04 4:17:57 PM
|
The question that launched a thousand arguments
I keep saying that OO reinvents databases in app code. Thus, are OO programs really databases, or are databases with code really OO? This issue came up before, but nobody could agree on a definition of OO in order to settle it once and for all.
The biggest separation between OO and TOP is that OO wants to use code (text) to make "records" (aka "objects/classes") and use pointer-hopping to navigate relationships (aka a "navigational database") rather than relational algebra. Reduce the dependence on text-code and pointer-based navigation, and TOP and OO would not be that much different.
________________ oop.ismad.com
|
Post #157,086
5/26/04 4:21:36 PM
|
That explains a lot
Thus, are OO programs really databases or are databases with code really OO? Do you really believe this question has an answer?
|