
Database?
What database?

Some things we did in DB2.

1. Make each transaction as small (atomic) as possible.
2. Reduce interdependence/locking between tables as much as possible. If you can live without foreign key relationships, you may want to remove them and then do "audits" during non-prime time.
3. A SAN (cache and fast I/O) helps a lot. Use Veritas to stripe across multiple storage units (we call them "bricks").
4. Large bufferpools.
5. Use DB monitor tools to find long running transactions in the DB. You may not need the EJB monitor tools if the DB tools can tell you that it's the database.
6. Don't depend heavily on data bean logic; code your own queries directly in JDBC where the DB tools tell you the data bean performance is bad.
7. In the end, I think the big killer of performance is dependent table locking. That's why I listed transaction atomicity as #1. Even if the process writes several records, if the table dependency is reduced or eliminated, then the DB doesn't wait. One strategy that worked well for us is to create a master "magic number" for the base transaction, then use it, plus a new number, for dependent tables. The tables were linked by the auto-generated number on the primary record.
8. Make sure you have enough DB connection pool objects, but not too many.
9. One big thing we found with Java on Solaris: Solaris will not let a process take over more than about 40-50% of the CPU of the machine. Even though our multithreaded Java process was the only application process on the machine, besides the database itself, Solaris still would not let us drive the CPU much above about 50%. To get higher performance rates, we split our application into about 4 processes (on a 4 CPU system), and then we were able to get everything out of the machine.
10. If you're doing #1, small transactions, then you need to keep your redo log space small, and the changed-page threshold low, too. Checkpoint frequently; don't let too many transactions build up. For DB2 OLTP, IBM recommends the changed-page threshold only get to about 10%. Cleaning out a large checkpoint can cause your system to pause for a time.
11. See how much of the processing you can move out of "prime time". Can you capture the transaction, parse it in memory to make a decision, then avoid the "normalized" writes to disk? Then, after business hours, reparse those same transactions again, and do the disk I/O to the normalized tables then. This doesn't work if you need to check the results of previous transactions from the same business day, or if the results of the decision require most of the data be stored as an audit.
12. If you're using JBuilder, try using OptimizeIt! to figure out where your Java is slow.
13. Finally, if the database is SQL Server (God help you!), make sure you have a good DBA do things like turn off automatic statistics updates. We had one SQL Server pause our system for a good 2-3 minutes while it recomputed statistics on a million-row table.
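The "magic number" keying strategy from item 7 can be sketched in a few lines. This is a minimal illustration, not our actual code: the in-memory `AtomicLong` sequences stand in for the database's auto-generated keys, and the `Detail` record is a made-up dependent row.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of item 7: give the base transaction one master "magic
// number", then key every dependent row by that number plus its own
// sequence value, so dependent tables link back to the primary record
// without contending on it. The AtomicLongs stand in for database
// sequences / identity columns (an assumption for this sketch).
public class MagicNumberSketch {
    static final AtomicLong masterSeq = new AtomicLong();
    static final AtomicLong detailSeq = new AtomicLong();

    // hypothetical dependent-table row
    record Detail(long masterId, long detailId, String payload) {}

    public static void main(String[] args) {
        long masterId = masterSeq.incrementAndGet(); // one number per base transaction

        List<Detail> details = new ArrayList<>();
        for (String item : new String[] {"line-1", "line-2"}) {
            // each dependent row carries the master id plus its own id
            details.add(new Detail(masterId, detailSeq.incrementAndGet(), item));
        }

        for (Detail d : details) {
            System.out.println(d.masterId + ":" + d.detailId + ":" + d.payload);
        }
    }
}
```

Because each dependent row already carries its full key, the inserts into the dependent tables don't have to read (and lock) the primary record to find it.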

Our application was a multithreaded Java server with no EJB. The owner of our company was interested in getting to market quickly, and felt that EJBs were not a good investment. From what I've heard in the marketplace, data beans do not perform well in most environments. I've heard horror stories about tools which would issue an UPDATE ... SET for each individual data element in a persistent bean. I would think most of these have been fixed by now, but if you're running an older server, maybe not.
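To make the horror story concrete, here is roughly what the difference looks like in SQL terms. The table and column names are hypothetical; the point is the number of round trips.

```java
// Sketch of the per-field UPDATE problem: some older persistence tools
// reportedly issued one UPDATE per changed bean field, where a
// hand-written JDBC statement does the same work in one round trip.
// Table and column names (CUSTOMER, NAME, EMAIL, PHONE) are made up.
public class UpdatePatternSketch {
    public static void main(String[] args) {
        // what the bad tools generated: one statement per field
        String[] perField = {
            "UPDATE CUSTOMER SET NAME = ? WHERE ID = ?",
            "UPDATE CUSTOMER SET EMAIL = ? WHERE ID = ?",
            "UPDATE CUSTOMER SET PHONE = ? WHERE ID = ?",
        };
        // what you'd write by hand in JDBC: one statement, one round trip
        String combined =
            "UPDATE CUSTOMER SET NAME = ?, EMAIL = ?, PHONE = ? WHERE ID = ?";

        System.out.println(perField.length + " statements vs 1");
        System.out.println(combined);
    }
}
```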

Right before I left, we enhanced the product to run multiple instances (to get around the CPU utilization problem and drive transaction rates higher). The sockets code to get the processes talking to one another isn't that bad.
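The sockets plumbing really is small. Here's a minimal sketch, not our actual protocol: one end listens on a loopback port and ACKs a one-line request. Both ends run in a single JVM here (a thread stands in for the second process), and the "TXN-42" message is made up for illustration.

```java
import java.io.*;
import java.net.*;

// Minimal sketch of inter-instance sockets: one instance listens,
// another connects and sends a line of work, the listener replies.
// In the real product these would be separate processes.
public class SocketSketch {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // 0 = any free port
            Thread worker = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("ACK " + in.readLine()); // echo back with an ACK
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            worker.start();

            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println("TXN-42");                // send one request line
                System.out.println(in.readLine());    // read the reply
            }
            worker.join();
        }
    }
}
```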



One More Thing..
Another beef I have is with vendors telling me how good their data beans are.

Unless the tool reverse-engineers the database, or asks you which fields are indexed, the bean can't know the fastest access paths to the records.

Even without beans, I've seen JSP applications speed up by a factor of 10 just by adding an index to a commonly used column in the database (which wasn't previously indexed).

So you need to get a schema of the database and make sure that the columns hit most frequently by the EJBs are indexed.

If they aren't, one good strategy is to add a date/time stamp column and index it. Then use date-range searches to limit how much of the table the query has to scan.
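In SQL terms, the date-range trick looks something like the statements below. The table and column names (ORDERS, CREATED_TS, and so on) are made up for illustration; the query is parameterized the way you'd feed it to a JDBC PreparedStatement.

```java
// Sketch of the date-range strategy: index a timestamp column, then
// bound every lookup by a date range so the optimizer can use the
// index instead of scanning the whole table. All names are
// hypothetical.
public class DateRangeSketch {
    static final String CREATE_INDEX =
        "CREATE INDEX IX_ORDERS_CREATED ON ORDERS (CREATED_TS)";

    // parameterized for use with a JDBC PreparedStatement:
    // ps.setTimestamp(1, from); ps.setTimestamp(2, to); ps.setLong(3, custId);
    static final String RANGE_QUERY =
        "SELECT ORDER_ID, STATUS FROM ORDERS " +
        "WHERE CREATED_TS >= ? AND CREATED_TS < ? AND CUSTOMER_ID = ?";

    public static void main(String[] args) {
        System.out.println(CREATE_INDEX);
        System.out.println(RANGE_QUERY);
    }
}
```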

I'll bet many beans are causing table scans. On large tables, that could be expensive, even with a SAN.

Glen Austin
