What database?
Some things we did in DB2.
1. Reduce transaction scope: keep each atomic unit of work as small as possible (first sketch after this list).
2. Reduce interdependence/locking between tables as much as possible. If you can live without foreign key relationships, you may want to remove them and then run "audits" for orphaned rows during non-prime time (orphan-audit sketch below).
3. A SAN (cache and fast I/O) helps a lot. We used Veritas to stripe across multiple storage units (we call them bricks).
4. Large bufferpools.
5. Use DB monitoring tools to find long-running transactions in the DB. You may not need the EJB monitoring tools if the DB tools can already tell you that the bottleneck is the database.
6. Don't depend heavily on data bean logic. Where the DB tools tell you that data bean performance is bad, code your own queries directly in JDBC (direct JDBC sketch below).
7. In the end, I think the big killer of performance is dependent table locking. That's why I listed transaction atomicity as #1. Even if the process writes several records, the DB doesn't have to wait if the table dependencies are reduced or eliminated. One strategy we used well was to create a master "magic number" for the base transaction, then use it, plus a new number, for each dependent table row; the tables were linked by the auto-generated number on the primary record (key-generation sketch below).
8. Make sure you have enough DB connection pool objects, but not too many: too few starves your worker threads, too many just adds contention on the database side (pool sketch below).
9. One big thing we found with Java on Solaris: Solaris would not let a single process take much more than about 40-50% of the machine's CPU. Even though our multithreaded Java process was the only application process on the machine, besides the database itself, we still could not drive the CPU much above 50%. To get higher transaction rates, we split our application into about 4 processes (on a 4-CPU system), and then we were able to get everything out of the machine.
10. If you're doing #1 (small transactions), then you need to keep your redo log space small, and your changed-pages threshold low, too. Checkpoint frequently; don't let too many transactions build up. For DB2 OLTP, IBM recommends the changed-page threshold only reach about 10%. Flushing a large checkpoint can pause your system for a while.
11. See how much of the processing you can move out of prime time. Can you capture the transaction, parse it in memory to make the decision, and skip the "normalized" writes to disk? Then, after business hours, reparse those same transactions and do the disk I/O to the normalized tables (deferred-write sketch below). This doesn't work if you need to check the results of earlier transactions from the same business day, or if the decision requires most of the data to be stored as an audit trail.
12. If you're using JBuilder, try using OptimizeIt! to figure out where your Java is slow.
13. Finally, if the database is SQL Server (God help you!), make sure you have a good DBA do things like turn off automatic statistics updates (SQL Server's equivalent of a DB2 runstats). We had SQL Server pause our system for a good 2-3 minutes while it recomputed statistics on a million-row table.
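Some sketches of the above follow, in list order; all table, column, and class names are made up for illustration. First, item 1: one small unit of work that touches a single table and commits immediately, so locks are released as fast as possible (this is also the hand-coded JDBC shape item 6 argues for):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class SmallTxn {
        public static void markShipped(Connection con, long orderId) throws SQLException {
            con.setAutoCommit(false);           // one explicit, short transaction
            PreparedStatement ps = null;
            try {
                ps = con.prepareStatement(
                    "UPDATE ORDERS SET STATUS = 'SHIPPED' WHERE ORDER_ID = ?");
                ps.setLong(1, orderId);
                ps.executeUpdate();
                con.commit();                   // release locks immediately
            } catch (SQLException e) {
                con.rollback();
                throw e;
            } finally {
                if (ps != null) ps.close();
            }
        }
    }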
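For item 2, with the foreign key constraint removed, an off-hours audit can hunt for orphaned child rows (ORDER_ITEMS and ORDERS are hypothetical):

    import java.sql.*;

    public class OrphanAudit {
        public static void report(Connection con) throws SQLException {
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery(
                "SELECT I.ITEM_ID FROM ORDER_ITEMS I " +
                "LEFT OUTER JOIN ORDERS O ON I.ORDER_ID = O.ORDER_ID " +
                "WHERE O.ORDER_ID IS NULL");    // children with no parent row
            while (rs.next()) {
                System.out.println("Orphaned item: " + rs.getLong(1));
            }
            rs.close();
            st.close();
        }
    }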
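For item 6, a hand-coded JDBC finder that fetches only the column you need, instead of letting a data bean load the whole row:

    import java.sql.*;

    public class DirectLookup {
        public static String customerName(Connection con, long custId) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                "SELECT NAME FROM CUSTOMERS WHERE CUST_ID = ?");
            try {
                ps.setLong(1, custId);
                ResultSet rs = ps.executeQuery();
                String name = rs.next() ? rs.getString(1) : null;
                rs.close();
                return name;
            } finally {
                ps.close();
            }
        }
    }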
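For item 7, a sketch of the magic-number approach, assuming hypothetical TXN_MASTER and TXN_DETAIL tables. The static counter is only a stand-in for the real key source (ours was a key table updated in its own tiny transaction); the point is that no insert has to read or lock another table:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class MagicNumberTxn {
        // Stand-in for the real key generator; hypothetical, for illustration.
        private static long keySeq = 1;
        private static synchronized long nextKey() { return keySeq++; }

        public static void insertOrder(Connection con, long custId, String[] items)
                throws SQLException {
            long txnId = nextKey();             // master "magic number" first
            con.setAutoCommit(false);
            PreparedStatement master = con.prepareStatement(
                "INSERT INTO TXN_MASTER (TXN_ID, CUST_ID) VALUES (?, ?)");
            PreparedStatement detail = con.prepareStatement(
                "INSERT INTO TXN_DETAIL (TXN_ID, SEQ_NO, ITEM) VALUES (?, ?, ?)");
            try {
                master.setLong(1, txnId);
                master.setLong(2, custId);
                master.executeUpdate();
                for (int i = 0; i < items.length; i++) {
                    detail.setLong(1, txnId);   // linked by the master key...
                    detail.setInt(2, i + 1);    // ...plus a per-row number
                    detail.setString(3, items[i]);
                    detail.executeUpdate();
                }
                con.commit();                   // no reads of other tables needed
            } catch (SQLException e) {
                con.rollback();
                throw e;
            } finally {
                master.close();
                detail.close();
            }
        }
    }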
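For item 8, the essence is a hard cap: block rather than open more connections. A minimal pool sketch, where the size you pass in is the number to tune:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.LinkedList;

    public class SimplePool {
        private final LinkedList free = new LinkedList();

        public SimplePool(String url, String user, String pw, int size)
                throws SQLException {
            for (int i = 0; i < size; i++) {
                free.add(DriverManager.getConnection(url, user, pw));
            }
        }

        public synchronized Connection checkout() throws InterruptedException {
            while (free.isEmpty()) {
                wait();                     // block instead of opening more
            }
            return (Connection) free.removeFirst();
        }

        public synchronized void checkin(Connection con) {
            free.add(con);
            notify();                       // wake one waiting thread
        }
    }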
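For item 11, a sketch of deferring the normalized writes: answer the caller from an in-memory decision and append the raw transaction to a flat journal that a nightly job replays into the normalized tables. The path, record format, and decision rule are all made up:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    public class DeferredWriter {
        private final PrintWriter journal;

        public DeferredWriter(String path) throws IOException {
            journal = new PrintWriter(new FileWriter(path, true)); // append mode
        }

        public synchronized String handle(String rawTxn) {
            String decision = decide(rawTxn);   // parse + decide in memory only
            journal.println(decision + "|" + rawTxn);
            journal.flush();    // push to the OS; a real system would also sync
            return decision;
        }

        private String decide(String rawTxn) {
            // Hypothetical business rule standing in for the real parser.
            return rawTxn.indexOf("AMT=0") >= 0 ? "REJECT" : "ACCEPT";
        }
    }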
Our application was a multithreaded Java server with no EJB. The owner of our company wanted to get to market quickly and felt that EJBs were not a good investment. From what I've heard in the marketplace, data beans do not perform well in most environments. I've heard horror stories about tools that would issue an UPDATE/SET for each individual data element in a persistent bean. I would think most of these have been fixed by now, but if you're running an older server, maybe not.
Right before I left, we enhanced the product to run multiple instances (to get around the CPU utilization problem in #9 and drive transaction rates higher). The sockets code needed to get the processes talking to one another isn't that bad (sketch below).
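A minimal sketch of that kind of socket plumbing: one instance listens, peers connect and exchange line-oriented messages. The port and message format are assumptions for illustration:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class PeerLink {
        public static void listen(int port) throws IOException {
            ServerSocket server = new ServerSocket(port);
            while (true) {
                Socket peer = server.accept();      // one peer per connection
                BufferedReader in = new BufferedReader(
                    new InputStreamReader(peer.getInputStream()));
                String msg = in.readLine();         // e.g. "TXN|12345|..."
                System.out.println("from peer: " + msg);
                peer.close();
            }
        }

        public static void send(String host, int port, String msg) throws IOException {
            Socket s = new Socket(host, port);
            PrintWriter out = new PrintWriter(s.getOutputStream(), true);
            out.println(msg);
            s.close();
        }
    }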