It appears from your description that you have several servers, each with a cable to a switch in the server room. The speed of those connections is the main thing to look at.
If a room with its own switch does transactions mainly with just one of the servers, performance isn't going to be improved (and cabling costs will be greatly increased) by wiring it directly to the switch the servers are on - unless the server's connection to that switch is gigabit or 10-gigabit and the run to the server room is only 100 Mbps.
If the rooms hit multiple servers heavily (or if the servers connect to their switch at 1 gigabit or higher), then separate cables to the switch the servers are on will be faster, depending on traffic density.
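To put a rough number on that, here's a tiny Python sketch (the link speeds are illustrative assumptions, not your actual figures): the usable throughput between a room and a server is capped by the slowest link on the path, so a new home-run cable buys nothing while any hop is still 100 Mbps.

    # Back-of-envelope: throughput between a room and a server is capped
    # by the slowest link on the path (all speeds are hypothetical, in Mbps).
    def path_throughput_mbps(*link_speeds_mbps):
        return min(link_speeds_mbps)

    # room PC -> room switch uplink -> server switch -> server NIC
    print(path_throughput_mbps(100, 100, 1000, 1000))    # 100: the 100-Mbps hops are the cap
    print(path_throughput_mbps(1000, 1000, 1000, 1000))  # 1000: only gigabit end-to-end helps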
If the switch serving the room connections is a peripheral switch, NOT the one the servers are on, then you're back to the single-cable situation you have now - no difference whatever except much higher cabling cost - unless the server switch is a gigabit switch and the peripheral switches connect to it at gigabit, in which case it would still be faster than 100-Mbps connections to the room switches.
If the traffic density is low, it doesn't make one bit of difference how you do it, but if you're doing intense work against an Oracle server or something like that, optimization is worthwhile.
Even so, I'd recommend local switches, each with a single gigabit uplink back to a gigabit switch at the servers, to minimize cabling expense - but if the peripheral switches in the server room don't connect to the server switch and servers at gigabit speed, you're just spinning your wheels doing anything different.
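As a rough sanity check on that recommendation, here's a sketch of the oversubscription math for one room switch with a single gigabit uplink (the port count and per-client loads are assumptions for illustration): the uplink only becomes the bottleneck when many clients push hard at the same moment.

    # Oversubscription on one room switch's gigabit uplink (hypothetical numbers).
    uplink_mbps = 1000
    clients = 20               # ports in use in the room, assumed
    avg_load_mbps = 30         # typical per-client load, assumed
    burst_load_mbps = 100      # per-client burst at 100-Mbps access speed

    print(clients * avg_load_mbps / uplink_mbps)    # 0.6 -> uplink has headroom on average
    print(clients * burst_load_mbps / uplink_mbps)  # 2.0 -> 2:1 oversubscribed only if all burst at once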
Cabling is expensive and difficult to move around for M&A (moves and adds), so you don't want to incur cable expense unless it's really worthwhile.