
Nevermind, I think I see what you are saying
It is much faster now (see the original posts and history), in that I just allocate a new array element in the loop and then use the toString method to convert the entire array to a string with a comma between each element.

Thanks. I'd still like to find a method whereby the src URL is an input stream or generator function, instead of building the XBM array in memory.
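
A minimal sketch of the approach described above, borrowing the HexMap lookup table and myImageArray two-dimensional array that appear later in the thread; the allBytes and xbmBody names are made up for illustration and are not from the original code:

   // Build the whole output as one array of hex strings, then let
   // toString() insert the commas in a single pass at the end.
   var allBytes = [];
   for (var i = 0; i < myImageArray.length; i++) {
      for (var j = 0; j < myImageArray[i].length; j++) {
         allBytes[allBytes.length] = HexMap[myImageArray[i][j]];
      }
   }
   var xbmBody = allBytes.toString();   // e.g. "0x00,0xff,..."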
You almost have it...
You are building up the entire output array and then joining it. I was suggesting doing that incrementally. Like this (untested):

   var myImageString = [];
   for (var i = 0; i < myImageArray.length; i++) {
      var myImageThisRow = [];
      for (var j = 0; j < myImageArray[i].length; j++) {
         myImageThisRow.push(HexMap[myImageArray[i][j]]);
      }
      // Reduce the number of objects to be nice on gc
      myImageString.push(myImageThisRow.toString());
   }
Past experience with large JavaScript data structures tells me that while you'll thrash a lot less than when you create tons of garbage, the garbage collector still thrashes on lots of small objects that aren't to be collected. (It keeps on figuring out what it can collect, and finds nothing.) The above strategy keeps the number of objects that it wants to handle under control.
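
For completeness, a one-line way to finish the job once the loop above has run, using the variable names from the snippet; the xbmBody name is hypothetical:

   // Array.toString() joins the row strings with commas, same as join(",")
   var xbmBody = myImageString.toString();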

As for not building it in memory, I wish that I knew how to do that, but I don't. I only know how to speed things up slightly.

Cheers,
Ben
"good ideas and bad code build communities, the other three combinations do not"
- [link|http://archives.real-time.com/pipermail/cocoon-devel/2000-October/003023.html|Stefano Mazzocchi]
Gettin in Tune
I see the push method is better at managing memory than simply adding onto the array.

Much, much faster now, by orders of magnitude. Thanx.
It isn't the push that made the difference
I just did that to reduce bookkeeping in the code.
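
A hedged illustration of that bookkeeping point (the values here are made up, not from the thread): appending by index and appending with push() produce the same array; push() just spares you from writing the index expression yourself.

   var row = [];
   // Manual append: you index one past the current end yourself.
   row[row.length] = "0x00";
   // push() does the same thing without the explicit index bookkeeping.
   row.push("0xff");
   // Either way, row is now ["0x00", "0xff"].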

The problem is that as you use lots of memory, you trigger frequent garbage collection runs. If there are lots of small objects, those runs take a long time. So you are spending a lot of CPU looking for garbage that isn't there.

Switching from a very large number of small objects to a bunch of large objects plus a smaller number of small ones doesn't materially affect how often garbage collection runs, but it does make each run (at least with some allocators) much faster, because the collector has fewer objects to look at before it decides that there is no garbage to eliminate.
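
To put rough numbers on that, a back-of-the-envelope sketch; the figures are hypothetical, chosen only to illustrate the point, not taken from the thread:

   // Hypothetical 100 x 100 image.
   var rows = 100, cols = 100;
   // Flat approach: every element string stays live until the final
   // toString(), so each GC pass scans all of them and frees nothing.
   var flatLiveObjects = rows * cols;      // 10,000
   // Per-row approach: at most one row of element strings is in flight,
   // plus the accumulated row strings.
   var perRowLiveObjects = cols + rows;    // ~200

The same total work gets done either way, but the per-row version gives each collection pass only a couple of hundred live objects to examine instead of ten thousand.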

It isn't an obvious way to tune performance unless you've faced the same problem before. (Which I have.)

Cheers,
Ben
"good ideas and bad code build communities, the other three combinations do not"
- [link|http://archives.real-time.com/pipermail/cocoon-devel/2000-October/003023.html|Stefano Mazzocchi]
     Creating images on the fly in client side JavaScript - (ChrisR) - (10)
         ???? wtf? -NT - (deSitter) - (3)
             Just trying to generate a dynamic image on the client -NT - (ChrisR) - (2)
                 with Javascript? (scratches head) - (deSitter) - (1)
                     Admittedly, it's a hack - (ChrisR)
         The only thing that leaps out at me... - (ben_tilly) - (5)
             Not sure I follow your recommendation - (ChrisR)
             Nevermind, I think I see what you are saying - (ChrisR) - (3)
                 You almost have it... - (ben_tilly) - (2)
                     Gettin in Tune - (ChrisR) - (1)
                         It isn't the push that made the difference - (ben_tilly)
