If you have to iterate over every possible combination, you're kinda fucked. The real task is figuring out how NOT to iterate over every possible combination.
For example, take the closest pair of points problem: how do you take a collection of points and figure out which pair is the closest together? Well, one way is to compare every point's distance to every other point.
Assume each point is stored in an array foo (this is in Object REXX, still my most familiar language) and you're going to store the distances in array bar:
n = foo~items
bar = .array~new(n, n)                    /* n-by-n table of pairwise distances */
do i = 1 to n
  do j = 1 to n
    if i = j then do
      bar[i, j] = 0
      iterate
    end
    bar[i, j] = foo~distance(i, j)        /* assuming a method for finding distance exists for array foo */
  end
end

/* scan for the smallest off-diagonal entry; that's the closest pair */
best = bar[1, 2]
do i = 1 to n
  do j = 1 to n
    if i \= j & bar[i, j] < best then best = bar[i, j]
  end
end
This really sucks. It's O(n^2) in the number of points, and will take a long time for any set of points that's more than trivially large.
However, another way to do it is to break the problem up into manageable subsets. For example, sort the array of points by x position (i.e., each point is defined by its (x,y,z) coordinates).
Comparing a set of three points takes three operations, so you use that as your base case: break the x-sorted array of points into sets of three, and find the smallest distance in each. You also know where the divisions between the sets fall (at (x1+x2)/2 for the adjacent points on either side of a division), so you can compare the distance of the closest pair in a set of three against that pair's distance to the nearest division. If the division is closer, you then have to compare the points closest to each division to see if they are closer to each other than to any of the points in their own set of three.
The neat thing is, if you use a merge sort approach, you can do this automagically as you break the set up and merge it back together again. This brings the complexity down to O(n log n) instead of O(n^2), which is something that can be done in a reasonable amount of time.
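For what it's worth, here's roughly what the divide-and-conquer version looks like in Object REXX. This is a sketch of the standard recursive formulation rather than the sets-of-three walkthrough above, and it re-sorts each strip by y instead of merging, which costs O(n log^2 n) instead of O(n log n) but keeps the code short. The .point class, both comparators, and the sample points are all invented for the example, and it compares squared distances throughout so it never needs a square root:

/* divide-and-conquer closest pair; returns the smallest SQUARED distance */
pts = .array~of(.point~new(2, 3), .point~new(12, 30), .point~new(40, 50), -
                .point~new(5, 1), .point~new(12, 10), .point~new(3, 4))
pts~sortWith(.xComparator~new)            /* sort once by x coordinate */
say "smallest squared distance:" closest(pts)
exit

::routine closest
  use arg pts
  n = pts~items
  if n <= 3 then return bruteForce(pts)   /* base case: three points or fewer */
  mid = n % 2                             /* integer division */
  d2 = min(closest(pts~section(1, mid)), closest(pts~section(mid + 1, n - mid)))
  /* gather the points lying within d of the dividing line... */
  midX = pts[mid]~x
  strip = .array~new
  do i = 1 to n
    if (pts[i]~x - midX) ** 2 < d2 then strip~append(pts[i])
  end
  /* ...and scan them in y order; the inner loop bails as soon as the */
  /* y gap alone is too big to beat the best distance found so far    */
  strip~sortWith(.yComparator~new)
  best = d2
  do i = 1 to strip~items - 1
    do j = i + 1 to strip~items
      if (strip[j]~y - strip[i]~y) ** 2 >= best then leave
      best = min(best, dist2(strip[i], strip[j]))
    end
  end
  return best

::routine bruteForce                      /* the O(n^2) method, on tiny sets only */
  use arg pts
  best = 1e99
  do i = 1 to pts~items - 1
    do j = i + 1 to pts~items
      best = min(best, dist2(pts[i], pts[j]))
    end
  end
  return best

::routine dist2                           /* squared euclidean distance */
  use arg p, q
  return (p~x - q~x) ** 2 + (p~y - q~y) ** 2

::class point
::attribute x
::attribute y
::method init
  expose x y
  use arg x, y

::class xComparator subclass comparator
::method compare                          /* negative, zero, or positive, like strcmp */
  use arg left, right
  return left~x - right~x

::class yComparator subclass comparator
::method compare
  use arg left, right
  return left~y - right~y

Comparing squared distances gives the same ordering as comparing real distances, so you get the answer without ever hunting down a square root function.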
This is from an actual problem I solved for a friend of mine in biochem here at Queen's. He was doing the naive method above... in Excel. He let the spreadsheet sit for a week on a set of three thousand or so points before he gave up and killed it. An entire week with his PC running balls to the wall the whole time... and for all I know, it may have covered only a few percent of the search space in that time. When you do things like that, the computation needed to get an answer grows really fast as the problem gets bigger.
Once you get into O(n^2), or even worse O(2^n), levels of complexity, you either figure out how to get an answer that's good enough if not the best, or you don't bother. As soon as you hit exponential or factorial growth, you can easily be talking about heat-death-of-the-universe problems like the travelling salesman problem. Brute force there means checking (n-1)!/2 possible tours: trivial for a handful of cities, already around 10^16 tours at twenty cities, and somewhere past sixty cities the tour count passes the number of atoms in the observable universe. If you get a manager type asking you to solve that problem exactly, you need to show them why it's not happening unless you're a genius mathematician who will make their entire career by solving it.
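If you want to watch the blowup happen, a few lines of REXX will count the tours; fact here is just a helper written for the example:

/* number of distinct tours in an n-city TSP: (n-1)!/2, since a tour */
/* can start at any city and run in either direction                 */
numeric digits 120                        /* enough precision to hold 69! exactly */
do n = 10 to 70 by 20
  say right(n, 2) "cities:" fact(n - 1) / 2 "distinct tours"
end
exit

::routine fact                            /* simple iterative factorial */
  use arg n
  numeric digits 120                      /* NUMERIC settings are per-routine in REXX */
  f = 1
  do i = 2 to n
    f = f * i
  end
  return f

At ten cities that prints 181440; at seventy it's a number ninety-eight digits long.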
--
-------------------------------------------------------------------
* Jack Troughton jake at consultron.ca *
* [link|http://consultron.ca|http://consultron.ca] [link|irc://irc.ecomstation.ca|irc://irc.ecomstation.ca] *
* Kingston Ontario Canada [link|news://news.consultron.ca|news://news.consultron.ca] *
-------------------------------------------------------------------