Sunday, June 23, 2013

TurnOnLamps editorial

This is the other interesting problem in the match. Remember I solved this problem during the match and even wrote a quick explanation already? Interestingly, I knew there was a greedy, but I didn't know if I could prove it, so I planned to write the editorial around the same dynamic programming approach. While I was writing that explanation, I could see through the problem and noticed the idea and proof for the greedy approach. It is not the first time that the process of writing an explanation for a problem has taught me more about the problem than just trying to solve it. For this reason, there are times when I very (VERY) seriously consider writing explanations for problems at the same time as I am participating in the contest.

I should finally be done with this SRM 583 editorial soon. The other problems are all implementation ones (I fear the explanations will end up boring and tedious). Then I have to write the editorial for round 3B. This job is becoming a busy one.

TurnOnLamps

We have a tree. There is a lamp on each of its edges; some are on, some are off. In a single step, you pick two vertices and toggle all the lamps on the simple path between them. There is a list of important edges that must end up with a turned-on lamp (state = 1). Return the minimum number of simple paths you need to pick.

Number of paths and parity

The city is a tree and there are important edges (roads) and unimportant ones. Whenever we pick a pair of vertices, all the lamps in the simple path between them are toggled. An important edge that starts with a turned-on lamp has to belong to an even number of picked simple paths. If instead the important edge starts with a turned-off lamp, then it has to be picked an odd number of times. To simplify, we can say that unimportant edges can be picked an even or odd number of times. So we just need to minimize the number of paths we pick while respecting the parity requirement on the number of times each edge is picked.

At most once

Since the problem depends on the parity of the number of times we use each edge, let us see what happens after we make that decision for each edge. For each edge `(u,v)`, decide how many times it is picked by a path, and then try to think of sets of paths that follow this rule. This is where we find something interesting. Imagine an edge is used two times, by two separate paths:

Since the edge is used twice, its state is not affected at the end. The trick is to notice that those two paths would alter the states in the same way as the following two paths:

And in this case, the edge in question is not visited at all. We can generalize this idea further: it is never necessary to pick the same edge more than once; whenever that happens, we can find an equivalent combination of paths that has the same effect.

Exactly once

We cannot pick an edge more than once, but there are edges that we must pick at least once (important edges that start with state equal to 0), so we have to pick them exactly once. There are other edges that must be picked an even number of times (important edges with initial state equal to 1); since 2 or more picks are not allowed, we must pick these edges zero times - don't pick them at all.

Greedy approach

All important edges with lamp state = 0 must be picked exactly once; we just need to minimize the number of paths that include them. The paths must not include any important edge with lamp state = 1.

There are `O(n^2)` total simple paths in the tree. We can just examine each valid one of them. For each path, count the number of important edges with state = 0 that it contains. Then we pick the path that changes the largest number of important edges and make the change. This approach is optimal because any remaining important edge with lamp state = 0 will have to be changed eventually anyway. Making the changes in the chosen path can only make other paths impossible to pick in the future if those other paths share at least one important edge with state = 0 with the chosen path. If they do, however, we will always be able to pick a smaller path that contains the remnants of the path that can no longer be picked. Thus the decision can only decrease and never increase the total number of used paths.

Finding the paths

The step of finding the simple path between two vertices in the tree is an implementation challenge of its own. Since it is a tree, there is only one simple path and we can use any graph search, like a Depth-first search (DFS). We just need to count the number of important edges afterwards.
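
To make the whole greedy concrete, here is a minimal sketch. The structure and names (pathGain, the adjacency arrays) are mine and assume the input has already been loaded into them; this is not the reference solution. It repeatedly finds the valid path that fixes the most important state-0 edges, toggles its lamps and stops when no valid path fixes anything:

#include <vector>
using namespace std;

const int MAXN = 50;
int n;                            // number of vertices
vector<int> adj[MAXN];            // adjacency lists of the tree
int state[MAXN][MAXN];            // lamp state of edge (u,v): 0 = off, 1 = on
bool important[MAXN][MAXN];       // is edge (u,v) important?

// Fills path with the unique simple path from u to target (the tree guarantees
// there is exactly one); returns true when the target was reached.
bool findPath(int u, int target, int parent, vector<int> &path)
{
    path.push_back(u);
    if (u == target) return true;
    for (int i = 0; i < (int)adj[u].size(); i++) {
        int v = adj[u][i];
        if (v != parent && findPath(v, target, u, path)) return true;
    }
    path.pop_back();
    return false;
}

// Number of important state-0 edges the path a-b would fix,
// or -1 if the path touches an important edge whose lamp is already on.
int pathGain(int a, int b)
{
    vector<int> path;
    findPath(a, b, -1, path);
    int gain = 0;
    for (int i = 0; i + 1 < (int)path.size(); i++) {
        int u = path[i], v = path[i + 1];
        if (important[u][v]) {
            if (state[u][v] == 1) return -1; // would break a satisfied edge
            gain++;
        }
    }
    return gain;
}

int greedy()
{
    int used = 0;
    while (true) {
        // Find the valid path that fixes the most important state-0 edges:
        int best = 0, ba = -1, bb = -1;
        for (int a = 0; a < n; a++) {
            for (int b = a + 1; b < n; b++) {
                int g = pathGain(a, b);
                if (g > best) { best = g; ba = a; bb = b; }
            }
        }
        if (best == 0) break;           // every important lamp is already on
        // Toggle every lamp on the chosen path:
        vector<int> path;
        findPath(ba, bb, -1, path);
        for (int i = 0; i + 1 < (int)path.size(); i++) {
            int u = path[i], v = path[i + 1];
            state[u][v] ^= 1;
            state[v][u] ^= 1;
        }
        used++;
    }
    return used;
}

With at most 50 vertices this is at most n - 1 iterations of an O(n^3) scan, easily fast enough.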

Saturday, June 22, 2013

SRM 583 Editorial preview: RandomPaintingOnABoard

Edit: So I just noticed a flaw with my strategy. Yep, MathJax renders my math perfectly... except for RSS. If you see gibberish in your feed, look closer: it is TeX code. If you want to see the formulas you'll need to read the post on the site, sorry. I will look for workarounds.

Another week, another ultra late editorial ( Just in case, SRM 582 is being written by someone else ).

This was an interesting problem. Very math-focused, and I took it as a chance to test my idea of starting to use MathJax in editorials. I found out about MathJax thanks to blue.boy's blog, and I have liked the idea since then, because frankly, my ASCII formula art is very poor and I think we need TeX to explain some problems. Especially something like this problem.

I struggled with this problem for a week. I actually had the main idea two days ago, thanks to Petr and Google Translate. But something about the Google-translated explanation didn't make any sense: how was he pulling a magic trick to try `2^{12}` subsets instead of `2^{21}`? It was only last night that I noticed there was another constraint on the number of cells...

RandomPaintingOnABoard

There is a board. In each turn a random cell is picked and painted black. Each cell `(i,j)` has probability equal to board[i][j] / (sum of all values in board[][]). The values are between 0 and 9 and each column and each row have at least one non-zero value. Return the expected number of steps before each column and each row have at least one painted cell. Note that the same cell can be picked multiple times.

A note about the constraints

The constraints tell us that the maximum width and height is 21. There is another constraint, though: the maximum total number of cells is 150. This helps us because it means that the smaller dimension will be at most 12: if both dimensions were at least 13, there would be at least `13 * 13 = 169 > 150` cells. Since rows and columns behave in the same way in this problem, we can just assume that the width is the smaller dimension - in case it is not, we can rotate the board. From now on assume that `1 <= w <= 12` and `w <= h <= 21` are the width and height of the board, respectively.

A sum of probabilities

The expected value of the necessary number of steps is:

\[ \begin{align*} E(\text{number of steps}) = \sum_{i=0}^\infty {i P(\text{i steps are needed}) } \end{align*} \]

We can think of many ways to find the probability that i steps are needed. This time it is convenient to think of the probability that the objective (at least one cell in each row/column is painted) is not fulfilled after `i` steps. Then, if the objective was not fulfilled after `i-1` steps but it is fulfilled after `i` steps, exactly `i` steps were needed. The following is a way to develop a formula to solve the problem from this observation:

\[ \begin{align*} P\left(\text{i steps are needed}\right) &= P\left(\text{Not done after i-1 steps}\right) - P\left(\text{Not done after i}\right) \\ E\left(\text{number of steps}\right) &= \sum_{i=0}^\infty {\left(i \left( P\left(\text{Not done after i-1 steps}\right) - P\left(\mbox{Not done after i}\right) \right) \right)} \\ E\left(\text{number of steps}\right) &= \sum_{i=0}^\infty {\left(i P\left(\text{Not done after i-1 steps}\right) - i P\left(\text{Not done after i}\right) \right)} \\ E\left(\text{number of steps}\right) &= \sum_{i=0}^\infty {i P\left(\text{Not done after i-1 steps}\right)} - \sum_{i=0}^\infty {i P\left(\text{Not done after i steps}\right)} \\ E\left(\text{number of steps}\right) &= 0 * P\left(\text{Not done after -1 steps}\right) + \sum_{i=1}^\infty {i P\left(\text{Not done after i-1 steps}\right)} \\ &- \sum_{i=1}^\infty {\left(i - 1\right) P\left(\text{Not done after i-1 steps}\right)} \\ E\left(\text{number of steps}\right) &= \sum_{i=1}^\infty {\left(i P\left(\text{Not done after i-1 steps}\right) - \left(i-1\right) P\left(\text{Not done after i-1 steps}\right) \right)} \\ E\left(\text{number of steps}\right) &= \sum_{i=1}^\infty {P\left(\text{Not done after i-1 steps}\right)} \\ E\left(\text{number of steps}\right) &= P\left(\text{Not done after 0 steps}\right) + P\left(\text{Not done after 1 step}\right) + P\left(\text{Not done after 2 steps}\right) + ... \end{align*} \]

With some changes, we have:

\[ \begin{equation*} \tag{1} E\left(\text{number of steps}\right) = \sum_{i=0}^\infty {P\left(\text{Not done after i steps}\right)} \\ \end{equation*} \]

Let us try to find probabilities that the objective is not done after `i` steps.

Inclusion-exclusion

We can see that probability as the probability that, after `i` steps, at least one row or column is not painted.

It is possible to calculate the probability that each single row is painted or not after `i` steps. Let `s` be the probability that we pick the row (or column) in a single step. The probability that we never pick it in `i` steps is ` ( 1 - s )^i `. We can add these probabilities together for each column and row. However, we would be double-counting any situation in which there are two rows, two columns, or a row and a column that are not painted after `i` steps. We can calculate, for each pair of columns or rows, the probability `t` that either of them is picked in a single step. The value ` (1-t)^i ` is then the probability that neither of the two columns/rows/(column and row) is picked in `i` steps, so we can subtract these terms from the original sum. But we would be subtracting some cases more times than necessary - the cases in which there are 3 or more columns or rows that are not painted. In short, we can use the inclusion-exclusion principle:

\[ \begin{align*} P\left(\text{Not done after } i \text{ steps}\right) &= \sum_{ x \in \left( \text{rows } \cup \text{ columns} \right) } { \left(1 - P(x) \right)^i } \\ &- \sum_{ \left\{x_0,x_1\right\} \subseteq \left( \text{rows } \cup \text{ columns} \right) } { \left(1 - P\left\{x_0,x_1\right\} \right)^i } \\ &+ \sum_{ \left\{x_0,x_1,x_2\right\} \subseteq \left( \text{rows } \cup \text{ columns} \right) } { \left(1 - P\left\{x_0,x_1,x_2\right\} \right)^i } \\ &- \sum_{ \left\{x_0,x_1,x_2,x_3\right\} \subseteq \left( \text{rows } \cup \text{ columns} \right) } { \left(1 - P\left\{x_0,x_1,x_2,x_3\right\} \right)^i } \\ &+ ... \\ \\ \tag{2} P\left(\text{Not done after } i \text{ steps}\right) &= \sum_{ s \subseteq \left( \text{rows } \cup \text{ columns} \right) , \left|s\right| \text{ is odd} } \left( 1 - P\left(s\right) \right) ^ i \\ &- \sum_{ s \subseteq \left( \text{rows } \cup \text{ columns} \right) , \left|s\right| \text{ is even} } \left( 1 - P\left(s\right) \right) ^ i \end{align*} \]

For each non-empty subset \(s\) of rows and columns, we calculate \( P\left(s\right) \), the probability that at least one of its rows and columns is picked in a single step. Then we add or subtract \( \left( 1 - P\left(s\right) \right)^i \) to the total, depending on the parity of the number of elements in the set.

Putting it together

Combining equations \( (1) \) and \( (2) \) together:

\[ \begin{align*} E\left(\text{number of steps}\right) &= \sum_{i=0}^\infty { \left( \sum_{\text{odd }s } {\left( 1 - P\left(s\right) \right) ^ i} - \sum_{\text{even }s}{\left( 1 - P\left(s\right) \right) ^ i} \right)} \\ E\left(\text{number of steps}\right) &= \sum_{\text{odd }s } {\left( \sum_{i=0}^\infty \left( 1 - P\left(s\right) \right) ^ i \right) } - \sum_{\text{even }s}{\left( \sum_{i=0}^\infty \left( 1 - P\left(s\right) \right) ^ i \right)} \end{align*} \]

The last trick we used was to swap the order of the two summations. For each set \(s\) of columns and rows, we just need to calculate \( \sum_{i=0}^\infty \left( 1 - P\left(s\right) \right) ^ i \). Depending on the number of elements in the set, we add or subtract the resulting value to/from the total.

Let us find \( \left( 1 - P\left(s\right) \right) \), that is, the probability that none of the cells that belong to a column or row in the set is picked. Let \(x\) be the sum of the values outside the set; the probability is then \( x / T \), where \(T\) is the total sum of all the cells in the board. The sum of all \( \left( x / T \right) ^ i \) is the sum of a geometric progression. Therefore:

\[ \sum_{i=0}^\infty \left( \frac{x}{T} \right)^i = \frac{T}{T - x} \]

For each non-empty set of columns and rows with a sum of cells outside it equal to \( x \): if the number of elements of the set is odd, add \( \frac{T}{T - x} \), else subtract \( \frac{T}{T - x} \). That is our result. Note that when \( x = T \) the formula is invalid, but this would mean that no cell of the set can ever be picked, which cannot happen: every row and column has at least one non-zero cell, so \( x < T \) for every non-empty set.

Count the sets

Since the value that is added/subtracted to the result depends only on \( x \), the sum of the cells outside the set, we can just count the number of even and odd sets for each value of \( x \). In fact, the contribution for each \( x \) is just ((number of odd sets for \(x\)) - (number of even sets for \(x\))) multiplied by \( \frac{T}{T - x} \). So we only need to find the difference between the number of odd and even sets for each \( x \).
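
In other words, writing \( c_{\text{odd}}(x) \) and \( c_{\text{even}}(x) \) for those two counts (a notation used only here), the final answer is:

\[ \begin{equation*} E\left(\text{number of steps}\right) = \sum_{x=0}^{T-1} { \left( c_{\text{odd}}\left(x\right) - c_{\text{even}}\left(x\right) \right) \frac{T}{T-x} } \end{equation*} \]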

The sum \( (w + h) \) can be close to 30, so the number of subsets of rows and columns, \( 2^{w+h} \), can be insanely high. Remember though that \(w \leq 12\), so the number of sets of columns is not going to be very high. Assume that the set of columns included in \( s \) is already fixed, so we only need to decide which rows to add. For each row \( i \) that we decide not to include, we already know how much it contributes to the sum of the cells outside the set - it is equal to the sum of the cells in the \( i \)-th row that do not belong to any of the picked columns. We can pre-calculate this sum after picking the set of columns.

Dynamic programming

Let mask be the fixed subset of columns. We precalculate \( b_i \), for each row \( i \), as the sum of its cells not included in any of the picked columns. We want to find, for every value \(x\) of the sum of cells outside the set, the difference (number of odd sets) - (number of even sets) among all sets of columns and rows that already include mask. Let \( f(i, x) \) be a function that solves this problem for the first \(i\) rows. If the \(i\)-th row is added, then \(x\) stays the same and the parity of the number of elements in the set changes. If the \(i\)-th row is not added, \(x\) increases by \(b_i\) and the parity does not change. From this we can build a dynamic programming solution: for each \( f(i, x) \), subtract \(f(i,x)\) from \(f(i+1,x)\) and add \( f(i,x) \) to \( f(i+1,x + b_i ) \). The base case is \( f(0,0) \), which is equal to -1 (if the number of columns in mask is even) or 1 (if it is odd). After running this dynamic programming approach, we accumulate the values we have found for each \(x\) given the mask.

For a complexity analysis, we repeat a \( O\left( h \times T \right) \) dynamic programming approach for \( O\left(2 ^ w \right) \) subsets of columns. In total, \( O\left(2^w \times h \times T \right) \). Remember that \( w \leq 12 \) and by constraints \(T \leq 1350\), so this solution can run comfortably under 2 seconds in TopCoder's servers.

Solution

// Assume we already processed the prob String[], we made an
// int[][] a, with the translated cell values and we rotated
// it in case w > h, so w is now guaranteed to be at most 12:
public double expectedScoresSmallWidth(int[][] a){
    int h = a.length, w = a[0].length, T = 0;

    for (int i=0; i<w; i++) {
        for(int j=0; j<h; j++) {
            T += a[j][i];
        }
    }
    // cnt[x] : Difference between the number of odd sets and even sets that
    // have a sum of x for the outside cells.
    long cnt[] = new long[T+1];

    for (int mask=0; mask<(1<<w); mask++) {
        int sign = -1;
        for (int i=0;i<w;i++) {
            if ( (mask&(1<<i)) != 0) {
                sign = -sign;
            }
        }
        // base case: (if mask has an even number of elements -1, else 1)
        long f[][] = new long[h+1][T+1];
        f[0][0] = sign;

        // precalculate b[i]: sum of the cells of row i outside of the mask columns:
        int b[] = new int[h];
        for (int i=0; i < h; i++) {
            for(int j=0; j < w; j++) {
                if ((mask&(1<<j)) == 0) {
                    b[i] += a[i][j];
                }
            }
        }

        for(int i=0; i<h; i++) for(int x=0; x<=T; x++) {
            f[i+1][x] -= f[i][x];
            if (x + b[i] <= T) {
                f[i+1][x + b[i]] += f[i][x];
            }
        }

        for (int x=0; x < T; x++) {
            cnt[x] += f[h][x];
        }
    }

    double ans = 0.0;
    for (int x=0; x<T; x++) {
        ans += (double)cnt[x] * T / (T-x);
    }
    return ans;
}

Wednesday, June 19, 2013

How to fail in TopCoder Marathon round 3

You are given N circles. Just because the problem setter is evil, the distribution for N is not uniform: the numbers go from 50 to 500, with a bias towards the bottom. Each circle has a random center inside the [(0,0), (1,1)] square, a random (rather small) radius and a random mass m[i]. They are, of course, going to overlap initially. Move the circles to any other positions in such a way that they no longer overlap and the total work you spend is as little as possible. The total work is equal to the sum of m[i]*(distance between original and final position) for each i.

So the objective is to waste 2 weeks of your life trying to come up with a good solution to this problem that runs in 10 seconds. If you get a good rank (probably at least 44-th) you win a t-shirt - the kind of t-shirt you couldn't win in the algorithm track of the tournament. If you do very well, and that means top 4, you win a free trip. I have no idea what is necessary to ever be in the top 4 of a real Marathon match. I suspect witchcraft is somehow related to it, though.

First week

After a very poor performance in round 2, I decided to take round 3 very seriously and start working on it from the first day of the match. It was a great plan, and I think I could have gotten a t-shirt if I had gone through with it. However, the week this match started was quite a difficult week. I was busy with an editorial, and then I became busy working on problem setting Block3Checkers - the problem that was used in the algorithm round. I actually barely noticed the marathon started the Saturday after the problem was finished. But then I still had to work on the *new* editorial...

I could only start working on this problem last Tuesday. Almost all of the first week was not productive.

First version

I dedicate the first version of every marathon to the most trivial approach I can think of. The main purpose of the first version is to test the test environment. I have to tweak plenty of things. I have to adapt the visualizer provided with the problem to my ends. Adapt the testing to the thing that runs 500 tests for me. Make the thing that grades and ranks solutions, etc.

In this first version, I pick an arbitrary circle as the first circle. Then, for each of the other circles, I try to put it as close as possible to the first circle, moving along the same angle as the one between the first circle and the new one, using a binary search for the distance.

In the image, small red points are the original locations of each circle. Darker circles have larger m[i].

Even this solution was non-trivial: large Ns were timing out. It turns out that large cases don't have enough time to try all circles as the best initial circle. So I had to start with the larger circles and break when the time runs out.

What's more, there was a bug with this solution, so I had to submit it twice. The outcome was position 169, very low. I knew I needed to aim for the top 40 if I wanted a t-shirt.

Third version

So I already knew this problem was going to be complex. Too much geometry and too many factors to consider. One of them was: how exactly do we decide where to place each circle? So I thought of something. I created a grid of sqrt(N) x sqrt(N) circle centers that were far enough apart from each other to ensure that no circles overlap if put on those centers. Then I ran a bipartite matching between the circles and the new centers, picking the matching with the best total work.
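
Roughly, the grid generation looked like this (a simplified sketch with made-up names, not my exact code): the key property is that a spacing of twice the maximum radius guarantees that no two circles can overlap, no matter how the matching assigns them.

#include <vector>
#include <cmath>
using namespace std;

struct Point { double x, y; };

// Builds a side x side grid of candidate centers around (0.5, 0.5), spaced
// 2*maxRadius apart, so any assignment of circles to centers is overlap-free.
vector<Point> makeGrid(int n, double maxRadius)
{
    int side = (int)ceil(sqrt((double)n));   // side*side >= n candidate centers
    double step = 2.0 * maxRadius;
    vector<Point> centers;
    for (int i = 0; i < side; i++) {
        for (int j = 0; j < side; j++) {
            Point p;
            p.x = 0.5 + (i - (side - 1) / 2.0) * step;
            p.y = 0.5 + (j - (side - 1) / 2.0) * step;
            centers.push_back(p);
        }
    }
    return centers;
}

The min-cost bipartite matching then assigns each circle to one of these centers so that the total work is as small as possible.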

After the positions are decided, of course there will be tons of empty space between the circles. Do not despair, for I am adding a function that will try the circles in order and move each circle as close as possible towards its center (in that direction), using binary search.

Bipartite matching is too slow for large cases. So large cases still use the old approach.

It didn't really improve my score that much. Note that the circles in the two images are not at the same scale. The green square is the square from corner (0,0) to (1,1); all centers are picked inside it.

Fourth version

It was the beginning of a new day, and I had a silly idea to try to improve the score. The original grid of empty spaces in which to place the circles looks like this:

As you can see, that's not the most efficient use of all the space. So how about adding new, smaller spaces between the gaps?

It did improve my results. One of the reasons is that the radii are not uniformly distributed either, so there is a good quantity of small radii as well as very large ones. This took me to rank 146, still too low.

Fifth version

I needed a good idea, and I needed it quick. However, all I could think of were lower-level optimizations. I made a new min-cost max-flow solver that was a bit faster. And indeed it was: it allowed me to use the matching strategy for larger N, but not much else. There was a mild improvement.

Sixth version

Bipartite matching is not the only kind of matching. We could always use a greedy matching. I decided to use greedy matching for larger values of N, finding the closest position for the circles with large m[i] first.

I also noticed that the whole binary search I was using to tighten the circles was slow and not very precise. Then it struck me: I can use equations! So you have a circle at some position (x1,y1) away from its original position (x,y), and you want to find the closest position to (x,y), along the same angle, that does not overlap with other circles. This means that you want a position that is tangent to some other circle. For each other circle, it is possible to find the positions (along the picked angle, if they exist) that would make the circle tangent to it. Then you need to pick one of these tangents that doesn't overlap with other circles...
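
The equation itself boils down to a quadratic. The following is a sketch in my own notation (not the exact code): the new circle of radius r slides along the ray p(t) = o + t*d, where o is its original position and d a unit direction vector, and it is tangent to a placed circle (center c, radius R) exactly when |p(t) - c| = r + R.

#include <cmath>

struct Vec { double x, y; };

// Solves |o + t*d - c| = r + R for t (d must be a unit vector). If the ray
// never gets that close, returns false; otherwise t1 <= t2 are the two
// tangent parameters and every t strictly between them overlaps the circle.
bool tangentParams(Vec o, Vec d, double r, Vec c, double R, double &t1, double &t2)
{
    double ex = o.x - c.x, ey = o.y - c.y;          // vector o - c
    double b = d.x * ex + d.y * ey;                 // d . (o - c)
    double cc = ex * ex + ey * ey - (r + R) * (r + R);
    double disc = b * b - cc;                       // discriminant of t^2 + 2bt + cc = 0
    if (disc < 0) return false;
    double s = std::sqrt(disc);
    t1 = -b - s;
    t2 = -b + s;
    return true;
}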

This improved things, but the equation is the important part; it opened some doors...

Seventh version

Another day passed. I kept thinking that the matching idea is probably not the way to go. It is very slow, and at the end, it creates gaps that you then waste tons of time "fixing". A better approach would need to be fast (so I could have some sort of iterative improvement to a solution).

What I did like was the equation stuff from the previous solution. Then I somehow thought of something: an approach sort of similar to the very first one, but smarter. First decide an order in which to add the circles - let us go with heaviest first. For each circle, try to place it as close as possible to its original location without overlaps. To reuse the equation, in each step we pick an angle (from the center), one of 360 candidates. Using the angle, it is possible to find the best position for the circle along that angle.

This approach was slow at first. For each of N circles, we needed to repeat the same operation 360 times. The operation itself was slow because it needed to calculate positions using all of the previous circles. For each position it found, it needed to use all the previous circles again to avoid overlap... So slow that I needed to make it use 180 angles for large values of N, making it less accurate.

This was still the most significant improvement I made. First time on the first page of the standings. Top 100. But not good enough. The good thing about this approach is that it has plenty of room for improvement. How to pick the order? Picking the order is very friendly with iterative randomized searches, which raises the question of how to make the process - given an order - faster. How to pick the angle, for example? How to avoid doing that much work using all of the circles and instead only use the relevant ones? Etc.

Eighth version

I took a break for three days, because I was very late in writing the editorial for the TCO Algorithm Round 3A. My chronic headache (which has just entered its 9-th consecutive week! Congratulations!) was also acting up. So I couldn't do much for three days. But I was tweaking the solution and doing this and that trying to improve. On the last day, I put things together for a new submission. Surprisingly enough, all the small things I did at random times during the three days really did improve my score.

I tried to do Hill Climbing for picking the order. In each iteration, pick two random indexes of the order array, swap them and calculate the minimum work when using that order; if the work is an improvement, replace your current order array with the new one, so that in the next iteration that is your base. There are two problems with Hill Climbing. One is that my approach was too slow (barely executing once in large cases); the other is that Hill Climbing is not very good. It is prone to local minima and getting stuck. To deal with the speed issue a bit, I changed the number of angles to 180 (small Ns) and 90 (large Ns), so that even large cases would have 4 or 5 iterations to try a modification of the order.
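
In sketch form, the hill climbing looked roughly like this. evaluate() and timeUp() are stand-ins for the placement routine and the time-limit check; they are assumptions, not real functions from my code.

#include <vector>
#include <cstdlib>
#include <algorithm>
using namespace std;

double evaluate(const vector<int> &order);  // assumed: places circles in this order, returns total work
bool timeUp();                              // assumed: true once the time budget is spent

vector<int> hillClimb(vector<int> order)
{
    double best = evaluate(order);
    while (!timeUp()) {
        int i = rand() % order.size();
        int j = rand() % order.size();
        swap(order[i], order[j]);           // try a random transposition of the order
        double cur = evaluate(order);
        if (cur < best) {
            best = cur;                     // improvement: keep the new order
        } else {
            swap(order[i], order[j]);       // no improvement: undo the swap
        }
    }
    return order;
}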

The initial order could also use some fixes. m[i] is a very good marker, but there are other things. For example, you could argue that it is more convenient to add circles that are close to the center (0.5, 0.5) first, so that there are fewer overlaps. But the weight is still important, so you need some sort of heuristic to combine both ideas. I settled on: sqrt(m[i])*m[i] * (1.5 - (distance to 0.5,0.5) ). Why, you ask? I have no idea, but it seemed to do better in local tests than the other things I thought of...

I was able to optimize the cost of the function that adds the circles. Because it tries angles in all directions, all already-placed circles are relevant in that step, but only a few circles are relevant to a given angle. In an initial step, I process all of the already-placed circles and find the lower and upper bound for an angle (starting at the original position of the new circle) whose ray intersects with the circle. This is easily done by noticing that there is a right triangle in there, with the upper and lower angles, and that we know two of the distances, so it takes just a call to asin.
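
The bound itself is essentially one asin call. This is a sketch with my own names: seen from the original position o of the new circle of radius r, a placed circle (center c, radius R) can only interfere with rays whose angle is within asin((R + r) / |c - o|) of the direction towards c.

#include <cmath>

const double PI = acos(-1.0);

// Returns [lo, hi], the range of ray angles (radians) that can be affected by
// the placed circle; the whole circle of angles when o is too close to it.
void angularWindow(double ox, double oy, double r,
                   double cx, double cy, double R,
                   double &lo, double &hi)
{
    double dx = cx - ox, dy = cy - oy;
    double dist = sqrt(dx * dx + dy * dy);
    double mid = atan2(dy, dx);              // direction towards the placed circle
    if (dist <= R + r) {                     // every direction may be affected
        lo = -PI;
        hi = PI;
        return;
    }
    double halfWidth = asin((R + r) / dist); // the right triangle mentioned above
    lo = mid - halfWidth;
    hi = mid + halfWidth;
}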

This was good enough to reach the top 50, back on June 18-th... (Everybody keeps improving and your rank keeps getting worse if you don't improve.)

Ninth version

A new day. I forgot to mention that the reason I focused so much on this approach is that I could show myself that the order in which you add the circles can, alone, greatly improve my score. I tried with N=8 and all permutations. It was clear that if I had a good way to pick the order in which to add the circles, I could even beat the first place. But the problem is how to find this order.

A good idea is to make sure the initial order is as good as possible. The better the initial order, the fewer iterations you need before finding a GREAT order. In theory, at least.

So I changed the order. I multiplied the previous formula by: m[i] / (sum of all m[i]s that overlap with the circle). The idea is that if a circle displaces plenty of heavy circles, it is probably worse as an initial pick than a light circle that does not displace heavy circles. This is all out of my mind and completely made up, but these crazy formulas worked in tests. This is the reason I have 500 tests, and a tester system that runs one on each core... Before finding a good formula I try tons and tons of bad ones. However, once you reach the hill climbing phase of a problem, testing becomes very tedious. Change a small bit of code, wait 15 minutes for a result. It is frustrating to wait, especially near the end of the match...

Tenth version!

So I worked on ways to increase the number of iterations. The best I could find was some logic that removed the need to check, for each of the possible positions, whether it overlaps or not. Back to the tangents approach from the sixth version: the two tangents delimit an interval of distances that we are unable to use. So for each relevant circle we have one of these intervals. We can use a line-sweep algorithm: just consider each start of an interval as an open bracket event, and each end of an interval as a close bracket event. After sorting the start/end points, it is easy to find the smallest distance that is not inside any bracket...
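
A sketch of that sweep (simplified, with my own names; ties at exact tangency would need a small epsilon in a real implementation): every relevant circle contributes a blocked interval [t1, t2] of distances along the ray, and we want the smallest non-negative distance not covered by any of them.

#include <vector>
#include <algorithm>
using namespace std;

double smallestFreeDistance(const vector< pair<double,double> > &blocked)
{
    // +1 event when an interval opens, -1 event when it closes.
    vector< pair<double,int> > events;
    for (int i = 0; i < (int)blocked.size(); i++) {
        events.push_back(make_pair(blocked[i].first, +1));
        events.push_back(make_pair(blocked[i].second, -1));
    }
    sort(events.begin(), events.end());
    double candidate = 0.0;   // distance 0 (stay at the original position) if nothing blocks it
    int open = 0;
    for (int i = 0; i < (int)events.size(); i++) {
        if (events[i].second == +1) {
            if (open == 0 && events[i].first > candidate) {
                return candidate;         // there is a gap before this interval opens
            }
            open++;
        } else {
            open--;
            if (open == 0) {
                candidate = max(candidate, events[i].first);  // free right after it closes
            }
        }
    }
    return candidate;                     // free after the last interval
}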

So I was finally having plenty of iterations. More than a thousand for small cases, and tens of iterations for the large cases. Remember I said Hill Climbing is not that great because of local minima? That is most noticeable when you have plenty of iterations. So I tried to implement simulated annealing. And I failed. I thought I wasn't failing, because I was getting some good results after adding the annealing. But I was a victim of confirmation bias. A couple of versions later, I noticed that the only reason my annealing was working at all was that it wasn't doing anything at all. Whenever I fixed the formulas to make the annealing work, and actually keep a less-than-optimal solution in memory, results became worse. All the annealing was doing was add instability to my test results, which made them look better or worse after small changes, but it wasn't really so. I really need to learn SA...

11-th version

It was last night. My sleep time was approaching and I had nothing. I had reached that moment in Marathons in which you cannot improve anything and can only manage to improvise and do small tweaks. I had just discovered that the SA was a total failure, so I ended up disabling it completely and decided to stick to Hill Climbing instead.

I needed a good idea. 11:00 PM was approaching, but then I had a good idea! The angle searching! I can improve the precision of the angle searching. First try angles in increments of 12 (0, 12, 24, ...), then pick the best angle. The trick is that you then try the 22 angles next to that best multiple of 12 (11 on each side), so in a way you get precision similar to picking the angles in increments of 1 (in reality this misses a few cases in which the real minimum was well hidden between two multiples of 12 that weren't better than a local minimum). The benefit is that this is faster (it needs to check 52 angles in total), but it is more accurate than the previous approaches. So this improves the solution in two ways: accuracy, and more iterations = a better chance to find a good order.
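
A sketch of the two-stage search; tryAngle() is a stand-in for the routine that evaluates placing the current circle along a given angle (in degrees) and returns its cost, it is not a real function from my code.

double tryAngle(int angleDegrees);   // assumed: evaluates the placement along this angle

double bestPlacement()
{
    // Stage 1: every multiple of 12 degrees (30 candidates).
    int bestAngle = 0;
    double best = tryAngle(0);
    for (int a = 12; a < 360; a += 12) {
        double cur = tryAngle(a);
        if (cur < best) { best = cur; bestAngle = a; }
    }
    // Stage 2: the 22 angles around the winner, 11 on each side.
    for (int d = -11; d <= 11; d++) {
        if (d == 0) continue;
        double cur = tryAngle(((bestAngle + d) % 360 + 360) % 360);
        if (cur < best) best = cur;
    }
    return best;   // 30 + 22 = 52 calls to tryAngle in total
}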

That was the theory. In practice I was not having success making it work. The second phase was giving way wrong results (tons of circles overlapping together). Something was definitely not right with this. Was the theory wrong? Or was there a small bug?

I looked for the bug, but I couldn't find it. I knew that today I was not going to be able to spend much time on the problem during the morning. It is also not good to make major changes during the last hours, because you might end up making a serious mistake that you can't find and fix in time. So that night was probably my last chance to try a significant improvement...

11:00 PM passed, but I still didn't want to give up. Surely one day of not sleeping at 11:00 PM won't delay my headache recovery much longer than it already is! (And after all, the whole idea of keeping a strict sleep schedule seems based on a small study and doesn't seem to have that much success. Or does it? I am noticing some improvement, even though my days with headache have definitely not ended...) Nevertheless, I was still looking for the bug and it was already 12:00 AM...

I kept looking, and decided to basically compare the two phases variable by variable until I found a discrepancy. Indeed, some variables weren't working well. I slowly got to the root of it... It was near 1:00 AM when I finally saw it: when converting the angle to radians I had int a = angle * PI / 180. I wonder if my readers have noticed the blatant mistake in this... When I fixed it and ran tests, it was beautiful. Finally a significant change! It wasn't so significant, per se, but enough to climb to an 86XXXX score and back to the top 50. It was a symbolic victory; I knew that today every coder was going to improve and my top 50 was going to be short-lived...

12-th and 13-th versions

More tweaks here and there. I tried to think of good ideas. I had little time in the morning, but eventually it occurred to me to try a dirty trick. I now use the second iteration to try a new formula for the initial ordering. I had been trying this new formula (giving a small advantage to things with small radii) but it was having mixed results - better results in some cases, worse in others. Why not combine both? I planned this as my last submission, and then went to college. Results were not that bad.

I came back from college earlier than I planned; there were 15 minutes before 12:00 PM, and I decided to try another dirty trick! The third iteration tried the simple ordering by m[i]. It seemed to be working fine in local tests, but I didn't have time to finish testing all 500 cases, so I decided to risk it and submit...

Surprise! It turns out that my 12-th submission was done at 10:15 AM, so I actually had no way to make a new submission before 12:00 PM. That's a bummer. ... wait... Surprise! The match ends at 13:00, not at 12:00 (what?), so I had a whole hour to try more things. I waited for the local tests for the new dirty trick to end. A small improvement with less variance! So I decided that I could always do with a new dirty trick. Went to lunch while testing it. Just blatant m[i]/r[i].

I came back around 12:50, and the new dirty trick did great in the local tests. When combined, the two tricks would improve my score by around 10^4 points, not bad. So I decided to submit it. ... Surprise! I can't submit because I made a full submission at 12:35... What? I really didn't remember making that submission. I think I intended it to be an example submission to make sure I didn't have any new time outs, but I accidentally made it a full submission. So, only one dirty trick could be used :(.

Preliminary results

Position 57-th; I have to improve 13 positions (not impossible, but unlikely) in the final tests if I want a t-shirt. Ouch.

Did I mention that because I went to sleep at 1:00 AM last night, my headache has gotten very bad today?

But it was a great and fun match. I think I really need to learn Simulated Annealing though.

Tuesday, June 18, 2013

SRM 583: Last minute (and explanations)

So, I decided on a new strategy. Not opening the 250-points (Easy) problem is surely going to be interesting. Today's match seemed intentionally designed to make that strategy fail. The Medium and the Easy were both solved by plenty of coders, and I lost rating even after solving the medium.

Delay

I had a delay. I was using the time between 11:00 AM and 11:10 AM to try to improve my solution for the Marathon Round 3 problem. Near 11:10, my most recent implementation wasn't working as intended, so I kept insisting even after 11:10 AM; I didn't want to start the SRM without removing this bug from my mind. I think I started the match seven minutes late.

950 points

I opened the hard problem. I couldn't do much, gave up and moved to the 500. I guess that one is going to take me a while to solve for the editorial.

500 points - The one with trees

Lately TopCoder is taking ages before the results and problem statements are published.

In short, you have a tree of at most 50 vertices. You have to pick the minimum number of simple paths between its vertices. Some edges can be picked any number of times, some edges must be picked an even number of times and some edges must be picked an odd number of times.

So, let us try to decide the number of times each edge will be used and then pick the minimum cost to do that...

The root of the tree has some child nodes, each connected by an edge. Let us decide to use its edge #0 in a total of i simple paths. Assume that this pick is valid (even or odd as needed). Interesting things happen afterwards:

  • Assume that we then solve for child #0; we still need to decide what to do with edges 1, 2, etc. of the root. It is exactly a new version of the same problem; the only difference is that we have used edge #0 in i paths in "the past", so i edges are [available].
  • Child #0 of the root now has a parent edge that is used i times. We can consider this scenario a new version of the problem; the only difference is that the root has i [available] edges.
  • We call these edges "available" because if we decide to use a later edge in a path, we can use the root to connect the two edges so they take part in the same simple path (and save a cost unit).

So the problem is split in two sub-versions of the same problem.

Let us solve the general version: f(x, paths, e) returns the minimum cost to solve the sub-tree rooted at x. We are only allowed to use edges indexed greater than or equal to e. We know how many times the "past" edges (including the parent edge) were used, and we have paths "available" edges.

Let us pick the number of times i we will use edge e in a simple path (it has to have a valid parity). This could increase the cost: if i <= paths it doesn't, because we can just reuse the available paths without increasing the cost. If i > paths, then we have created (i - paths) new paths. The number of available paths for the next edge changes; it becomes abs(i - paths).

  • We need to solve the sub-problem for the child connected by edge e: f(child, i, 0) (Because its parent edge is used i times).
  • We need to solve the sub-problem of the remaining edges of the root: f(x, new value of paths, e + 1 ).

The sum between the cost and the two new calls to f() is the result. We can implement this with memoization.

const int MAX_VERTICES = 50;
const int MAX_EDGES = 49;

const int MAX_STEPS = MAX_EDGES; //we can just pick each important edge once.

struct TurnOnLamps
{
    int degree[MAX_VERTICES];
    int g[MAX_VERTICES][MAX_EDGES];
    bool can[MAX_VERTICES][MAX_EDGES][2];

    int mem[MAX_VERTICES][ MAX_STEPS + 1][MAX_EDGES+1];

    int rec(int x, int paths, int e)
    {
        int & res = mem[x][paths][e];
        if (res == -1) {
            if (e == degree[x]) {
                //base case. All the sub-trees were decided, well-done!
                // we can just decide not to use the available roads
                res = 0;
            } else {
                res = MAX_STEPS * 2; //Arbitrarily large value
                // how many paths use edge e?
                for (int i = 0; i <= MAX_STEPS; i++) {
                    // Use this edge in i paths:
                    if (can[x][e][ abs(i) % 2]) { //Valid parity for the number of paths
                        int npaths = abs( paths - i );
                        int cost = max( i - paths , 0 );
                        int p = rec(g[x][e], i , 0 );
                        int q = rec(x , npaths, e+1);
                        res = std::min(res, cost + p + q );
                    }
                }
            }
        }
        return res;
    }

    int minimize(vector <int> roads, string initState, string isImportant)
    {
        memset(mem, -1, sizeof(mem));
        int n = roads.size() + 1;
        for (int i=0; i<n; i++) {
            degree[i] = 0;
        }
        for (int i=0; i<n-1; i++) {
            //between roads[i] and i+1
            int u = roads[i], v = i + 1;
            int x = degree[u]++;
            g[u][x] = v;
            can[u][x][0] = can[u][x][1] = true;
            if (isImportant[i] == '1') {
                can[u][x][ initState[i]- '0' ] = false;
            }
        }
        return rec(0, 0, 0);
    }
};

During the match, it took me a while to think of a dynamic programming approach. Then I started to code it, and it was already a bit late, I think only 15 minutes were left before the end of the match.

When I finished coding and debugging that approach, it turned out that it passed the example tests but was too slow for larger test cases. I was unnecessarily nesting a second dynamic programming inside the main one. I noticed that I could merge the two ideas into a single dynamic programming. But there were very few minutes left, at most 8, if I remember correctly.

I rushed and did the best I could to implement the fix quickly. I was able to compile in the last second and submitted. I think that I literally did it at the last second.

The easy problem

I opened this one after the match. It turns out it was just a simple shortest paths problem! The edges are not even weighted! If you use Floyd-Warshall this problem is ridiculously easy.

int minTimes(vector <int> range, int startCity, int endCity)
{
    int n = range.size();
    int dist[n][n];
    for (int i=0; i<n; i++) {
        for (int j=0; j<n; j++) {
            if ( std::min(abs(j - i), std::min( j+n - i, i+n - j) ) <= range[i] ) {
                dist[i][j] = 1;
            } else {
                dist[i][j] = 1000000000;
            }
        }
    }
    for (int k=0; k<n; k++) {
        for (int i=0; i<n; i++) {
            for (int j=0; j<n; j++) {
                dist[i][j] = std::min(dist[i][j], dist[i][k] + dist[k][j] );
            }
        }
    }
    return dist[startCity][endCity];
}

Conclusion

I really think the easy problem was too easy this time. The other problems were fine. Although it seems that there was a greedy solution for the 500 points, I think it is still an interesting problem.

The strategy is doing fine. If I had used my old strategy, I would have switched to the easy problem before the last 10 minutes, bored myself trying to implement Floyd-Warshall as fast as possible, and then missed the chance to prove that I could have solved the medium problem. It was a lot more exciting and fun to solve the medium this time.

Change of plans

If you remember the post Topcoder SRM 560-562: You people have stood in my way long enough. I'm going to clown college!, you would know that's the moment in which I decided to start using a new contest strategy: to open the "Easy" problem only when there are 10 minutes left in the match.

As expected, that strategy is too risky. I had very bad matches afterwards. A good example is the last one: SRM 582.

In SRM 582, I opened the division 1 hard; it seemed a bit complicated. Then I opened the div1 medium. I spent the remaining tens of minutes trying to code an easy (but super slow) dynamic programming approach, in the hope that maybe, once that approach was coded and understood, I could optimize it to O(N^2). (This is an approach that has worked for me in the past.) But I kept having issues with the example cases, so something was probably wrong, either in the code or in my understanding of the problem.

I opened the 250-points problem. I only had 10 minutes. I realized that it could be solved with a binary search. The matching afterwards could be done with some greedy strategy, but I had the *brilliant* idea to use min-cost flow to solve it. Unfortunately, it seems I spent more time than I planned pasting and tweaking my min-cost flow solver (it didn't help that KawigiEdit has the habit of turning slow when there is a lot of code).

At the end, this last SRM taught me a lesson. The strategy is too risky. I need a change.

This is the reason that, from now on, I will change my strategy: I will no longer open the easy problem during the coding phase. I will only open the medium and hard problems and dedicate all the coding time to attempting to solve at least one of them.


I bet this new strategy will be much less risky!
(Wikimedia Commons)

Saturday, June 15, 2013

Block3Checkers editorial preview

Remember when I didn't take more than a week to write an editorial? I miss those times too. I am very sorry about this. Here is the first preview:

http://apps.topcoder.com/wiki/display/tc/TCO+2013+Round+3A#Block3Checkers will contain the updated version of this explanation. It might eventually be improved and corrected. The following is the initial version:

Block3Checkers

There are three checkers that belong to Alice on an n x n grid board. There may also be some neutral checkers in some of the cells. Bob must place some checkers on the board in such a way that it is impossible for Alice to make any pair of her checkers meet after a number of moves. A valid move involves Alice moving one of her checkers up, down, left or right to an empty cell. Bob cannot move his checkers or add new ones after Alice starts moving. Return the minimum number of checkers Bob needs to place, or 100 if it is impossible no matter what Bob does. n <= 20. Alice's checkers will always be initially placed on the border rows/columns.

Cut the paths

The initial thing to notice is that all we need to do is put new checkers in such a way that all paths between each pair of Alice's checkers are blocked. From now on, we will forget the distinction between neutral ('N') checkers and Bob's checkers. We will instead consider 'N's on the board as obstacles that Alice's checkers cannot move onto. The objective is to find the minimum number of additional obstacles to add to the board so that the paths between Alice's checkers are blocked (or return 100 in case it is not possible).

After some analysis we should conclude that, although the constraints are small, a brute force method does not seem feasible. Classical ways to cut paths, like the min-cut algorithm, don't seem to fit this three-checker scenario. There is only one interesting bit left: the constraints guarantee that Alice's checkers will be at the edge/border of the board. It is likely that the solution requires us to use this constraint.

Two checkers

In order to find a way to take advantage of the edges, let us first try to solve an easier version. Alice only has two checkers. How would you block them? Imagine various scenarios in which the checkers are blocked:


The key is to imagine that the two checkers divide the board's border cells in two groups, delimited by the checkers:


We can now see that the cells with obstacles form a sequence of cells that connects the two halves of the board's border:


These sequences of connected obstacles are, curiously, 8-connected (two connected cells need to be adjacent or share a corner), as opposed to the 4-connected paths that the checkers themselves can take. As long as the sequence of cells connects the two partitions of the border, there is otherwise no other requirement for the sequence of obstacles that block the checkers. This sequence of connected cells is, of course, a path of cells.

This knowledge is useful when solving the minimization problem:


We need to add obstacles to the minimum number of cells such that there is a path of cells with obstacles connecting the two parts of the board. We can approach this problem as just "find the path of cells with the minimum cost to fill with obstacles". Let us then find a shortest weighted path between the two edges. We can use any cell that does not contain Alice's checkers; the cost to include a cell is 0 if the cell already contains an obstacle (a neutral checker) or 1 otherwise (place one of Bob's checkers). There can be plenty of paths and plenty of shortest paths, but we only need to find the minimum cost.


In the image you can see the shortest path between the two edges of the board and the two cells that were empty before. So the shortest path has cost 2, and we need to add two checkers to this board in order to block Alice's checkers.

Three checkers

When we have three checkers there are many similarities. The board's border this time is divided in three parts. There are once again ways to connect these edges and block the paths between the checkers:


This time the pictures use black checkers for the neutral / Bob's checkers - they are just obstacles. The first observation is that we do not need all three of those paths connecting the edges. Only two of them are enough:


To further complicate things, the two paths are not necessarily disjoint - they may have cells in common:


The overlap - one cell

The trick to solve the overlap problem is to notice that in any case where there is an overlap:


We can pick any of the cells in the overlap as "the center":


This center cell has an interesting property: for each edge, there is one path of cells with obstacles that connects the center cell and that edge. If the minimum paths between this cell and two of the edges overlap, then there will be another cell for which those paths do not overlap.

Solution

All the knowledge we have found so far is actually enough to design a solution. Let us call the three edges that are separated by the checkers X, Y and Z. Then we need to add new obstacles such that one of the following statements is true:

  • There is an 8-connected path of obstacle cells connecting X and Y, and also a path connecting X and Z.
  • There is an 8-connected path of obstacle cells connecting X and Y, and also a path connecting Y and Z.
  • There is an 8-connected path of obstacle cells connecting X and Z, and also a path connecting Y and Z.
  • There is one cell Center that has 8-connected paths of obstacle cells that connect it to X, Y and Z.

We can try to calculate the minimum cost to achieve each of those options and then pick the one that costs the least. Each of the first three requires us to solve two instances of the shortest-path problem. The fourth one requires us to pick a cell (one of the n x n available) as the center cell and find the shortest paths that connect it to each of the edges. It could be that in the first three cases the paths that we find overlap, but if that is the case, then the cost of the fourth case will be smaller and the result will still be correct.

Depending on the shortest paths algorithm we use, this approach can work even for N=50. Let us, however, take advantage of the fact that N is low and just use Floyd-Warshall.

vector<string> board;

int n;
int dist[20*20 + 3][20*20 + 3];
int segments;
int segmentId[20][20];

// This Depth-first-search is just a convenience to find the border cells and
// to which of the three segments each belongs:
void borderDFS(int x, int y, int s)
{
    bool a = ( (0 <= x && x < n) && (0 <= y && y < n) ); //cell belongs to board
    bool b = ( (1 <= x && x < n-1) && (1 <= y && y < n-1) ); //cell belongs to inner square
                                                             //(board minus border)

    if ( a && !b && (board[x][y] != 'A') && (segmentId[x][y] == -1) ) {
        //Save the segment of this border cell:
        segmentId[x][y] = s;
        dist[x*n + y][n*n + s] = 0;
        // try adjacent cells:
        borderDFS(x,y+1, s);
        borderDFS(x,y-1, s);
        borderDFS(x+1,y, s);
        borderDFS(x-1,y, s);
    }
}

void checkBorder(int x , int y)
{
    if (board[x][y] != 'A') {
        if (segmentId[x][y] == -1) {
            borderDFS(x,y, segments++);
        }
    }
}


int blockThem(vector <string> board)
{
    ::board = board;
    n = board.size();
    int N = n*n + 3; //total number of vertices in our graph.
                     // n x n cells + the three edges.

    // initialize distance array with INF
    const int INF = n*n*n;
    for (int i=0; i < N; i++) {
        for (int j=0; j < N; j++) {
            dist[i][j] = dist[j][i] = INF;
        }
    }

    // Find the border cells, count the number of segments:
    segments = 0;
    memset(segmentId, -1, sizeof(segmentId));

    for (int i=0; i<n; i++) {
        checkBorder(i,0);
        checkBorder(i,n-1);
        checkBorder(0, i);
        checkBorder(n-1, i);
    }
    if (segments != 3) {
        // If there are less than three segments, the only explanation is that
        // two 'A' checkers are adjacent, that means it is impossible to block them:
        return 100;
    }
    // Vertices 0 to n*n-1 are cells, n*n, n*n+1 and n*n+2 are the edges:

    // Make the cells 8-connected:
    for (int i=0; i<n*n; i++) {
        int x0 = i / n, y0 = i % n;
        int dx[8] = { 1,1,1, 0,0,-1,-1,-1};
        int dy[8] = {-1,0,1,-1,1,-1, 0, 1};
        for (int k=0; k<8; k++) {
            int x1 = x0 + dx[k];
            int y1 = y0 + dy[k];
            if ( (0<=x1 && x1<n) && (0<=y1 && y1<n) && (board[x1][y1]!='A') ) {
                dist[i][x1*n + y1] = ( (board[x1][y1] == 'N') ? 0 : 1);
            }
        }
    }
    // Floyd-Warshall:
    for (int k=0; k<N; k++) {
        for (int i=0; i<N; i++) {
            for (int j=0; j<N; j++) {
                dist[i][j] = std::min(dist[i][j], dist[i][k] + dist[k][j]);
            }
        }
    }

    int withMiddle = INF;
    // Case 4: A center cell connected to each of the three edges:
    for (int i=0; i<n*n; i++) {
        // Pick cell i-th as the center.
        int c = ( (board[i/n][i%n] == 'N')? 0 : 1); //Cost to have an obstacle on i:
        // Minimum paths between i and each of the edges.
        int total = c + dist[i][n*n] + dist[i][n*n+1] + dist[i][n*n+2] ;
        // Remember the cell that yields the minimum cost:
        withMiddle = std::min(withMiddle, total);
    }
    // Cases 1-3, there is no overlap, try pairs of edges.
    // Find the minimum cost to connect each pair of edges:
    int segDist[3][3];
    for (int i=0; i<3; i++) {
        for (int j=0; j<3; j++) {
            segDist[i][j] = INF;
        }
    }
    // For each border cell:
    for (int i=0; i<n; i++) {
        int bx[4] = {0, n-1, i, i};
        int by[4] = {i, i, 0, n-1};
        for (int k=0; k<4; k++) {
            int p = bx[k] * n + by[k];
            // The border cell that we picked is p, with coordinates bx[k], by[k]
            if (board[bx[k]][by[k]] != 'A') {
                // Find the minimum distance between this cell and the other edges
                // Remember the minimum for each pair of edges:
                // (This cell belongs to edge s:)
                int s = segmentId[bx[k]][by[k]];
                for (int j=0; j<3; j++) {
                    int c = ( (board[bx[k]][by[k]] == 'N') ? 0 : 1 );
                    segDist[s][j] = std::min(segDist[s][j], c + dist[p][n*n + j] );
                }
            }
        }
    }
    // Pick the most expensive pair of edges and ignore it:
    int choices[3] = { segDist[0][1], segDist[0][2], segDist[1][2] };
    int noMiddle = choices[0] + choices[1] + choices[2] - *max_element(choices,choices+3);

    // Minimum possible:
    return std::min(withMiddle, noMiddle);
}