May 1, 2019
2019 Topcoder Open Algorithm Round 1B Editorials
The second round of the algorithm track of the 2019 Topcoder Open was held on the first of May – Labour Day – which, ironically, is a non-working day in many countries. Way to celebrate labour, right?
Similarly to Round 1A, this one was also prepared by me – espr1t. This time the tasks were a bit easier than in the previous round (at least the 250 and the 500), so if you didn’t manage to qualify then, this was a perfect opportunity! We expected that fewer people would participate (as many had already qualified), so we aimed for something even more approachable.
EllysSki
The problem was rather straightforward. Once we figure out that we are looking for the length of the longest non-increasing or non-decreasing (that is, non-increasing in the opposite direction) sub-array of the given array, we only need to implement it.
One way to find the answer is to make two passes over the heights: in the first one we’ll keep track of how many of the previous elements (without interruption) form a non-increasing sequence; in the second pass we’ll do the same, but keeping track of the non-decreasing elements. We should make sure to re-initialize the counter to one whenever we encounter an element that “breaks” the sequence. The two passes can, in fact, be merged into a single one, keeping track of both the non-increasing and the non-decreasing run length ending at the current element:
// Assumes the globals n (the number of hills) and a[] (their heights).
int solve() {
    int curDec = 1, curInc = 1, ans = 1;
    for (int i = 1; i < n; i++) {
        // Length of the non-increasing run ending at position i.
        curDec = (a[i] <= a[i - 1]) ? curDec + 1 : 1;
        // Length of the non-decreasing run ending at position i.
        curInc = (a[i] >= a[i - 1]) ? curInc + 1 : 1;
        ans = max(ans, max(curDec, curInc));
    }
    return ans;
}
A slightly shorter/simpler implementation is possible if we do the second pass the same way we did the first one – still looking for the longest non-increasing sub-array, but in the reversed array. This is equivalent to looking for the non-decreasing sub-array in the original array.
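As an illustration, a minimal sketch of this variant could look like the following (the helper and its name are this write-up’s own, and it assumes the array is non-empty):
#include <vector>
#include <algorithm>
using namespace std;

// Length of the longest non-increasing contiguous run (assumes h is non-empty).
int longestNonIncreasing(const vector<int>& h) {
    int best = 1, cur = 1;
    for (int i = 1; i < (int)h.size(); i++) {
        cur = (h[i] <= h[i - 1]) ? cur + 1 : 1;
        best = max(best, cur);
    }
    return best;
}

// One pass over the original array, one over its reverse.
int solveTwoPasses(vector<int> h) {
    int ans = longestNonIncreasing(h);
    reverse(h.begin(), h.end());
    return max(ans, longestNonIncreasing(h));
}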
All of the mentioned solutions are O(N) in terms of complexity. The low constraints allowed slower solutions to also pass without any issues – maybe the simplest one being an O(N^2) brute force: try every single starting point and simulate going left and right.
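A rough sketch of such a brute force is shown below – instead of literally going left and right from each start, it extends to the right in each of the two monotone directions, which covers the same segments (the function name is, again, just for this write-up):
#include <vector>
#include <algorithm>
using namespace std;

// O(N^2) brute force: for every starting hill, extend as far as possible
// while the heights are non-increasing, and separately while they are
// non-decreasing (assumes h is non-empty).
int bruteForce(const vector<int>& h) {
    int n = (int)h.size(), ans = 1;
    for (int start = 0; start < n; start++) {
        int len = 1;
        for (int i = start + 1; i < n && h[i] <= h[i - 1]; i++) len++;
        ans = max(ans, len);
        len = 1;
        for (int i = start + 1; i < n && h[i] >= h[i - 1]; i++) len++;
        ans = max(ans, len);
    }
    return ans;
}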
EllysTeleport
This task was noticeably harder than the previous one, but the relatively low constraints allowed a not-so-hard-to-implement bruteforce to pass. There were several things the contestants could avoid:
- the optimal algorithm (which is much harder);
- introducing logarithms in the inner cycle;
- building the graph in an efficient way (since we have O(N^2) for the bruteforce anyway, we can also build the graph with this complexity, which is somewhat easier).
The task was the following: given a graph where each node has at most one outgoing edge, find the longest path in the graph.
If you are not familiar with graphs – either representation or algorithms – then this tutorial may help you get started.
Finding the longest path in a graph is an NP-hard problem, but in this case the graph was very specific, which allowed efficient solutions. Since we have at most one outgoing edge from each node, there are no “multiple choices”, and thus none of the exponential blow-up we get in general graphs.
The first thing we should do is actually create the graph – see where the edge from each node leads. This could be done in O(N * logN) using sorting + binary search, or an ordered set, but we can notice that the constraints are low enough to do it in the most straightforward way – for each node, calculate the height where Elly’s hero goes and iterate over all other nodes to find the closest one below (for a total of O(N^2)).
Once we have the graph, finding the longest path from a given starting node is trivial – we just follow the links until we either reach a node we’ve already been to, or the game ends. Again, since the number of nodes was relatively low, we could try *all* starting nodes and take the max. This is another O(N^2), but works fast enough for the given constraints. This was the intended solution.
Please note that if we didn’t create the graph first and were building the edges on the fly, we would have to build them again and again for each starting node of the graph. This would lead to an O(N^2 * logN) algorithm, which most likely wouldn’t pass (unless we introduce some optimizations).
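The walk part of the intended solution might look roughly like the sketch below. The exact rule for where the hero jumps depends on the statement, so the O(N^2) graph-building step is only hinted at here; nxt[i] is assumed to already hold the node reached from node i (or -1 if the game ends there), and the walk counts how many distinct nodes are visited:
#include <vector>
#include <algorithm>
using namespace std;

// Longest walk over the functional graph: from every start, follow the
// links until we revisit a node or the game ends (nxt[i] == -1).
int longestWalk(const vector<int>& nxt) {
    int n = (int)nxt.size(), ans = 0;
    for (int start = 0; start < n; start++) {
        vector<bool> visited(n, false);
        int cur = start, len = 0;
        while (cur != -1 && !visited[cur]) {
            visited[cur] = true;
            len++;
            cur = nxt[cur];
        }
        ans = max(ans, len);
    }
    return ans;
}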
Interestingly enough, this task is also solvable in O(N * logN) (thus, N could be up to 1,000,000 or so). Note that the graph is very specific. It kind of looks like a cactus, but is even simpler – each of its cycles (or loops) is in a different component. This makes it somewhat similar to a DAG: since we have at most one cycle in each component and no outgoing edges from that cycle, we can remove the cycle and solve the problem as if the graph were a DAG (adding the length of the cycle to all remaining nodes of the component). After we’ve built the graph, the rest of the solution is in fact O(N); however, there is no way of actually building the graph in less than O(N * logN), which determines the complexity of the whole algorithm.
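For the curious, here is one possible way the linear part could be implemented once nxt[] is available (again counting visited nodes, as in the sketch above). Nodes on a cycle can visit exactly the whole cycle, and every tail node adds one to whatever its single edge leads to. The names and the exact bookkeeping are this write-up’s own, not the author’s code:
#include <vector>
#include <algorithm>
using namespace std;

int longestWalkLinear(const vector<int>& nxt) {
    int n = (int)nxt.size();
    vector<int> len(n, 0);     // longest walk starting from each node
    vector<int> state(n, 0);   // 0 = unvisited, 1 = on the current path, 2 = done
    vector<int> pos(n, -1);    // position of a node within the current path
    for (int start = 0; start < n; start++) {
        if (state[start] != 0) continue;
        vector<int> path;
        int cur = start;
        // Walk forward until we fall off, hit an already-handled node,
        // or close a cycle within the current path.
        while (cur != -1 && state[cur] == 0) {
            state[cur] = 1;
            pos[cur] = (int)path.size();
            path.push_back(cur);
            cur = nxt[cur];
        }
        int idx = (int)path.size();   // first path index still needing a value
        if (cur != -1 && state[cur] == 1) {
            // Found a new cycle: each of its nodes visits exactly the cycle.
            int cycleLen = (int)path.size() - pos[cur];
            for (int i = pos[cur]; i < (int)path.size(); i++)
                len[path[i]] = cycleLen;
            idx = pos[cur];
        }
        // Tail nodes: one more than wherever their single edge leads.
        for (int i = idx - 1; i >= 0; i--) {
            int v = path[i];
            len[v] = (nxt[v] == -1) ? 1 : len[nxt[v]] + 1;
        }
        for (int v : path) state[v] = 2;
    }
    int ans = 0;
    for (int i = 0; i < n; i++) ans = max(ans, len[i]);
    return ans;
}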
EllysPearls
The task was the following. Given an array of N numbers between 1 and M, re-order the array in such a way that all numbers with equal value are next to each other (the order of the values themselves does not matter). The goal is to do the re-ordering with the minimal number of moves, where one move is removing a number from the array and inserting it back at an arbitrary position.
If we know the final ordering of the values (e.g., “5, 1, 4, 2, 3” for M = 5), the task still isn’t trivial, but there is an O(N^2) solution which can find the answer. However, we don’t know that ordering, thus we should try all options. There are O(M!) possible orderings – way too many for the given constraints, so we should try finding another solution.
One thing that more experienced contestants will certainly see right away is the low constraint on the number of colors – only up to 15. Constraints around or under 20 for something that can be used/unused or taken/not taken are usually an indicator for bitmask dynamic programming, which was also the case in this problem. If you are new to the concept of Dynamic Programming, this tutorial may be suitable for you.
We can use the bitmask to indicate which of the colors are already placed (and whenever we encounter an element of such a color, we should move it to the left, into its established color group). Let’s say, for example, that we are in the following state: “[3322255]2113554”. The numbers in the square brackets are already “ordered” and we are evaluating the next one – the two. Since we already know where the 3s, the 2s, and the 5s will be (which, in terms of a bitmask, is 10110 in binary, or 22 in decimal), we know that we should move the current 2 to the left.
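For instance, assuming the colors are remapped to 0-based indices (as the recursion shown later does), the 3s, 2s and 5s from the example above occupy bits 2, 1 and 4 respectively:
// Colors 3, 2 and 5 are already placed; with colors remapped to 0-based
// indices (3 -> 2, 2 -> 1, 5 -> 4), the mask from the example is:
int mask = (1 << 2) | (1 << 1) | (1 << 4);   // 10110 in binary = 22 in decimal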
One exception to the rule above is if the current element matches the last used color. That would happen if, for example, we had a 5 instead of the current 2: “[3322255]5113554”. In this case we wouldn’t need to move it and would simply extend the last group. In order to know this, we need another dimension of the DP, which will be up to 15 and will indicate the color of the rightmost group we have already established (in other words, which was the last 1-bit added to the mask).
So, what we need for our DP is the following:
- A dimension which tells us which element we are currently evaluating. We will process the elements from left to right, thus a dimension of size N is enough.
- A dimension which tells us the color of the rightmost group (the last set bit in the bitmask). Since we have up to M colors, this dimension has size M.
- A dimension which tells us which color groups we have already established – the bitmask. Since we have up to 2**M possible values, this dimension has size 2**M.
This makes a DP with state [N][M][2**M], or, with the constraints of the task, [50][15][32768]. This is around 25 million elements, or around 100MB of memory (if we use ints). We can make the observation that the answer will never be larger than 50 and use bytes (chars) to reduce the memory to only 25MB, but this was not necessary for this task.
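In code, the memoization table could be declared along these lines; the constant names are this write-up’s own, and the second dimension is sized M + 1 so that the “fake color” trick mentioned at the end also fits:
const int MAX_N = 50, MAX_M = 15;
// About 50 * 16 * 32768 ints, i.e. roughly 100MB; chars would bring it down to ~25MB.
int dyn[MAX_N][MAX_M + 1][1 << MAX_M];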
One thing we still haven’t considered is what happens if the current element has a color which is not yet in the bitmask, but which we don’t want to establish here either. For example, this can happen if we have “[22211111]311133”. It would make more sense to move the current element (the 3) to the right (one move), instead of establishing its color group there and moving the three 1s (three moves). A nice thing is that when we move an element, we can place it anywhere, thus we can simply increase the answer by one and forget about it (we can place it wherever we see fit after we’ve dealt with all other elements).
Thus, there are three things we should consider in the DP function:
- Try moving the element, not caring where we put it (either to the left or to the right).
- Try extending the previous color group – this can only be done if the current element has the same color as the last group.
- Try establishing a new color group – this can only be done if we haven’t established a color group with the color of the current element (thus, the bit for this color in the bitmask is 0).
In terms of code, this looks as follows:
// Assumes the globals n, m, a[] (with colors remapped to 0..m-1) and the
// memoization table dyn[][][], filled with -1 before the first call.
int recurse(int idx, int col, int mask) {
    if (idx >= n)
        return 0;
    if (dyn[idx][col][mask] != -1)
        return dyn[idx][col][mask];
    // Move the current element (it can be placed anywhere later).
    int ans = recurse(idx + 1, col, mask) + 1;
    // The current group is of this color – extend it for free.
    if (col == a[idx])
        ans = min(ans, recurse(idx + 1, col, mask));
    // Start a new color group with the current element's color.
    if (!(mask & (1 << a[idx])))
        ans = min(ans, recurse(idx + 1, a[idx], mask | (1 << a[idx])));
    return dyn[idx][col][mask] = ans;
}
Since we don’t know which color group will be first, we can invoke the recursion trying every one of them (and take the minimum):
int ans = n;
for (int col = 0; col < m; col++)
    ans = min(ans, recurse(0, col, 1 << col));
A slightly easier option is to introduce a “fake” color (which we shouldn’t store in the mask, but must store in the “last color group” dimension, making the DP table [N][M+1][2**M]). This allows us to remove the loop that invokes the DP:
int ans = recurse(0, m, 0);
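Putting everything together, a hypothetical wrapper could look like the sketch below; the method name, the color remapping and the initialization are this write-up’s additions, and only the final recurse(0, m, 0) call is from the solution above:
#include <vector>
#include <cstring>
using namespace std;

// Assumes the same globals n, m, a[] and dyn[][][] as the recursion above.
int getMinimum(vector<int> pearls, int colors) {
    n = (int)pearls.size();
    m = colors;
    for (int i = 0; i < n; i++)
        a[i] = pearls[i] - 1;            // remap colors 1..M to 0..M-1
    memset(dyn, -1, sizeof(dyn));        // mark all states as "not computed yet"
    return recurse(0, m, 0);             // m plays the role of the fake color
}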
Since we have O(N * M * 2^M) cells in the DP table, and each recursion call is O(1), this makes the complexity of the whole solution also O(N * M * 2^M).
espr1t
Guest Blogger