2021 TCO Regionals Qualification Rounds Editorials
August 24, 2021
Qualification Round 1 Easy: OlympicShooting
The shooters are compared according to the following criteria, in order, with bigger being better:
- The sum of all results.
- For each block of 25 shots, going backwards: the sum of that block.
- For each shot, going backwards: the value of that shot.
- Negative value of their ID.
We can create such a record for each competitor, sort them using a built-in sort function, and then read off and return the sequence of IDs. Alternately, we can implement a custom comparison function that takes two IDs as arguments and compares those shooters by looking at their results. If we do that, we can then use the library sort function with this custom comparison function.
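For illustration, here is a minimal Python sketch of the sort-key approach. (We assume each shooter's results are given as a list of 0/1 integers whose length is a multiple of 25; the actual input format of the problem may differ.)

```python
def rank_shooters(results):
    # results[i] is shooter i's list of shots (1 = hit, 0 = miss)
    def key(shooter_id):
        shots = results[shooter_id]
        # sums of the 25-shot blocks, last block first
        blocks = [sum(shots[i:i + 25]) for i in range(0, len(shots), 25)][::-1]
        # bigger is better everywhere except the ID, so we negate;
        # Python compares the tuples (and nested tuples) lexicographically
        return (-sum(shots),
                tuple(-b for b in blocks),
                tuple(-s for s in reversed(shots)),
                shooter_id)

    return sorted(range(len(results)), key=key)
```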
Of course, the data is so small that a quadratic sorting algorithm (e.g., MinSort, i.e., selection sort) would also be fast enough, so it was also possible to solve the task that way: n times find the best of the remaining shooters and append that shooter to the output sequence.
Qualification Round 1 Medium: TreeTokens
Suppose we root the tree at the goal vertex G. The opponent must not be able to get a token into G. This means that the opponent must be unable to bring more than one token to any child of G. Now we can generalize this. Suppose that for some vertex V we know that the opponent may be allowed to bring at most X tokens to V. How do we proceed?
If V is a leaf, we simply place X tokens there and we are done. If V has children, we clearly don’t want to place any tokens at V itself: instead of placing one token at V, we can place two tokens into one of its children. The opponent can merge those two into a single token at V, so the effect is the same, but the total number of tokens is larger.
Let C be the child of V with the deepest subtree among all children of V, and let D be the depth of that subtree. We claim that there is an optimal solution in which all tokens that reach V reach it from C. Why? For each token that reaches V from any other subtree, we can take all the original tokens that produced it and replace them by 2^D new tokens in a deepest leaf of C’s subtree. This clearly doesn’t decrease the total number of tokens. (It strictly increases the total if any of the original tokens started at a depth smaller than D.)
Thus, we will have a recursive function that takes two arguments: a vertex V and the maximum number M of tokens that may reach it. This function determines the child C with the deepest subtree and then calls itself on C with 2M+1 tokens (each token that reaches V costs two tokens at C, and one extra token can remain stranded at C forever) and on each of the other children with 1 token. Clearly, we can solve the full task by calling this function on G with 0 tokens.
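A minimal recursive sketch in Python follows. (The adjacency-list representation and all names are our assumptions, not the problem’s actual API; subtree depths are recomputed on every call, which is wasteful but keeps the sketch short.)

```python
def max_tokens(adj, goal):
    # adj[v] = list of neighbours of v; returns the largest number of
    # tokens we can hand out so that no token can ever reach `goal`
    def depth(v, parent):
        # depth of the subtree hanging below v (a leaf has depth 0)
        return max((1 + depth(c, v) for c in adj[v] if c != parent), default=0)

    def solve(v, parent, m):
        # at most m tokens may ever reach vertex v
        children = [c for c in adj[v] if c != parent]
        if not children:
            return m                      # leaf: simply place all m tokens here
        deepest = max(children, key=lambda c: depth(c, v))
        # the deepest child feeds v: 2m of its tokens can merge into m at v,
        # plus one extra token can stay stranded there; other children get 1
        return sum(solve(c, v, 2 * m + 1 if c == deepest else 1)
                   for c in children)

    return solve(goal, None, 0)
```

For example, on the path 0-1-2 with goal 0 this returns 3: three tokens at vertex 2 are harmless (two of them merge into one token at vertex 1, the rest is stuck), while four tokens would let the opponent reach the goal.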
Qualification Round 1 Hard: AlmostOptimalBST
Worst-case optimal BSTs are those in which the maximum node depth is minimized. Average-case optimal BSTs are those in which the sum of node depths is minimized.
For each x, the nodes at depth x in the BST form the layer at depth x. It can easily be shown that each average-case optimal BST consists of some number of completely full layers (the layer at depth x contains all 2^x possible nodes), followed by at most one partially-full layer. Thus each average-case optimal BST is also worst-case optimal, and therefore the answer to the second question we were asked is always zero.
We can count the permutations that produce worst-case optimal BSTs using dynamic programming. One way of doing so: let dp[n][d] be the number of permutations of length n that produce a BST of depth at most d. Then:
- dp[0][d] and dp[1][d] are 1 for all d.
- For a general n, we will iterate over all possible values r of the first value in the permutation. The value r is inserted first and thus it will be in the root of the tree. This splits the problem into two independent subproblems: the values 0..r-1 will form the left subtree and the values r+1..n-1 will produce the right subtree. When computing dp[n][d], each of these new subtrees must have depth at most d-1. Thus, there are dp[r][d-1] good permutations of the small values and dp[n-r-1][d-1] good permutations of the large values. We can take any such pair of permutations and interleave them arbitrarily (alternately adding values to the left and right subtree in any order we like). There are binomial[n-1][r] ways to interleave the two permutations: we choose which r of the n-1 remaining elements of our permutation of size n are the small ones.
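In Python, the recurrence above might look as follows. (A sketch with exact big integers; the actual problem presumably asks for the count modulo something. Since no BST on n nodes can have depth below d_opt, dp[n][d_opt] counts exactly the worst-case optimal permutations.)

```python
from math import comb

def count_worst_case_optimal(n):
    # number of permutations of length n whose BST achieves the minimum
    # possible depth (the root is at depth 0)
    if n <= 1:
        return 1
    d_opt = n.bit_length() - 1               # minimum achievable depth
    # dp[m][d] = permutations of length m giving a BST of depth <= d
    dp = [[1 if m <= 1 else 0] * (d_opt + 1) for m in range(n + 1)]
    for m in range(2, n + 1):
        for d in range(1, d_opt + 1):
            dp[m][d] = sum(comb(m - 1, r) * dp[r][d - 1] * dp[m - 1 - r][d - 1]
                           for r in range(m))
    return dp[n][d_opt]
```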
We can count the permutations that produce average-case optimal BSTs in a very similar way. Here, let dp[n] be the number of permutations of size n that produce an optimal tree. We have:
- dp[0] = dp[1] = 1
- In general, we can look at n and determine the depth of the optimal tree and the number of nodes in its last layer. These last-layer nodes must be split between the left and right subtree. We iterate over all possibilities for the number of last-layer nodes that go into the left subtree. Once we fix that, we know the size L of the left subtree, and thus the value that must be in the root. That is the first element of our permutation. We then have dp[L] sequences that produce an optimal left subtree, dp[n-1-L] sequences that produce an optimal right subtree, and we can interleave those in binomial[n-1][L] ways (see the sketch below).
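A sketch of this count in Python, under the same caveats as above (the names d, last, and half are ours; half = 2^(d-1) is the capacity of one side of the partially-full layer):

```python
from math import comb

def count_avg_case_optimal(n_max):
    # dp[n] = permutations of length n whose BST minimizes the sum of depths
    dp = [0] * (n_max + 1)
    dp[0] = 1
    if n_max >= 1:
        dp[1] = 1
    for n in range(2, n_max + 1):
        d = n.bit_length() - 1            # depth of the last (partial) layer
        last = n - (2 ** d - 1)           # number of nodes in that layer
        half = 2 ** (d - 1)               # capacity of one side of that layer
        # j = number of last-layer nodes that go into the left subtree
        for j in range(max(0, last - half), min(last, half) + 1):
            L = half - 1 + j              # size of the left subtree
            dp[n] += comb(n - 1, L) * dp[L] * dp[n - 1 - L]
    return dp
```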
Qualification Round 2 Easy: RestrictedSwaps
Consider a graph in which indices into our array are vertices and allowed swaps are edges.
Obvious claim: We cannot move elements between connected components of our graph.
Less obvious claim: Within each component, we can move elements arbitrarily.
Proof of that: Consider any spanning tree of the component. Repeatedly: pick a leaf of the spanning tree, use swaps along the spanning tree to get the desired element into that leaf, and then forget that the leaf and the element exist.
Thus, we find the connected components of the above graph, and for each of them we collect the letters, sort them, and return them back to the same places but in this new order.
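A minimal Python sketch using union-find for the components. (We assume the input is a string plus a list of index pairs, and that sorting each component’s letters ascending, i.e., producing the lexicographically smallest string, is what the task asks for.)

```python
from collections import defaultdict

def sort_within_components(s, swaps):
    parent = list(range(len(s)))

    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in swaps:                    # allowed swaps are the edges
        parent[find(a)] = find(b)

    components = defaultdict(list)
    for i in range(len(s)):
        components[find(i)].append(i)     # indices arrive in increasing order

    res = list(s)
    for indices in components.values():
        for i, ch in zip(indices, sorted(res[j] for j in indices)):
            res[i] = ch
    return "".join(res)
```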
Qualification Round 2 Medium: MergeToSort
Each move decreases the length of the sequence by 1, so an equivalent task is to maximize the length of the final sequence. And yet another way to look at it is that you are dividing the input sequence into pieces in such a way that the sums of those pieces are non-decreasing and their number is maximized.
Given a non-decreasing sequence, let its “width” be the value of its last (largest) element.
Lemma 1: If there are two ways to turn a prefix of the input into a non-decreasing sequence and both have the same length, the one with the smaller width is better. This is because anything that can be appended to the wider sequence can also be appended to the narrower one.
Lemma 2: The optimal solution to a longer prefix of the input sequence always has at least as many elements as the optimal solution to a shorter prefix. (Proof: we can take the optimal solution for the shorter prefix and make the last element bigger.)
The most important observation of the task is the following one: out of all non-decreasing sequences that can be produced from a given sequence, the minimum possible width is always obtained for some sequence that also has the maximum possible length.
Proof: By induction. Clearly true for 0 and 1 elements. Now, suppose it’s true for up to n-1 elements and let’s look at n elements. Among all sequences of optimal length, let S be the one with minimum width. Can there be another sequence T that is shorter but has a smaller width? No. Here’s why.
Let S’ and T’ be the sequences S and T without their last elements. As S is wider than T, S’ is produced from a shorter prefix of our sequence than T’. Now, let T’’ be the optimal solution for the prefix that formed T’. On one hand, T’’ is at least as long as S’ because it is formed from a longer prefix (see Lemma 2). On the other hand, by the induction hypothesis T’’ is at most as wide as T’. Thus, if we take T and swap T’ for T’’, we will get a sequence that is at least as long as S but less wide – which is impossible.
Thus, for each prefix of the input sequence we are only interested in a single solution: the least wide among all those of maximum length.
When looking for an optimal solution for a prefix of length n, we are looking for the smallest k such that there is a valid sequence whose width is the sum of the last k values of the prefix. This happens once said sum becomes greater than or equal to the width of the optimal solution for the first n-k values. This smallest k can be found using binary search, which gives us an O(n log n) solution.
It is also possible to improve the time complexity to O(n). Observe that if we have two optimal solutions (length L1, width W1) and (length L2, width W2) such that W2 >= W1 and L2 <= L1, we can forget about the second one. Once we eliminate all such useless data points, the ones that remain are sorted both by length and by width. We can maintain these candidates in a deque. Each time we add a new element to the prefix we are processing, we start at the beginning of the deque (the shortest and narrowest candidate) and we pop all candidates that can now be extended by adding one more element. The last popped candidate gives us an optimal solution for the current prefix. We then pop from the back of the deque any candidates of the same length but greater width, and finally we append the new solution as a candidate to the deque.
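For reference, here is a short quadratic Python sketch that applies the lemmas directly: for each prefix it scans backwards for the largest feasible split point, which by Lemma 2 and the observation above yields both the optimal length and the minimal width. The two faster solutions described above replace this linear scan with a binary search or with the deque. (All names are ours; we assume the answer asked for is the minimum number of moves.)

```python
def min_merges(a):
    n = len(a)
    pre = [0] * (n + 1)                  # prefix sums of the input
    for i, x in enumerate(a):
        pre[i + 1] = pre[i] + x
    f = [0] * (n + 1)                    # f[i] = optimal length for prefix i
    w = [0] * (n + 1)                    # w[i] = minimal width at that length
    for i in range(1, n + 1):
        for j in range(i - 1, -1, -1):   # largest feasible split point wins
            if pre[i] - pre[j] >= w[j]:  # last piece fits after solution for j
                f[i], w[i] = f[j] + 1, pre[i] - pre[j]
                break
    return n - f[n]                      # each merge shrinks the length by 1
```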
Qualification Round 2 Hard: ParkingSequences
Consider the problem backwards. The last car produced anywhere between 0 and n-1 collisions and then it parked in the last empty spot. Before that car we had a contiguous segment of n-1 parked cars. Which one of them was parked last? We can try all possibilities. Each of them will give us a set of solutions, and these sets are clearly disjoint, so we can simply sum their sizes.
Suppose the last car parked in the segment was at 0-based index r along the segment. This car produced between 0 and r collisions, inclusive. Before this car, we had (at most) two separate segments of consecutive parked cars: one consisting of r cars, the other consisting of n-2-r cars.
The most important observation of this problem is that these two segments are now completely independent: all cars that are parked in the first one must have “lived all their life” within the first one and vice versa.
Thus, we will solve the task as follows:
- For each k from 0 to n-1 and each c, we will find dp[k][c] = the number of parking sequences that fill a segment of k spots while producing exactly c collisions.
- Our final solution is then computed as follows: we have n options for where the last car parked. Once we fix that, we can iterate over all possibilities for the number c of collisions caused by this last car, and then we know that there are exactly dp[n-1][B-c] parking sequences of length n-1 that can be extended into a full solution of this type (B being the required total number of collisions).
The main idea behind computing the values dp[k][c] is similar to the hard problem of the previous round (which is precisely why these two problems were used in two consecutive rounds with the same participants). When counting these sequences, we can iterate over all r. Once we fix r (the place where we split the segment), we can take any solution for the left part, any solution for the right part, and interleave them arbitrarily.
A naive implementation of this DP would require also iterating over the number of collisions in each subproblem. We will now show a slightly smarter iteration order that saves one degree in the time complexity polynomial.
Suppose we already know the values dp[k’][c] for all k’ < k. We will now compute the values dp[k][c]. We initialize all of them to zeros and proceed as follows:
For each r, for each c1, for each c2: we can take any of the dp[r][c1] solutions for r cars and c1 collisions and combine it with any of the dp[k-1-r][c2] solutions for k-1-r cars and c2 collisions in binomial[k-1][r] ways. Let X be the product of those three values, i.e., the number of ways to park the first k-1 cars and produce c1+c2 collisions while doing so. Once we pick any one of those X ways, we have r+1 options for where the last car started, producing 0, 1, …, r new collisions respectively. Thus, we want to increase each of the values dp[k][c1+c2], dp[k][c1+c2+1], …, dp[k][c1+c2+r] by X. Instead of doing this in linear time, we can do it in constant time by just making two notes: “do +X starting at c1+c2 collisions” and “do -X starting at c1+c2+r+1 collisions”.
Once we finish iterating over all triples (r, c1, c2), we fill in all values dp[k][c] in increasing order of c in linear time by keeping a running sum of those notes.
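Putting it all together, here is a Python sketch of the whole computation. (Heavily hedged: we do not restate the exact problem here, so we assume a circular lot of n spots, a required total of exactly B collisions, and exact integer arithmetic instead of the modular arithmetic the problem presumably wants.)

```python
from math import comb

def parking_dp(n, cmax):
    # dp[k][c] = ways to fill a segment of k spots with exactly c collisions
    dp = [[0] * (cmax + 1) for _ in range(n + 1)]
    dp[0][0] = 1
    for k in range(1, n + 1):
        diff = [0] * (cmax + 2)             # the "notes": a difference array
        for r in range(k):                  # index of the car parked last
            for c1 in range(cmax + 1):
                if dp[r][c1]:
                    for c2 in range(cmax + 1 - c1):
                        if dp[k - 1 - r][c2]:
                            x = comb(k - 1, r) * dp[r][c1] * dp[k - 1 - r][c2]
                            diff[c1 + c2] += x                     # +X from c1+c2 ...
                            diff[min(c1 + c2 + r, cmax) + 1] -= x  # ... through c1+c2+r
        run = 0
        for c in range(cmax + 1):           # running sum turns the notes into dp[k]
            run += diff[c]
            dp[k][c] = run
    return dp

def count_sequences(n, B):
    dp = parking_dp(n - 1, B)
    # n options for the final empty spot; the last car causes 0..n-1 collisions
    return n * sum(dp[n - 1][B - c] for c in range(min(n - 1, B) + 1))
```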
misof