Single Round Match 845 Editorials
ARight
One reasonably clean way of implementing the given operation is to do it in two passes. Start with an output string in which all N positions are empty spaces. Then, iterate over the given string S, and each time you see an ‘a’ at some position x, write an ‘a’ into the output string at position K+x. Once this pass finishes, all ‘a’s are in their properly shifted places. Now iterate over S again and use the non-’a’ letters to fill in the remaining empty spaces in the output string, from left to right.
Another easy way is to go from right to left, and each time we encounter an ‘a’, we shift it K positions to the right directly by editing the current string. (Doing it in the right to left direction is necessary so that we don’t try to shift one ‘a’ through another.)
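The two-pass approach can be sketched as follows (a minimal sketch; the function name is mine, and it assumes that x + K stays within the string for every ‘a’, as the statement presumably guarantees):

```python
def shift_as_right(s: str, k: int) -> str:
    """Two-pass implementation of the operation: every 'a' moves
    k positions to the right, the other letters fill the gaps
    from left to right while keeping their relative order."""
    n = len(s)
    out = [None] * n
    # Pass 1: place each 'a' k positions to the right of its old spot.
    for x, ch in enumerate(s):
        if ch == 'a':
            out[x + k] = 'a'          # assumes x + k < n for every 'a'
    # Pass 2: fill the remaining empty slots with the non-'a' letters.
    rest = iter(ch for ch in s if ch != 'a')
    for i in range(n):
        if out[i] is None:
            out[i] = next(rest)
    return ''.join(out)
```

For example, `shift_as_right("aabc", 2)` moves both ‘a’s two places to the right, producing `"bcaa"`.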
SpecialWeights
The key observation is that when each stone is heavier than all the smaller ones combined, all subsets must have mutually distinct sums of weights, and the order of those sums must correspond to the natural order of subsets as represented by bitmasks. (If you don’t see why this is the case, think about comparing two subsets of stones. What decides which one has the bigger total weight?)
For example, if the smallest three stones are {a, b, c, …}, it is guaranteed that the smallest eight subsets must be precisely the eight subsets of these stones, in the following order: 0, a, b, a+b, c, a+c, b+c, a+b+c. As bitmasks, these subsets correspond to 000, 001, 010, 011, 100, 101, 110, and 111. (Note that the least significant bit – the one that describes whether a is in the subset – is on the right.)
This gives us a very elegant solution. Subtract 1 from K to get a 0-based index of the weight we seek. Now, the bits of K tell you which elements do and which elements don’t belong to the corresponding subset. (If the new K is 2^N or more, there is no such subset.)
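A sketch of this solution (the function name is mine; `weights` is assumed sorted in increasing order, with each weight heavier than all lighter ones combined, and `k` is 1-based):

```python
def kth_smallest_subset_sum(weights, k):
    """Returns the k-th smallest subset sum, or None if k > 2**len(weights).

    Because each weight exceeds the sum of all lighter weights, the
    natural order of bitmasks matches the order of subset sums, so the
    answer is simply read off the bits of k-1."""
    k -= 1                              # 0-based index of the subset we want
    if k >= 1 << len(weights):
        return None                     # no such subset
    # bit i of k says whether weights[i] belongs to the subset
    return sum(w for i, w in enumerate(weights) if (k >> i) & 1)
```

With `weights = [1, 2, 5]` the subset sums in increasing order are 0, 1, 2, 3, 5, 6, 7, 8, matching the bitmask order 000, 001, …, 111 from the example above.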
AlmostCheckerboards
If D is less than one half of the checkerboard area, we can count all distance-D checkerboards as follows: choose which checkerboard to start from, and then choose exactly D pixels to toggle. There are two options for the starting checkerboard and choose(R*C, D) options for the set of toggled pixels. All of these options produce mutually distinct bitmaps: you need to toggle all R*C pixels to get from one checkerboard to the other, so a bitmap produced from one checkerboard using D flips would need R*C - D flips to be reached from the other checkerboard, and R*C - D > D since D < R*C/2. Thus each pattern is counted exactly once.
If D is exactly equal to one half of the checkerboard area, the above argument fails, but it still fails nicely: now we are counting *each* good bitmap twice because each good bitmap can be reached from one checkerboard by flipping some D = RC/2 pixels and also from the other checkerboard by flipping the other D = RC/2 pixels. Thus, in this special case there are only choose(R*C, D) bitmaps.
If D is bigger than that, the answer is clearly zero.
A simple local test you can run once you have finished your solution: for a fixed (R, C), iterate over all D, sum up the values returned by your solution, and check that they add up to 2^(R*C), the total number of bitmaps of that size. If you missed something (such as the special case described above), this checksum is quite likely to fail.
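The counting and the checksum together look like this (a sketch; the actual problem presumably asks for the answer modulo some prime, which is omitted here):

```python
from math import comb

def almost_checkerboards(r, c, d):
    """Number of r-by-c bitmaps at distance exactly d from a checkerboard,
    following the case analysis above."""
    area = r * c
    if 2 * d > area:
        return 0
    if 2 * d == area:
        return comb(area, d)       # each bitmap reachable from both checkerboards
    return 2 * comb(area, d)       # two starting checkerboards, distinct results

# the checksum described above: summing over all D must give 2^(R*C)
for r, c in [(3, 4), (3, 3)]:
    total = sum(almost_checkerboards(r, c, d) for d in range(r * c + 1))
    assert total == 2 ** (r * c)
```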
LRSort
We can start by doing coordinate compression. The exact values in A don’t matter, we may as well replace them with 0 to N-1 while preserving their relative order. Duplicates in A should simply be relabeled left to right, as it’s easy to see that we never swap two elements with the same value.
Once we have turned A into a permutation, we can also calculate its inverse: for each value, the position at which it appears.
Now we can do the required simulation. There are many ways to do it efficiently; ours uses just a standard Fenwick tree. For each index into A we remember 1 if the element that started there is still in the unsorted part of A, and 0 if it isn’t. Initially, all positions hold 1. We maintain an interval [lo, hi) of values that haven’t been sorted yet. In each phase, we determine the value we want (either lo or hi-1), use the inverse permutation to find its original location, and then use a sum query on the Fenwick tree to count the other elements that are still alive in the direction in which we want to move the current value. To conclude a phase, we update the Fenwick tree to mark that this particular starting position is no longer alive.
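The Fenwick-tree machinery used by the simulation can be sketched as follows (only the generic building blocks are shown; how the per-phase counts combine into the final answer depends on the exact problem statement, which is not restated here):

```python
class Fenwick:
    """Standard Fenwick (binary indexed) tree: point update, prefix sum."""
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def add(self, i, delta):           # 0-based index
        i += 1
        while i <= self.n:
            self.t[i] += delta
            i += i & -i

    def prefix(self, i):               # sum of elements with index in [0, i)
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

# Sketch of one phase: start with every position alive...
n = 5
alive = Fenwick(n)
for i in range(n):
    alive.add(i, 1)
# ...then, for the element at original position p, count the alive
# elements strictly to its left (if it moves left) or right (if right):
p = 3
cost_left = alive.prefix(p)
cost_right = alive.prefix(n) - alive.prefix(p + 1)
# once the element is placed, mark its starting position dead:
alive.add(p, -1)
```

Both the query and the update take O(log N), so the whole simulation runs in O(N log N).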
DivNim
The first thing we need to do is to solve this particular game of NIM. Let’s start with normal play. As we always interact only with one “pile”, we are playing a sum of games and thus the Sprague-Grundy theory can be applied.
What are the Grundy numbers of positions when we are playing the game with a single number?
If the current number has the prime factorization n = p1^a1 * p2^a2 * … * pk^ak, the new number must always have the form p1^b1 * p2^b2 * … * pk^bk where each bi is at most equal to the corresponding ai, and at least one bi is strictly smaller than the corresponding ai.
The key observation is that we don’t actually care about which primes we keep in the prime factorization; only their total number matters. E.g., if the current number has 14 primes in its factorization, we can change it into a number with 0, 1, 2, …, or 13 primes in its factorization, but not 14 or more. Hence, any number with X primes in its factorization is exactly isomorphic to a standard NIM pile with X stones.
Once we make this observation, we can conclude that the Grundy value of a game with a single number n is simply the number of (not necessarily distinct) primes in the factorization of n – that is, sum(ai).
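This claim is easy to sanity-check locally: compute the Grundy number of a single-number game directly by game search (a move replaces n by a proper divisor of n, which is exactly the move described above) and compare it against the count of prime factors with multiplicity. A sketch:

```python
from functools import lru_cache

def omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

@lru_cache(maxsize=None)
def grundy(n):
    """Grundy number by direct search: a move goes to any proper divisor."""
    moves = {grundy(d) for d in range(1, n) if n % d == 0}
    # mex: smallest non-negative integer not among the reachable values
    return next(g for g in range(len(moves) + 1) if g not in moves)
```

Checking `grundy(n) == omega(n)` for all small n confirms the isomorphism to a NIM pile with omega(n) stones.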
The second step is to compute Grundy values for everything in the range [lo, hi]. We can do this using a variation of the segmented sieve: first run a standard sieve to find all primes up to 10^6, and then use those to efficiently factor every number in our range, since hi <= 10^12 = (10^6)^2.
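A sketch of this segmented factor-counting (the function name and signature are mine; it assumes lo >= 1, and any cofactor left after dividing out all primes up to sqrt(hi) must itself be a single prime, which is what makes the hi <= (10^6)^2 bound work):

```python
from math import isqrt

def omega_in_range(lo, hi):
    """Returns a list whose entry i is the number of prime factors
    (with multiplicity) of lo + i, for every number in [lo, hi]."""
    limit = isqrt(hi)
    # ordinary sieve of Eratosthenes up to sqrt(hi)
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    primes = []
    for p in range(2, limit + 1):
        if sieve[p]:
            primes.append(p)
            for m in range(p * p, limit + 1, p):
                sieve[m] = False
    rem = list(range(lo, hi + 1))      # leftover cofactor of each number
    cnt = [0] * (hi - lo + 1)
    for p in primes:
        start = ((lo + p - 1) // p) * p  # first multiple of p in range
        for m in range(start, hi + 1, p):
            while rem[m - lo] % p == 0:
                rem[m - lo] //= p
                cnt[m - lo] += 1
    for i, r in enumerate(rem):
        if r > 1:                      # leftover is a prime > sqrt(hi)
            cnt[i] += 1
    return cnt
```

For instance, `omega_in_range(8, 12)` gives `[3, 2, 2, 1, 3]` for 8 = 2^3, 9 = 3^2, 10 = 2*5, 11, and 12 = 2^2*3.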
And now we are almost done: we can simply iterate over everything in the range and each time check whether we got a winning position.
As our game turned out to be completely isomorphic to a standard n-pile NIM, misère play is also easy to solve. If any pile has two or more stones, the winning positions are exactly the same as in normal play. The only thing that changes is that when each pile has at most one stone, the winning positions for the next player to move are now those in which the total number of stones is even, not odd.
(And when playing the game, the other thing that changes is the winning move when we first reach a position of this second type. E.g., from the winning position (1, 1, 1, 4), in normal NIM the winning move is to (1, 1, 1, 1), while in misère NIM it is to (1, 1, 1, 0). And vice versa for the opposite parity of 1s.)
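The resulting winner test for standard n-pile NIM (normal and misère) is tiny; for DivNim the piles would be the Grundy values omega(n) computed above. A sketch:

```python
from functools import reduce
from operator import xor

def first_player_wins(piles, misere=False):
    """Winner of n-pile NIM under optimal play.

    Normal play: the next player wins iff the xor of the piles is nonzero.
    Misère play: identical, except when every pile has at most one stone,
    where the next player wins iff the number of nonempty piles is even."""
    if misere and all(p <= 1 for p in piles):
        return sum(piles) % 2 == 0
    return reduce(xor, piles, 0) != 0
```

E.g., (1, 1, 1, 4) is winning in both variants, while (1, 1, 1) is winning only in normal play, matching the parity rule above.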