Single Round Match 736 Editorials
Div2 Easy: A0Paper
In this task, there are many possible solutions. One of them simply greedily combines as many pairs of A(i+1) papers into A(i) papers as possible, for i decreasing. If there is at least one A0 paper left after this process, the answer is clearly “Possible”, otherwise it is “Impossible”. This solution runs in O(n).

class A0Paper {
public:
    string canBuild(vector<int> a) {
        for (int i = a.size()-1; i > 0; --i) a[i-1] += a[i]/2;   // two A_i sheets combine into one A_(i-1)
        return a[0] ? "Possible" : "Impossible";
    }
};
Another possibility is to sum the areas of the papers in stock (multiplied by 2^20, so they are integers instead of doubles). If the total area is at least 1 square meter, it is always possible to build an A0. Keep in mind that 32-bit integers might overflow using this approach.
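A minimal sketch of this area-summing idea is shown below. It assumes, as the 2^20 factor above suggests, that the input never contains formats smaller than A20 (so a.size() <= 21), and it uses 64-bit arithmetic to avoid the overflow mentioned above; the helper name canBuildByArea is purely illustrative.

#include <string>
#include <vector>
using namespace std;

string canBuildByArea(const vector<int>& a) {
    long long area = 0;                       // total area in units of 2^-20 square meters
    for (int i = 0; i < (int)a.size(); ++i)
        area += (long long)a[i] << (20 - i);  // one A_i sheet has area 2^(20-i) units
    return area >= (1LL << 20) ? "Possible" : "Impossible";
}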
Div2 Medium: Reroll
The limits are small enough that we can try all possible subsets of dice to reroll. For each subset, the range of values the reroll can produce is the sum of the ranges of the individual dice: for a die that is left as is, the range is [a_i, a_i], and for a rerolled die it is [1, 6]. If target lies inside this range, that subset works, and among all working subsets we take the one that rerolls the fewest dice. This solution works in O(n * 2^n).
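As a concrete illustration of this subset enumeration, the sketch below (a hypothetical brute force, not the reference solution) tries every subset of dice to reroll and keeps the smallest one whose range contains the target:

#include <algorithm>
#include <vector>
using namespace std;

// The reachable sums of a fixed reroll subset form a contiguous range,
// so it suffices to test whether target falls inside it.
int minimumDiceBruteForce(int target, const vector<int>& rolls) {
    int n = rolls.size(), best = n;
    for (int mask = 0; mask < (1 << n); ++mask) {
        int lo = 0, hi = 0, cnt = 0;
        for (int i = 0; i < n; ++i) {
            if (mask & (1 << i)) { lo += 1; hi += 6; ++cnt; }        // this die is rerolled
            else                 { lo += rolls[i]; hi += rolls[i]; } // kept as rolled
        }
        if (lo <= target && target <= hi) best = min(best, cnt);
    }
    return best;   // target is assumed to be achievable, i.e. N <= target <= 6N
}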
There is in fact a greedy solution. Without loss of generality, let target be larger than the current sum. While this is the case, remove the die with the smallest roll and replace it with a 6. Once the current sum is at least target, we are done. Similarly, if target is smaller than the current sum, we replace the dice with the largest rolls with ones.

class Reroll {
public:
    int minimumDice(int target, vector<int> rolls) {
        int N = rolls.size();
        int total = 0;
        for (int roll : rolls) total += roll;
        sort(rolls.begin(), rolls.end());
        if (target == total) return 0;
        if (target > total) {
            for (int i = 0; i < N; ++i) {
                total += 6 - rolls[i];              // reroll the i-th smallest die to a 6
                if (total >= target) return i+1;
            }
        } else {
            for (int i = 0; i < N; ++i) {
                total += 1 - rolls[N-i-1];          // reroll the i-th largest die to a 1
                if (total <= target) return i+1;
            }
        }
        return N;                                   // never reached when N <= target <= 6N
    }
};
The complexity of this approach is O(n log n), or O(n) if counting sort is used.
Div2 Hard: MazeWithKeys
Let’s try all possible starting points, and for each of them we determine whether the target is reachable.
If there were no keys and doors, we could solve the problem with any graph traversal algorithm, such as DFS. How do keys and doors affect the problem? Let’s break down the cases:
We reach a key k, but we have not visited the door K yet: remember in a global variable (e.g., a bitset) that the key k has been found.
We reach the door K, but we have not found the key k yet: remember in a global variable (e.g., a bitset) that the door K has been found and do not traverse the neighbours of this vertex.
We reach the door K and the key k has already been found: continue the DFS as normal.
We reach a key k and the door K has already been found: continue the DFS from the door K as well.
This algorithm works because the edges are undirected: if a key k and the door K are both reachable, we can always select a path that first visits the key and then the door, thus unlocking the door. The steps above ensure that the DFS continues from a door vertex only after the appropriate key has been found.
If we reach the target point from a given start point, we know that such a level is doable, and we can count all the doable levels this way. Note that every boring level is also doable, so we just need to subtract the number of boring levels from the number of doable levels to get the answer.
To count the number of boring levels, we run DFS from the target point, ignoring all doors. All reachable cells represent starting points that would create a boring level.

class MazeWithKeys {
public:
    vector<string> a;
    int R, C;
    vector<int> doorR, doorC;
    vector<bool> hasKey, hasDoor;
    vector<vector<bool>> visited;

    // Returns true if the target '*' is reachable from (r,c) given the keys found so far.
    bool dfs(int r, int c) {
        if (r < 0 || r >= R || c < 0 || c >= C || visited[r][c]) return false;
        char w = a[r][c];
        if (w == '#') return false;
        if (w == '*') return true;
        if (w >= 'A' && w <= 'Z') {
            hasDoor[w-'A'] = true;                 // remember we have seen this door
            if (!hasKey[w-'A']) return false;      // locked: do not traverse further for now
        }
        visited[r][c] = true;
        if (w >= 'a' && w <= 'z') {
            if (!hasKey[w-'a'] && hasDoor[w-'a']) {
                hasKey[w-'a'] = true;
                if (dfs(doorR[w-'a'], doorC[w-'a'])) return true;   // resume from the previously blocked door
            } else {
                hasKey[w-'a'] = true;
            }
        }
        return dfs(r+1, c) || dfs(r-1, c) || dfs(r, c-1) || dfs(r, c+1);
    }

    // Counts the '.' cells reachable from (r,c) while treating doors as walls.
    int dfs2(int r, int c) {
        if (r < 0 || r >= R || c < 0 || c >= C || visited[r][c]) return 0;
        char w = a[r][c];
        if (w >= 'A' && w <= 'Z') return 0;
        if (w == '#') return 0;
        visited[r][c] = true;
        return (w=='.') + dfs2(r+1,c) + dfs2(r-1,c) + dfs2(r,c-1) + dfs2(r,c+1);
    }

    int startingPoints(vector<string> a) {
        this->a = a;
        R = a.size(); C = a[0].size();
        int ans = 0;
        doorR = doorC = vector<int>(26);
        visited = vector<vector<bool>>(R, vector<bool>(C, false));
        hasKey = hasDoor = vector<bool>(26, false);
        for (int r = 0; r < R; ++r) {
            for (int c = 0; c < C; ++c) {
                char w = a[r][c];
                if (w >= 'A' && w <= 'Z') {
                    doorR[w-'A'] = r;
                    doorC[w-'A'] = c;
                } else if (w == '*') {
                    ans -= dfs2(r, c);             // subtract the boring starting points
                }
            }
        }
        for (int r = 0; r < R; ++r) {
            for (int c = 0; c < C; ++c) {
                if (a[r][c] != '.') continue;
                for (int i = 0; i < 26; ++i) hasKey[i] = hasDoor[i] = false;
                for (int i = 0; i < R; ++i)
                    for (int j = 0; j < C; ++j) visited[i][j] = false;
                if (dfs(r, c)) ans++;
            }
        }
        return ans;
    }
};
If we store the locations of all doors in an array so that we don’t have to look for them each time we find a key, simulating one starting point takes O(RC) time, and there are at most O(RC) of them. Hence the total complexity is O(R^2 C^2).
Div1 Easy: DigitRotation
Evaluation of a large integer in modular arithmetic can be done using Horner’s scheme in O(N). We could try all possible triples of indices, generate the respective rotated number and evaluate each of them from scratch, but that would be too slow (O(N^4) overall). Notice that each rotated number differs from the original number in at most 3 positions. We can thus generate all triples of indices and calculate their difference from the original number. To do that in O(1), we simply need to precompute all powers of 10.

constexpr int MOD = 998244353;

class DigitRotation {
public:
    int sumRotations(string s) {
        int N = s.size();
        vector<Field<MOD>> W(N, 1);                        // W[i] = 10^(N-1-i); Field is a modular-arithmetic wrapper
        for (int i = N-2; i >= 0; --i) W[i] = 10 * W[i+1];
        Field<MOD> sum = 0, ans = 0;
        for (int i = 0; i < N; ++i) sum = 10 * sum + (s[i]-'0');   // Horner's scheme
        for (int a = 0; a < N; ++a) {
            for (int b = a+1; b < N; ++b) {
                for (int c = b+1; c < N; ++c) {
                    if (a == 0 && s[c] == '0') continue;   // the rotation would create a leading zero
                    int d = s[a]-'0', e = s[b]-'0', f = s[c]-'0';
                    auto cur = (f-d)*W[a] + (d-e)*W[b] + (e-f)*W[c];
                    ans += sum + cur;
                }
            }
        }
        return ans;
    }
};
The time complexity of the above is O(N^3).
Div1 Medium: MinDegreeSubgraph
Call a graph locally k-dense if it contains a subgraph in which every vertex has degree at least k. The smallest locally k-dense graph (in terms of the number of edges) is a clique on k+1 vertices. Thus, if m < k(k+1)/2, the answer is clearly NONE.
By induction on n we can prove that all graphs having m >= t(n,k) := k(k+1)/2 + (n-(k+1))(k-1) edges are locally k-dense. The base case n = k+1 is trivial, as such a graph is the complete graph on k+1 vertices, where every vertex has degree k. Now consider a graph G with more than k+1 vertices and let v be a vertex of minimum degree. If deg(v) >= k, we are done. Otherwise, remove v from G to form G’. G’ has n’ = n - 1 vertices and m’ >= m - (k-1) >= k(k+1)/2 + (n’-(k+1))(k-1) = t(n’,k) edges. By induction, G’ is locally k-dense.
On the other hand, for m < t(n,k) we can construct a graph G with n vertices, m edges and no subgraph of minimum degree at least k. Clearly it suffices to show that there is such a graph with t(n,k) - 1 edges, as graphs with fewer edges can be obtained simply by removing edges. Start with a (k-1)-clique on vertices 1,...,k-1. Connect each vertex with index k,...,n to each of the vertices 1,...,k-1. There are exactly t(n,k)-1 edges in this graph. Additionally, only the k-1 clique vertices have degree at least k, so any subgraph of minimum degree at least k could only use those k-1 vertices, which is impossible; hence G is not locally k-dense.
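For concreteness, a small sketch that generates the edge list of this extremal construction is shown below (1-based vertex labels; the helper extremalGraph is purely illustrative):

#include <utility>
#include <vector>
using namespace std;

// A (k-1)-clique on vertices 1..k-1, plus an edge from every vertex k..n to
// each of 1..k-1; this gives exactly t(n,k)-1 edges and contains no subgraph
// of minimum degree k.
vector<pair<int,int>> extremalGraph(int n, int k) {
    vector<pair<int,int>> edges;
    for (int u = 1; u < k; ++u)
        for (int v = u + 1; v < k; ++v)
            edges.push_back({u, v});      // clique edges
    for (int u = k; u <= n; ++u)
        for (int v = 1; v < k; ++v)
            edges.push_back({u, v});      // edges from the remaining vertices into the clique
    return edges;
}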
This means that if k(k+1)/2 <= m < t(n,k), the answer is SOME, and if m >= t(n,k), the answer is ALL.

using ll = long long;

class MinDegreeSubgraph {
public:
    string exists(int n, ll m, int k) {
        if (m < ll(k) * (k+1) / 2) return "NONE";
        else if (m < ll(k) * (k+1) / 2 + ll(n - (k+1)) * (k-1)) return "SOME";
        else return "ALL";
    }
};
The time complexity is O(1).
Div1 Hard: Subpolygon
It is evident that as P is convex, each choice of W yields a different convex polygon Q. We need to sum the areas across all such polygons.
Recall that one can compute the area of the polygon as half of the sum of X[i]*Y[i+1] - X[i+1]*Y[i], where (X[0],Y[0]), (X[1],Y[1]), ... , (X[M],Y[M]) = (X[0],Y[0]) are the coordinates of vertices in counter-clockwise order.
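For reference, a direct implementation of that formula might look like the following sketch (exact integer coordinates, counter-clockwise order assumed; it returns twice the area so everything stays integral):

#include <vector>
using namespace std;

// Shoelace formula: returns 2 * (signed area) of a simple polygon given in
// counter-clockwise order; the result would be negative for clockwise order.
long long twiceArea(const vector<long long>& X, const vector<long long>& Y) {
    int m = X.size();
    long long s = 0;
    for (int i = 0; i < m; ++i) {
        int j = (i + 1) % m;              // next vertex, wrapping around to the first
        s += X[i] * Y[j] - X[j] * Y[i];
    }
    return s;
}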
Consider an edge between P[i] and P[i+j] separately. If we knew the number of subpolygons in which it appears, we could calculate its contribution towards the sum of areas. In order for this edge to appear in Q, none of the vertices of the form P[i+k] (0 < k < j) may be present in Q, and at least one of the vertices of form P[i+k] (j < k < N) has to be present in Q (so that Q does not degenerate to a line segment). There are thus 2^{N-j-1}-1 such polygons.
If we precompute all powers of 2, we can compute the sum of areas in O(n^2). To make this faster, note that the total area is, up to some multiplicative factors, a sum of two convolutions and can thus be computed with FFT.
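Before the FFT speed-up, the O(n^2) version might look like the sketch below. It assumes the M = 4n vertex coordinates of P have already been generated (as in the reference code further down) and reduced modulo the prime, and that pow2[t] = 2^t mod p has been precomputed; it only illustrates the weighted shoelace sum and is not the reference solution.

#include <vector>
using namespace std;

const long long MOD = 998244353;

// Each edge P[i] -> P[i+j] contributes its cross-product term, weighted by the
// number of subpolygons it appears in, namely 2^(M-j-1) - 1.
long long sumOfAreasSlow(const vector<long long>& X, const vector<long long>& Y,
                         const vector<long long>& pow2) {
    int M = X.size();
    long long twiceSum = 0;
    for (int i = 0; i < M; ++i) {
        for (int j = 1; j <= M - 2; ++j) {
            int t = (i + j) % M;
            long long cross = ((X[i] * Y[t] - X[t] * Y[i]) % MOD + MOD) % MOD;
            long long ways  = (pow2[M - j - 1] - 1 + MOD) % MOD;
            twiceSum = (twiceSum + cross * ways) % MOD;
        }
    }
    return twiceSum * ((MOD + 1) / 2) % MOD;   // divide by two via the modular inverse of 2
}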
In the sample source below, the FFT implementation is omitted for brevity. Also note that the template class Field is used for modular arithmetic (i.e. performing all additions, subtractions and multiplications modulo the given prime, and using modular exponentiation for division).

class Subpolygon {
public:
    int sumOfAreas(int n) {
        vector<Field<998244353>> x(8*n), y(8*n), revX(8*n), revY(8*n);
        // generate polygon P
        for (int i = 0; i < n; ++i) {
            x[i+1] = x[i] + (n-i);
            y[i+1] = y[i] + i;
        }
        for (int i = 0; i < n; ++i) {
            x[n+i+1] = x[n+i] - i;
            y[n+i+1] = y[n+i] + (n-i);
        }
        for (int i = 0; i < n; ++i) {
            x[2*n+i+1] = x[2*n+i] - (n-i);
            y[2*n+i+1] = y[2*n+i] - i;
        }
        for (int i = 0; i < n; ++i) {
            x[3*n+i+1] = x[3*n+i] + i;
            y[3*n+i+1] = y[3*n+i] - (n-i);
        }
        // concatenate x (resp. y) with itself, and let revX, revY be the reverse of x, y
        for (int i = 0; i < 4*n; ++i) {
            x[4*n+i] = revX[4*n-1-i] = x[i];
            y[4*n+i] = revY[4*n-1-i] = y[i];
        }
        // convolve x with revY, and y with revX
        fft(x); fft(y); fft(revX); fft(revY);
        for (int i = 0; i < (int)x.size(); ++i) {
            x[i] *= revY[i];
            y[i] *= revX[i];
        }
        fftInverse(x); fftInverse(y);
        // calculate the answer
        Field<998244353> ans = 0, p = 1;
        for (int i = 8*n-3; i >= 4*n; --i) {
            ans += p*(y[i]-x[i]);
            p = 2*p+1;
        }
        return ans/2;
    }
};
The total complexity is O(n log n).