
Problem Of The Week: Adhoc Math And Binary Search



This week, let's analyze a problem that was featured in SRM 699, a contest sponsored by Cisco.


This problem statement appeared in the Division 2 problem set, as the level 2 challenge.


The summary of the problem is as follows -


We start with a sum of zero. Now let's choose a number N, after which we repeatedly perform the following algorithm, until N has no digits -

  1. Add N to our current sum.

  2. Remove the last digit from N.


Now, the problem gives us the final sum as input and asks us to output the initial N.


For example, if the input were 564, the output must be 509, since 509 + 50 + 5 = 564.
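

To make the process concrete, here is a minimal sketch of the algorithm in Python; the helper name apply_algorithm is my own and not part of the problem statement.


def apply_algorithm(n):
    # Start with a sum of zero, then repeatedly add n and drop its last digit.
    total = 0
    while n > 0:
        total += n
        n //= 10   # remove the last digit
    return total

# apply_algorithm(509) == 509 + 50 + 5 == 564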


Now let's think about how we can solve this problem.


Let us call our input number Y.


Let us call our final answer X.


We must solve for X such that performing the algorithm specified in the problem statement on X yields Y.


Obviously, the naive solution would be to iterate over all possible values of X, from 1 to a reasonable upper bound (which we'll define soon), until one of them returns Y after the algorithm is performed on it.


This raises two questions -

  1. What would be the upper bound of X?

  2. What would the time complexity of the naive method be?


Here are the answers -

  1. We can take the upper bound of X to be Y itself.


This is because performing the algorithm on any candidate X returns X + X/10 + X/100 + X/1000 + ... (with integer division at every step).


Obviously this result is greater than or equal to X.


So if the result is to equal Y, then X can be at most Y, and we now have an upper bound for X.

  2. As we know our upper bound, the time complexity of this naive approach would be O(Y log Y), since there are up to Y candidate values of X and each one has a logarithmic number of digits to sum (a brute-force sketch follows below).
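

As a rough sketch of this brute force, reusing the apply_algorithm helper from above (the name solve_naive is also just a placeholder of my own):


def solve_naive(y):
    # Try every candidate X from 1 up to the upper bound Y.
    for x in range(1, y + 1):
        if apply_algorithm(x) == y:
            return x
    return -1   # no starting number produces y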


Now that we've answered those two questions, let's make an observation.


The algorithm defined in the problem statement is strictly increasing for all values of X.


Meaning that if X1 < X2, then Y1 < Y2, where Y1 and Y2 are the return values of running our algorithm on X1 and X2 respectively.


How can we prove this?


Let us consider a 3-digit number ABC (where A, B, C are the digits of the number).


Our algorithm will return ABC + AB + A.


Now let us consider another 3-digit number DEF (with digits D, E, F) that is greater than ABC.


If D > A, then the expression -


DEF + DE + D > ABC + AB + A


evaluates to true, since D > A makes every term on the left hand side strictly greater than the corresponding term on the right hand side: DEF > ABC, DE > AB, and D > A.


Now let D = A, and let E > B.


A similar proof follows -


DEF + DE + D > ABC + AB + A


Let us substitute D = A


AEF + AE + A > ABC + AB + A


Therefore, the inequality reduces to EF + E > BC + B, which always holds when E > B, since then EF > BC and, of course, E > B.


And finally, let us consider the case where D = A, E = B, but F > C.


Here too,


DEF + DE + D > ABC + AB + A


since, if we substitute and remove equal terms, we get


F > C


which as per our assumption is true, and hence we have proved that our algorithm is strictly increasing for all its inputs.
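

As a quick sanity check of this claim (my own addition, reusing the apply_algorithm helper sketched earlier), we can also verify the strictly increasing behaviour empirically over a small range:


# Each input should map to a strictly smaller sum than its successor.
assert all(apply_algorithm(x) < apply_algorithm(x + 1) for x in range(1, 100000))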


Because of the monotonic nature of our algorithm, we can apply binary search.


The pseudocode for this would be as follows -


def solve(Y):
    lo, hi = 0, Y
    while lo <= hi:
        mid = (lo + hi) // 2
        # Apply the algorithm to mid: mid + mid // 10 + mid // 100 + ...
        current_sum = 0
        n = mid
        while n > 0:
            current_sum += n
            n //= 10
        if current_sum == Y:
            return mid        # mid is the initial N we are looking for
        elif current_sum < Y:
            lo = mid + 1      # mid is too small, search the upper half
        else:
            hi = mid - 1      # mid is too large, search the lower half
    return -1                 # no starting number produces Y
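

As a quick check against the earlier example (and against an input that no starting number can produce):


print(solve(564))   # prints 509, since 509 + 50 + 5 = 564
print(solve(10))    # prints -1, since 9 sums to 9 while 10 sums to 11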


The power of binary search is evident here: we now perform only O(log Y) search steps, each of which does a small (logarithmic) amount of work to evaluate the sum, a drastic improvement over our previous O(Y log Y) approach.


This was also the approach used by youaremysky [https://www.topcoder.com/members/youaremysky], who submitted the fastest solution (in C++); the submission is available here - https://community.topcoder.com/stat?c=problem_solution&cr=40181036&rd=16803&pm=14397