By Vedensky

At some point, every coder experiences the intense disappointment that comes when you are told your carefully written solution has "Failed System Test." How can you keep from repeating such a mishap? The answer is quite simple: learn the art of testing. Testing is a cornerstone of every aspect of software development. TopCoder SRMs, though, have several unique characteristics that require a new approach.
Of course, the hardest part of testing has nothing to do with the TopCoder Arena -- it all comes down to attitude. Many programmers fundamentally misunderstand the term "testing." Defining testing as a process meant to show "that the program has no errors" or "that the program functions correctly" is wrong. Testing should begin with the assumption that the program does contain errors -- which, in fact, is rather likely -- and that we need to find them. Testing is executing the program with the goal of finding as many errors as possible. A correct understanding of the purpose of testing is vitally important. Someone who assumes his code is perfect will unconsciously choose tests that don't uncover any faults, or won't test at all. A coder who expects to find mistakes, though, will construct tests that are likely to fail. The deliberate, single-minded search for faults is an arduous task -- it's difficult to try to tear down what you just built -- but you must learn to do so if you want to succeed.

Managing time

Each coder should approach the SRM with a strategy for managing time -- a model for how to divide each problem among understanding the statement, coding, and testing. For successful project development in the real world, experts recommend that programmers spend one third of the total project time thinking and planning, one half on testing (including unit testing, system testing and beta testing), and only one sixth on the actual coding itself. In TopCoder competitions such an allocation is unreasonable, but it's still useful to have a plan for managing your time in the Arena. Time spent on testing should depend on the problem's complexity: the more complicated the problem, the higher the probability of a mistake or pitfall -- and the more you should test your solution.
Let's consider an example: you have coded and submitted both the easy and the medium problems in your division, and there are still 10 or 15 minutes left. You have several options:
Too often competitors rely on the problem writer and don't look beyond the "Examples" section of the problem statement. Of course, there are many problems, usually in Division 2, for which the "Examples" describe all possible cases exhaustively, and such testing is enough. But most problems in Division 1 contain pitfalls that aren't obvious at first glance and aren't covered in the "Examples." It's up to the coder to take a creative approach to problem-solving, to devise tests that differ qualitatively from those in the "Examples," and to watch out for problem statements that intentionally mislead competitors. Let's look at several problems that should be tested more inventively than the "Examples" section alone allows.

SnakeTurbo (SRM 273, Div1 Medium)

It might seem that changing the moving direction more than once is senseless. With a little thought, though, it becomes clear that you should write a more complicated DP solution that takes into account cases involving multiple direction changes, such as (startLocation = 100, endLocation = 1000000, orbs = {0, 99, 110}).

Smooshing (SRM 284, Div1 Medium)

Reserved words can contain capital letters without starting with them. Most of the incorrect solutions would have failed the test "qWerty," treating "Werty" as a variable name and trying to shorten it.

MooresLaw (SRM 287, Div1 Medium)

If you solve this problem using an exact formula, you should handle years = 1 and years = 2 separately, because for them the optimal moment to buy the computer and start computations is negative, and you can't start earlier than now. The "Examples" section has a minimum value of 3.

Think the tests through!

If you are testing the algorithm itself and not its processing time, you should know the desired result for a given input in order to compare it with the actual return value.
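Why a formula-based optimum can go negative is easy to see in a toy model. The sketch below is illustrative only -- the doubling period and the workload numbers are invented, not taken from the actual SRM 287 statement. If computing speed doubles every PERIOD years, then waiting t years before starting W years' worth of work costs t + W / 2^(t/PERIOD) in total, and setting the derivative to zero gives the unconstrained optimum t* = PERIOD * log2(W * ln 2 / PERIOD), which is negative for small workloads and must be clamped to "start now":

```java
public class MooreToy {
    static final double PERIOD = 1.0; // speed doubles every PERIOD years (invented value)

    // total elapsed time if we wait t years before starting W years' worth of work
    static double total(double W, double t) {
        return t + W / Math.pow(2.0, t / PERIOD);
    }

    // unconstrained optimum from d/dt total(W, t) = 0:
    //   t* = PERIOD * log2(W * ln 2 / PERIOD)
    static double optimalStart(double W) {
        double t = PERIOD * (Math.log(W * Math.log(2.0) / PERIOD) / Math.log(2.0));
        return Math.max(0.0, t); // clamp: we cannot start in the past
    }

    public static void main(String[] args) {
        // A large job is worth waiting for; a tiny job drives the raw
        // formula negative, so the clamped answer is "start immediately."
        System.out.println(optimalStart(100.0) > 0.0);
        System.out.println(optimalStart(0.5) == 0.0);
    }
}
```

The same lesson applies to the real problem: whenever an analytic optimum can fall outside the feasible range, the boundary cases deserve their own tests.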
If the desired output is hard to calculate, you should at least estimate its accuracy, and not be satisfied with its mere presence. Never just run the test, glance at the output, mutter "OK, credible," and continue with the next test! That's why the algorithm itself should be tested using either the "Examples" section inputs or small inputs that are easy to calculate mentally or on a scrap of paper.

A sensible test and how to find it

So you've come to believe that testing your solution is as essential as the coding itself. But now what? How do you go about selecting a good test -- in other words, an input that is likely to reveal a flaw in your solution? After all, you can't run the program on every single input in the problem's domain. So how do you identify inputs that are small and easy to calculate, but that are also effective at uncovering errors? The worst of all possible methods is so-called "stochastic testing", i.e. running the program on randomly chosen inputs. By their very randomness, these tests are hard to calculate and unlikely to have useful properties. The following are some useful principles for sensibly selecting test cases:
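One discipline along these lines is to pair the real solution with a slow but obviously correct brute force and compare them on every tiny input, so that each test's desired result is always known. A minimal sketch, using the classic maximum-subarray problem purely as a stand-in example:

```java
public class SmallInputCheck {
    // solution under test: maximum subarray sum via Kadane's algorithm
    static int fast(int[] a) {
        int best = a[0], cur = a[0];
        for (int i = 1; i < a.length; i++) {
            cur = Math.max(a[i], cur + a[i]);
            best = Math.max(best, cur);
        }
        return best;
    }

    // trusted reference: try every subarray; only feasible for tiny inputs,
    // but its correctness is obvious, so it supplies the desired result
    static int brute(int[] a) {
        int best = Integer.MIN_VALUE;
        for (int i = 0; i < a.length; i++) {
            int sum = 0;
            for (int j = i; j < a.length; j++) {
                sum += a[j];
                best = Math.max(best, sum);
            }
        }
        return best;
    }

    // fill position pos with every value in vals, recursing to enumerate
    // all arrays of the given length, and compare the two solutions
    static void check(int[] a, int pos, int[] vals) {
        if (pos == a.length) {
            if (fast(a) != brute(a))
                throw new AssertionError(java.util.Arrays.toString(a));
            return;
        }
        for (int v : vals) {
            a[pos] = v;
            check(a, pos + 1, vals);
        }
    }

    public static void main(String[] args) {
        int[] vals = {-2, -1, 0, 1, 2};
        for (int n = 1; n <= 4; n++) {
            check(new int[n], 0, vals); // 5 + 25 + 125 + 625 cases in all
        }
        System.out.println("all small inputs agree");
    }
}
```

Unlike purely stochastic testing, every case here comes with a known answer, and the exhaustive enumeration over a tiny value set catches sign, boundary and off-by-one mistakes that random large inputs often miss.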
Testing your solution can have one more very useful effect. If you have found a special or boundary case that is not covered in the "Examples," and you believe it's rather unobvious, you can make use of it in the Challenge Phase. A five-minute intermission can be just enough time to test your submitted solution one last time and invent some intricate test. Even if you find your mistake only now, there's still time to redeem your position: you need only devise a test that reveals this mistake and use it as a challenge case. If you succeed, you could score more points for that challenge case than you would have achieved for the correct submission.

Conclusion

No doubt there are mistakes that can only be found through the system tests. In the FallingCoconuts (SRM 273, Div1 Easy, Div2 Medium) problem, it is difficult to tell whether the character that stands for a coconut is the numeral '0' (zero) or the capital letter 'O' while using the default font in the Arena. And if you miswrite the symbol, your otherwise correct solution will fail. In general, though, taking the right approach to testing -- and assuming that your own code is wrong -- will earn you many more points and help increase your rating.
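For glyph confusions like '0' versus 'O', one simple safeguard is to print the character codes before hardcoding a symbol into your solution; a short sketch:

```java
public class CoconutChar {
    public static void main(String[] args) {
        // Nearly identical glyphs in some fonts, but distinct character codes:
        System.out.println((int) '0'); // prints 48
        System.out.println((int) 'O'); // prints 79
    }
}
```

A one-line check like this takes seconds and removes any doubt about which character the statement actually uses.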