Key Information

The challenge is finished.

Challenge Overview

Our client has around 20 testing scripts/files, each of which contains around 100 testing cases. They use this test suite to check whether a set of external APIs is working as expected. The problem they face is that they have no way to define and execute targeted test plans or to chain tests together; instead they have to run the entire test suite even if they only want to test a few of the features.
To solve their problem, in this challenge you need to simulate their test suite and create a Python-based tool that takes in a JSON ‘test plan’ and performs the tests according to it.

About PyTest:
The client's scripts use PyTest, a prominent testing framework for Python, to implement the testing cases. Usage of PyTest is expected for this challenge, but since only a handful of its features will likely be required, prior experience with it is not essential. For reference, consider this talk on YouTube, which provides a nice coverage of many of the important features available in PyTest.

Details:
Although each individual test method in the client’s current solution does its job of testing an API endpoint as expected, the client would like this challenge to deliver an automated solution to the following problem:

The problem - Right now, if the client wants to use a new ‘test plan’, i.e. if they want to run only a few tests, or run some meaningful sets of tests in stages, it is difficult to do so without making changes directly in the scripts or creating a custom testing script for each such ‘test plan’.

The expected solution - To solve the stated problem, the following steps need to be completed:

  1. Create simulated/dummy testing scripts
    Create a simulated/dummy test suite of around 5 scripts, each containing around 10 testing cases, with each testing case making anywhere between 1 and 5 calls to some dummy API service. Each of the 5 scripts can be thought of as belonging to a particular ‘feature’ of the API service being tested. A minimal sketch of one such script is included after this list.
    In the client’s current solution, some of the scripts share the same ‘setup’ and ‘teardown’ logic (the test setup and cleanup code, respectively), and there are some common methods that are executed for every test, across all scripts. It would be useful, although not critical, to simulate this as well.

    The scripts should be named following some convention, for example Test11, Test12, Test13, …, Test21, Test22, Test23, …, where the first digit represents the major feature category and the second digit represents the minor feature category. You can create a better naming convention as long as it helps promote your design. This will mimic the actual production testing scripts.
     
  2. Think of a JSON schema for test-plans
    Think of and create a generalized JSON schema for ‘test plans’, which the client can use to specify their test plans. A very important component of any test plan is ‘staging’: the tester can specify several stages of tests, each stage executes only if the previous stage has completed successfully, and the testing ends if any stage fails. An example plan in one possible schema is sketched after this list.
    Here the client should be able to specify the subset of tests that make up a particular stage. You are free to suggest additional features/properties of this test-plan schema that can make creating staged plans more efficient.
     
  3. Create the automation script
    Create a command-line tool/Python script that takes the test-plan JSON file as input, executes the tests in the scripts according to the plan, and saves the final results of the test run. A sketch of such a runner is shown after this list.
     
  4. Log & save the results
    After the execution of each test method, its results along with all other relevant information should be logged. The logging should be done via a single centralized logging method, so that it can be easily changed and extended; a possible shape for this is sketched after this list.
    For future extensibility, pass all the details of the testing method, along with the details of the test request and its corresponding response, to this logging method, even if they are not directly used in the final reports. After the tests end, all the collected logs along with all the available metadata should be saved into a JSON file. In addition to acting as a raw log, this JSON file will be used in the future to extract relevant information and create more readable and relevant reports.
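
The sketches below illustrate one possible way to approach steps 1-4; all file names, fixture names, field names and endpoints in them are illustrative assumptions, not requirements.

First, for step 1, a minimal dummy testing script using a module-scoped PyTest fixture to simulate shared setup/teardown logic, with httpbin.org standing in as the dummy API service:

# tests/test_11_auth.py  (hypothetical name following the Test11-style convention)
import pytest
import requests

BASE_URL = "https://httpbin.org"  # stand-in for the dummy API service


@pytest.fixture(scope="module")
def api_session():
    """Shared 'setup' and 'teardown' for every test case in this script."""
    session = requests.Session()      # setup
    yield session
    session.close()                   # teardown


def test_11_01_status(api_session):
    # One call to the dummy API; a real case would make 1 to 5 such calls.
    response = api_session.get(f"{BASE_URL}/status/200")
    assert response.status_code == 200


def test_11_02_echo(api_session):
    payload = {"user": "demo"}
    response = api_session.post(f"{BASE_URL}/post", json=payload)
    assert response.status_code == 200
    assert response.json()["json"] == payload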
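
For step 2, an example test plan in one possible schema (the field names "plan_name", "stages" and "tests" are assumptions; you are expected to design your own schema):

{
  "plan_name": "smoke-then-feature-1",
  "stages": [
    {
      "name": "smoke",
      "tests": ["tests/test_11_auth.py::test_11_01_status"]
    },
    {
      "name": "feature-1",
      "tests": ["tests/test_11_auth.py", "tests/test_12_profile.py"]
    }
  ]
}

Each entry under "tests" is simply a PyTest file path or node id, so a stage can reference whole scripts or individual test cases.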
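
For step 3, a sketch of the command-line runner, assuming the plan format shown above. PyTest is driven programmatically via pytest.main(), which accepts file paths and node ids as arguments and returns a non-zero exit code on failure:

# run_plan.py  (hypothetical entry point: python run_plan.py plan.json)
import json
import sys

import pytest


def run_plan(plan_path):
    with open(plan_path) as fh:
        plan = json.load(fh)

    for stage in plan["stages"]:
        print(f"Running stage: {stage['name']}")
        exit_code = pytest.main(stage["tests"])
        if exit_code != 0:
            # A failed stage stops the whole plan, as required by the staging rules.
            print(f"Stage '{stage['name']}' failed; stopping the plan.")
            return int(exit_code)
    return 0


if __name__ == "__main__":
    sys.exit(run_plan(sys.argv[1]))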
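
For step 4, one way to centralize logging is a conftest.py that funnels every result through a single log_result() function via PyTest's reporting hook and dumps the collected records to JSON (the file name and the fields collected are assumptions):

# conftest.py  (picked up automatically by PyTest)
import json
import time

_collected = []


def log_result(record):
    """The single centralized logging method; extend it to capture request/response details."""
    _collected.append(record)


def pytest_runtest_logreport(report):
    # One report is produced per phase (setup/call/teardown); log the 'call' phase here.
    if report.when == "call":
        log_result({
            "test": report.nodeid,
            "outcome": report.outcome,
            "duration_sec": report.duration,
            "logged_at": time.time(),
        })


def pytest_sessionfinish(session, exitstatus):
    with open("results.json", "w") as fh:
        json.dump(_collected, fh, indent=2)

Note that the runner above starts one PyTest session per stage, so a complete solution would write one log file per stage or merge them into the final JSON report.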
     

Future Challenges - In future challenges, this solution will be integrated into a Web-based UI, where the tester will be able to define and chain tests through an easy-to-use interface. Please keep this future integration roadmap in mind while making design decisions for this challenge.

Open Source Tools - You are free to use any free open-source tools to solve any of the requirements above, partially or completely.

Please feel free to use the forum or the contact manager option for further clarifications.

Final Submission Guidelines

Submission Deliverables:
A zip file containing:

  • Your code
  • A quick video of your solution in action. You could just screen record while you are executing your solution. You are NOT required to narrate or write captions, as long as what you are trying to do is apparent. Note - If you use YouTube, please keep the privacy set to ‘unlisted’.
  • Deployment guide containing clear details of how to set up, run, and verify your submission. Feel free to add any details in this document or in the video that might help the reviewers/client better understand your implementation and how it fulfills the challenge requirements.

 

ELIGIBLE EVENTS:

2018 Topcoder(R) Open

REVIEW STYLE:

Final Review: Community Review Board

Approval: User Sign-Off


ID: 30060643