Challenge Overview
Introduction
This is the Hard (750-point) problem of RDM-4. Please note: the files linked in the spec will only be accessible after you register for the challenge.
Topcoder Bug Hunt Scoring Application Development - Hard Problem Detail
Hard Part
In the previous parts of this challenge, you read and displayed the contents of a CSV file using required filters, such as displaying Bug Tickets ordered by Author Username and displaying the count of Bug Tickets per Author Username ordered by the highest number of Bug Tickets.
Now, in this Hard part of the challenge, you have to parse the CSV file while keeping track of each Author Username's Bug Tickets and their "Label" field, which contains two or three comma-separated values: an integer between 1 and 5, a Bug Priority Code (P1, P2, P3, P4, or P5), and "Valid" or "Invalid". There is also another type of challenge where, instead of the Bug Priority Code ("P1" to "P5"), the label carries the Bug Type: "Functional" or "UI/UX". Included in the sample submission and tester are two different Scoring Sheet files, which determine how many points to assign for each type. Your solution has to parse the Scoring Sheet and assign points to the given Bug Ticket files based on the chosen type.
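As a rough sketch of what such label parsing might look like (the function name and the returned field names are illustrative, not part of the spec):

```js
// Illustrative sketch only: split a "Label" cell into its comma-separated
// parts. The field names and the exact ordering are assumptions based on
// the description above, not taken from the actual ticket files.
function parseLabel(label) {
  const parts = label.split(",").map((p) => p.trim());
  if (parts.length === 3) {
    // First challenge type, e.g. "4,P3,Valid":
    // browser count, Bug Priority Code, validity.
    return { browsers: parseInt(parts[0], 10), code: parts[1], validity: parts[2] };
  }
  // Second challenge type, e.g. "Valid,Functional": validity, Bug Type.
  return { validity: parts[0], code: parts[1] };
}
```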
For scoring purposes, you only have to use "Label" fields that contain the text "Valid". If the Label field contains "Invalid", the ticket must not be used in the calculation of points. Likewise, if the Label field contains the text "Duplicate", the ticket is excluded from the calculation of the Final Score. Please note that for the scoring arrangements and point distribution you have to use two different CSV files, namely Scoringsheet1.csv and Scoringsheet2.csv, and save the final result file for each scoring file separately. This is required because the format and scoring of Bug Hunt challenges vary: for the CSV file Ticketfile-1.csv you have to select Scoringsheet1.csv, and for Ticketfile-2.csv you have to choose Scoringsheet2.csv. The following point no. 1 applies only to input CSV files whose "Label" field contains comma-separated values that include an integer.
1. An integer value between 1 and 5 indicates the number of web browsers in which a bug has been found. This value is only used in the calculation of points for the Author Usernames who score the 1st and 2nd highest points, and this step must be performed after the points of all Author Usernames have been evaluated. For example, if a 1st- or 2nd-place Author Username found a bug of P3 priority in 4 web browsers through a valid Bug Ticket, then 6 points are counted 4 times, i.e. 6×4 = 24; if they found a bug of P4 priority in 2 web browsers, then for that valid ticket 4 points are counted 2 times, i.e. 4×2 = 8 (see the sketch after this list).
2. Priority Code points are given in the Scoring Sheet file, for example: P1=10, P2=8, P3=6, P4=4, P5=2.
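A minimal sketch of this two-pass scoring, assuming tickets have already been parsed into objects with author, code, browsers, and validity fields (names are illustrative) and that points is a code-to-points map read from the scoring sheet:

```js
// Sketch only: base scoring plus the browser-count multiplier for the two
// top-scoring authors. Tickets whose validity is not exactly "Valid"
// (including "Invalid" and "Duplicate") are skipped; ties are not handled.
function scoreAuthors(tickets, points) {
  const valid = tickets.filter((t) => t.validity === "Valid");
  const totals = {};
  // First pass: base points for every valid ticket, ignoring browser counts.
  for (const t of valid) {
    totals[t.author] = (totals[t.author] || 0) + points[t.code];
  }
  // Second pass: recompute the 1st- and 2nd-place authors, multiplying each
  // valid ticket's points by its browser count (e.g. P3 in 4 browsers = 6x4).
  const top2 = Object.entries(totals)
    .sort((a, b) => b[1] - a[1])
    .slice(0, 2)
    .map(([author]) => author);
  for (const author of top2) {
    totals[author] = valid
      .filter((t) => t.author === author)
      .reduce((sum, t) => sum + points[t.code] * t.browsers, 0);
  }
  return totals;
}
```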
Technology Stack
Web Application: PHP, JavaScript, C#/ASP.NET, Python, HTML, CSS. Any technology can be used; testing is only performed against the front-end in this particular challenge.
CSV File Information
The desired output should be as follows. It must be displayed on the page, and you also have to save the results into a CSV file programmatically.
Errata: while the picture shows column names like "P1 (10)", your solution must use the exact same names for these columns as given in the Scoringsheet .csv files.
For Ticketsfile-2.csv and Scoringsheet-2.csv you have to check the Label field, which contains only two comma-separated values: 1. "Valid" or "Invalid" and 2. "Functional" or "UI/UX". For this case you have to follow Scoringsheet-2.csv for the points calculation and follow the format of result display below.
Errata: while the picture shows column names like "Functional Bug", your solution must use the exact same names for these columns as given in the Scoringsheet .csv files.
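For this second challenge type the per-ticket lookup is simpler; here is a hedged sketch, assuming the same parsed-ticket shape as above and a points map keyed by Bug Type read from the scoring sheet (the example point values are made up):

```js
// Sketch only: score the two-value label format, e.g. "Valid,Functional".
// `points` might look like { "Functional": 5, "UI/UX": 3 } (illustrative
// values, not from the actual scoring sheet).
function scoreType2(tickets, points) {
  const totals = {};
  for (const t of tickets) {
    const [validity, bugType] = t.label.split(",").map((s) => s.trim());
    if (validity !== "Valid") continue; // skip Invalid and Duplicate tickets
    totals[t.author] = (totals[t.author] || 0) + points[bugType];
  }
  return totals;
}
```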
Steps
The following are the exact steps you need to implement; your solution will be checked against them. A minimal page skeleton illustrating the required elements is sketched after the list.
- Host your app on port 8080 and the page should be at root level (index.html)
- Include an HTML file input element with id "ticketFile" to select the ticket file
- Include an HTML dropdown select element with id "challengeType" to select the scoring .csv file (see the sample submission)
- Include an HTML button with id "btnRead" that reads the input files and displays the table with the desired columns
- Make sure the button has exactly the text "Read File & Display Result"
- Include a div with id "results"
- Within the div, include a table element with id "resultsTable"
- Make sure the table contains the required header row, columns and the correct data according to the pictures and description above. The first column should be "S.No"
- The columns should have exact same names as in the original input .csv-files
- Include a button element with id "btnSave", which allows the user to download the processed .csv with the exact same content, in CSV format, as the displayed results table "resultsTable" (including the header row, but not the "S.No" column, which represents the row index)
- Make sure the button has exactly the text "Save Result as CSV File"
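A minimal sketch of such an index.html, assuming the dropdown option values and a hypothetical parseAndRender helper standing in for the parsing and scoring logic described above; the sample submission remains the authoritative reference:

```html
<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>Bug Hunt Scoring</title></head>
<body>
  <input type="file" id="ticketFile">
  <select id="challengeType">
    <option value="Scoringsheet1.csv">Scoringsheet1.csv</option>
    <option value="Scoringsheet2.csv">Scoringsheet2.csv</option>
  </select>
  <!-- &amp; renders as "&", giving the exact required button text -->
  <button id="btnRead">Read File &amp; Display Result</button>
  <div id="results">
    <table id="resultsTable"></table>
  </div>
  <button id="btnSave">Save Result as CSV File</button>
  <script>
    // Read the chosen ticket file as text, then parse and render it.
    document.getElementById("btnRead").addEventListener("click", () => {
      const file = document.getElementById("ticketFile").files[0];
      if (!file) return;
      file.text().then((csv) => {
        // parseAndRender is a hypothetical helper implementing the
        // scoring rules described above; it is not part of the spec.
        parseAndRender(csv, document.getElementById("challengeType").value);
      });
    });
    // Serialize the rendered table back to CSV, dropping the "S.No" column.
    // (Simplified: cells containing commas or quotes are not escaped.)
    document.getElementById("btnSave").addEventListener("click", () => {
      const rows = [...document.querySelectorAll("#resultsTable tr")].map((tr) =>
        [...tr.cells].slice(1).map((td) => td.textContent).join(",")
      );
      const blob = new Blob([rows.join("\n")], { type: "text/csv" });
      const a = document.createElement("a");
      a.href = URL.createObjectURL(blob);
      a.download = "result.csv";
      a.click();
    });
  </script>
</body>
</html>
```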
Sample Submission
This challenge uses a purely automated testing approach to score the submissions, so we are providing a sample submission and an automated tester with a basic test case assembled in a way that simulates the final testing. Docker is used to achieve this. Please read the README.md file to find out how to run the setup. The sample submission is in the code folder and should be extended to implement the requirements. The README.md file contains some guidelines on how to extend the sample submission.
Sample submission with local tester for this challenge: sample-and-tester-hard.zip
Sample Submission to submit on the platform: submission.zip
Final Submission Guidelines
Submission Deliverables
Your submission must be a single ZIP file no larger than 10 MB, containing just the code folder with the same structure as the one from the sample submission. The sample tester should not be included in the submission. Also make sure you don't submit any locally generated build folders like node_modules, dist, etc. You must follow this submission folder structure so our automated test process can score your submission: create a folder named "code", and inside the "code" folder place a file named Dockerfile. This is the Docker file used to build your submission; refer to the provided Dockerfile in the Sample Submission for each level. Zip that "code" folder and submit it to the challenge.
Execution Details and Submission Logs
Each time you submit, the platform will use Docker to run your code. The execution logs will be saved as "Artifacts" that can be downloaded from the Submission Review App: https://submission-review.topcoder.com/.
Checking Passing and Failing Test Cases
Using the Submission Review App (https://submission-review.topcoder.com/), navigate to the specific challenge, then to your submission, and then to the Artifacts for your submission. The zip file you download will contain information about your submission including a result.json file with the test results.
Docker Structure
Make sure you can run your submission with Docker from a clean slate. Your Docker image needs to expose port 8080, and it needs to build on a completely clean machine when the platform runs your submission. If you are using something locally to build and run your submission, make sure it is included as part of your Docker configuration as well. Please look at the sample submission to understand the structure better; a hedged sketch of what such a Dockerfile might look like is shown below.
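For illustration only (the sample submission's Dockerfile is authoritative): a static front-end inside the "code" folder could be served with the http-server npm package, with the app listening on the required port 8080:

```dockerfile
# Sketch only: serve the static front-end on port 8080.
# Assumes index.html and its assets sit next to this Dockerfile
# inside the "code" folder.
FROM node:18-alpine
WORKDIR /app
RUN npm install -g http-server
COPY . .
EXPOSE 8080
CMD ["http-server", ".", "-p", "8080"]
```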