Challenge Overview
Problem Statement
Prize Distribution

1st: $8,500
2nd: $6,500
3rd: $4,500
4th: $3,500
5th: $2,500
Total Prizes: $25,500

Summary

NASA has an experimental Radio-Frequency Identification (RFID) tracking system on board the International Space Station (ISS) that can provide the location of tagged items with average, standard deviation, and maximum errors of 1.5, 0.5, and 3 meters, respectively. We would like to see how much improvement can be obtained using other algorithms that mine the archived RFID data.

Background and motivation

Tracking items in space habitats can be more challenging than it might at first seem. The environment is predominantly closed, with the exception of visiting vehicles that deliver new cargo, jettison trash, or return some items. However, a number of factors complicate tracking, including crews that change out at 6-month intervals, laboratory space that doubles as living space, cargo transfer bags (CTBs) that are nearly identical in appearance, and limited stowage space. To address cargo tracking issues in future deep space missions, NASA has initiated the REALM (RFID-Enabled Autonomous Logistics Management) experiments on the International Space Station. The first phase, REALM-1, involves an RFID reader system with 24 fixed antennas in 3 of the ISS modules, which are about 3.5 m in diameter and range in length from 6 to 8 m. These 3 modules are referred to herein as "instrumented modules". There are about 3,200 RFID tags on a variety of items, as well as about 100 marker tags that are placed on the ISS internal structure and serve for calibration or machine learning. Many of the individual tagged items are contained within CTBs, which are also tagged. The raw RFID data contains the tag identification code, estimates of the signal strength and phase received by the reader (or interrogator), the reader and antenna on which the tag was read, and a few other parameters. All RFID data is downlinked and archived on the ground. Scattering of the RFID signals in the confines of the ISS complicates triangulation methods, although a location accuracy of about 1.5 m (average) has been obtained.

Objective

Your task is to detect the location of RFID-tagged items within the ISS as accurately as possible. The location your algorithm returns will be compared to ground truth data, and the quality of your solution will be judged by how closely your predictions match the expected results. See Scoring for details.

Input Data

All files described in this section are available in 3 zip files:
ISS shape

The solution domain for the contest is indicated in Figure 1 and includes modules that are instrumented with RFID readers and antennas as well as some that are not instrumented. The instrumented modules are highlighted in blue in Figure 1. Modules shaded purple are not instrumented, but are considered "in bounds" for the purpose of the contest; that is, some CTBs will be stowed in non-instrumented modules during static events (see later). Modules shaded green in Figure 1 are considered invalid in the sense that targeted cargo will not be stowed there during the static data events. ISS.pdf contains the approximate dimensions of the relevant ISS modules. Note that in this contest all distances are measured in inches. Also note there are slight differences in naming compared to Figure 1, e.g. "Node 1" is shown as "N1" in ISS.pdf, "LAB" is shown as "US Lab", etc. A detailed 3D model of the ISS is available in .blend and .obj formats.

Antenna locations

antenna_locations.csv contains the positions of the 24 RFID reader antennas. These positions are fixed in this contest.

Tagged items

The contest features 3 kinds of tagged items: marker tags (tags with fixed and relatively accurate positions), community tags (tags stored at ISS stowage racks) and target CTBs (tags stored in or attached to a limited number of cargo transfer bags especially selected for this contest). More details on these 3 kinds follow.

Marker tags

marker_tag_locations.txt contains data on RFID tags that are affixed to the ISS structure and thus have fixed positions. The accuracy of the marker tag positions is within 6 inches. Column epc_id contains the unique identifier of these items (EPC: Electronic Product Code). Marker tags are of two different physical types, Metalcraft and Squiggle, denoted by 1 and 0, respectively, in the "tag type" field. All other tags (in communities and CTBs) are of the Squiggle type. The signal strength of Metalcraft tags is approximately 6 dB lower than that of the Squiggle tags. Raw RF signal coming from marker tags can be used as training data, but some marker tags will be used for testing as well.
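As a small illustration of the marker tag data described above, here is a hedged sketch of an in-memory representation; the field names follow the description of marker_tag_locations.txt, but the column order and comma delimiter assumed by the parse method are illustrative assumptions, not part of the file specification.

// Marker tag affixed to the ISS structure; its position is fixed and given in inches.
// tagType: 1 = Metalcraft (signal ~6 dB weaker), 0 = Squiggle.
record MarkerTag(String epcId, int tagType, double x, double y, double z) {

    // Hypothetical parser assuming a comma-separated "epc_id,tag_type,x,y,z" line layout.
    static MarkerTag parse(String line) {
        String[] f = line.split(",");
        return new MarkerTag(f[0].trim(), Integer.parseInt(f[1].trim()),
                Double.parseDouble(f[2].trim()), Double.parseDouble(f[3].trim()),
                Double.parseDouble(f[4].trim()));
    }
}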
Community locations and community tags

community_locations.txt contains the center positions of those stowage racks of the ISS that are available for training. These positions are fixed in this contest. Note that these locations are inaccurate, and the degree of inaccuracy varies. The dimensions of the stowage locations greatly exceed the accuracy with which the centers are listed in community_locations.txt. communities.txt contains the list of tagged items stored at each of the community locations. The file contains sections in this format: COMMUNITY_ID [EPC_ID,...] Note that this data is known to contain errors; it is approximately 90%-95% accurate.

Target CTBs

There are tags attached to the exterior of 6 target CTBs, as well as to some of the items within these CTBs. Some CTBs have 1 external tag, while others have 2 external tags on orthogonal faces, each with the same EPC ID code. Each of the 6 CTBs is placed either in specific stowage areas of the ISS or in open regions. The CTBs were placed in specific locations for a period that includes crew sleep, and it is anticipated that data acquired during crew sleep is less likely to be perturbed by crew movements. These events are referred to as static data events, which encompass specific times at which estimates will be evaluated. At the start of the next crew day, some or all of the 6 target CTBs may be temporarily relocated prior to the next static data event. The next static data event is staged with the crew moving the 6 CTBs to new locations prior to crew sleep, and data is again acquired during the crew sleep period. This was repeated each day during the data collection period. ctb_locations.txt and ctb_tags.txt contain the known positions and tag distribution per target CTB in the same format as above for communities. The uncertainty of the target CTB positions is 1 foot (12 inches). Note that there may be overlap between the community tags and CTB tags: the same tag that is listed as a community member during one static data event may be placed into a CTB during another static data event. Data from 2 of the 6 CTBs is available for training; one of them has public data spanning 3 static data events, the other only one static data event.

Raw RF data

rfid_raw_readings_train.txt (4.8 GB unpacked) contains timestamped raw RF signals originating from RFID-tagged items (all 3 kinds listed above), as read by all antennas. The format of this file is the following:

date,epc_id,antenna_id,rssi,freq,phase,power,cnt
2018-07-16 04:03:31.78,5154,27,-48,910750,53,3000,1
2018-07-16 04:03:31.797,61890,27,-57,910750,87,3000,2
...
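A minimal, hedged sketch of streaming through this file follows; it assumes the first line is the column header shown above, and the class name and the choice to keep epc_id as a String are illustrative, not part of the data specification.

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RawReadingScan {
    // One raw reading, mirroring the columns: date,epc_id,antenna_id,rssi,freq,phase,power,cnt
    record RawReading(String date, String epcId, int antennaId, int rssi,
                      int freq, int phase, int power, int cnt) {}

    public static void main(String[] args) throws IOException {
        try (BufferedReader br = Files.newBufferedReader(Paths.get("rfid_raw_readings_train.txt"))) {
            String line = br.readLine(); // assumed header line, skipped
            while ((line = br.readLine()) != null) {
                String[] f = line.split(",");
                RawReading r = new RawReading(f[0], f[1], Integer.parseInt(f[2]),
                        Integer.parseInt(f[3]), Integer.parseInt(f[4]), Integer.parseInt(f[5]),
                        Integer.parseInt(f[6]), Integer.parseInt(f[7]));
                // ... accumulate per-tag / per-antenna statistics (e.g. RSSI and phase) here ...
            }
        }
    }
}

Given the file size (4.8 GB unpacked), streaming line by line rather than loading everything into memory is a reasonable default.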
Tasks

A task has two components:
There are 20 tasks in this challenge, with IDs task-01, ..., task-20.

System Description and Configuration

The REALM readers each transmit approximately 1/2 W signals that are compatible with EPCglobal Generation 2. The readers frequency hop according to a frequency hopping spread spectrum protocol that is defined in Gen2_Protocol_Standard. Each reader cycles through 4 connected antennas, dwelling on each for a period of 1 second. After the 4 antennas have been sampled, there is a very brief off period of approximately 25 ms before the cycle repeats. The readers are configured to operate in Session 1, as defined in Gen2_Protocol_Standard. The REALM system ceases transmission for periods typically lasting 4-4.5 hours each day, although occasionally these periods are longer. These will manifest as periods without data. There are multiple reasons why any specific tag may not be read during a 1-second antenna dwell time:
It should be noted that tags can typically be read in non-instrumented modules that directly connect to the instrumented modules. However, the coverage areas in the non-instrumented modules are not well established at this time.

Output Files

Your output must be a single text file (with a .txt extension) that contains the location predictions for each tag listed in all 20 tasks. The file should contain comma-separated lines formatted like

task-id,epc_id,x,y,z,confidence_radius

Make sure you measure x, y, z and confidence_radius in inches. A sample line:

task-01,9876,-192.0,55.0,12.0,25

Your output must only contain algorithmically generated location predictions. It is strictly forbidden to include manually created predictions, or answers that - although initially machine generated - are modified in any way by a human.

Functions

This match uses the result submission style, i.e. you will run your solution locally using the provided files as input, and produce a text file that contains your answer. In order for your solution to be evaluated by Topcoder's marathon system, you must implement a class named IssRfidLocator, which implements a single function: getAnswerURL(). Your function will return a String corresponding to the URL of your submission file. You may upload your files to a cloud hosting service such as Dropbox or Google Drive, which can provide a direct link to the file. To create a direct sharing link in Dropbox, right click on the uploaded file and select share. You should be able to copy a link to this specific file which ends with the tag "?dl=0". This URL will point directly to your file if you change this tag to "?dl=1". You can then use this link in your getAnswerURL() function. If you use Google Drive to share the link, then please use the following format: "https://drive.google.com/uc?export=download&id=" + id. Note that Google has a file size limit of 25MB and can't provide direct links to files larger than this. (For larger files the link opens a warning message saying that automatic virus checking of the file is not done.) You can use any other way to share your result file, but make sure the link you provide opens the filestream directly and is available for anyone with the link (not only the file owner), to allow the automated tester to download and evaluate it. An example of the code you have to submit, using Java:

public class IssRfidLocator {
    public String getAnswerURL() {
        // Replace the returned String with your submission file's URL
        return "https://drive.google.com/uc?export=download&id=XYZ";
    }
}

Keep in mind that your complete code that generates these results will be verified at the end of the contest if you achieve a score in the top 5, as described later in the "Requirements to Win a Prize" section, i.e. participants will be required to provide fully automated executable software to allow for independent verification of the performance of your algorithm and the quality of the output data.
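Returning to the output format described under Output Files above, here is a minimal, hedged sketch of emitting prediction lines; the file name, the Prediction record, and the sample values are placeholders, not part of the contest interface.

import java.io.IOException;
import java.io.PrintWriter;

public class PredictionWriter {
    // One prediction: task id, tag id, predicted location (inches) and confidence radius (inches).
    record Prediction(String taskId, String epcId, double x, double y, double z, double radius) {}

    public static void main(String[] args) throws IOException {
        Prediction sample = new Prediction("task-01", "9876", -192.0, 55.0, 12.0, 25.0);
        try (PrintWriter out = new PrintWriter("predictions.txt")) {
            // task-id,epc_id,x,y,z,confidence_radius (all distances in inches)
            out.printf("%s,%s,%.1f,%.1f,%.1f,%.1f%n", sample.taskId(), sample.epcId(),
                    sample.x(), sample.y(), sample.z(), sample.radius());
        }
    }
}

In a real submission the file would contain one such line for every {task-id, epc_id} pair in the test set, and the file's public URL would be returned by getAnswerURL().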
Scoring

A full submission will be processed by the Topcoder Marathon test system, which will download, validate and evaluate your submission file. Any malformed or inaccessible file, or one that doesn't contain the expected number of lines, will receive a zero score. If your submission is valid, your solution will be scored using the following algorithm. First, item-level scores are calculated for each {task-id,epc_id} pair in the test set.

Let {xt, yt, zt} be the coordinates of the tag's true location, {x, y, z} the coordinates you returned, and R the confidence radius you returned. Then let E be the Euclidean distance between the true and predicted locations (measured in inches):

E = sqrt((xt-x)^2 + (yt-y)^2 + (zt-z)^2)

Your score for this item is 0 if E > R. Otherwise score = min(Rmin / R, 1), where Rmin is a tag-specific, fixed minimum distance.
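As a worked illustration of the item-level score, here is a minimal sketch; the method is not part of the contest interface, and the Rmin value in the example below is hypothetical, since the real values are known only to the scorer.

// Item-level score for one {task-id, epc_id} pair.
// e    : Euclidean distance between the true and predicted locations, in inches
// r    : confidence radius returned in the submission, in inches
// rMin : tag-specific minimum distance used by the scorer
static double itemScore(double e, double r, double rMin) {
    if (e > r) {
        return 0.0;                  // the confidence sphere does not contain the true location
    }
    return Math.min(rMin / r, 1.0);  // a tighter radius that still covers scores higher, capped at 1
}

For example, with E = 10, R = 30 and a hypothetical Rmin of 12, the prediction covers the true location and scores 12 / 30 = 0.4; shrinking R to 12 or less (while still covering the true location) would score the full 1.0.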
Note that Rmin is not disclosed for each tag; it is known only by the scoring algorithm.

The overall score will then be calculated as a weighted average of the item-level scores. Weights are assigned so that the target CTBs contribute 4 times more to the score than the other 2 kinds of tags:

FinalScore = (marker_score + community_score + 4 * ctb_score) / 6,

where marker_score is the average of item-level scores of marker tags, community_score is a weighted average of item-level scores of the tags stored at community locations, with weights inversely proportional to the number of items in the community, and ctb_score is a weighted average of item-level scores of the tags stored in target CTBs, with weights inversely proportional to the number of items in the CTB. In all 3 components of the score (marker_score, community_score, ctb_score) the average is taken across all tasks. For example, if task-01 contains 10 marker tags, task-02 contains 20 marker tags and no other tasks contain marker tags, then marker_score is the average of 30 item-level scores. A special case: tags that produce no signal at all during the measurement period of a task will receive a weight of 0. (Typically these are community tags, but they may belong to the other 2 kinds of tags as well.) Note that you still need to include them in your prediction file, but the location you give will have no effect on your score. Finally, for display purposes your score will be multiplied by 1,000,000.

Example submissions can be used to verify that your chosen approach to upload submissions works and also that your implementation of the scoring logic is correct. The tester will verify that the returned String contains a valid URL and that its content is accessible, i.e. the tester is able to download the file from the returned URL. If your file is valid, it will be evaluated, and detailed score values will be available in the test results. The example evaluation is based on a small subset of the training data, see rfid_raw_task-example.txt available in tasks.zip. Example submissions must contain 5 lines of text, corresponding to the 5 tags listed in task-example.txt. Though recommended, it is not mandatory to create example submissions. The scores you achieve on example submissions have no effect on your provisional or final ranking. Example submissions can be created using the "Test Examples" button on TopCoder's submission uploader interface.

Full submissions must contain in a single file the location predictions that your algorithm made for all {task-id, epc_id} pairs in the test set. Full submissions can be created using the "Submit" button on TopCoder's submission uploader interface.

Final Scoring

The top 10 competitors according to the provisional scores will be invited to the final testing round. The details of the final testing are described in a separate document. The main requirement is that you should create a dockerized version of your system within 5 days after the end of the provisional phase that we can run in a uniform way, using new tasks. Your solution will be subjected to three tests.

First, your solution will be validated, i.e. we will check whether it produces the same output file as your last submission, using the same input files used in this contest. Note that this means that your solution must not be improved further after the provisional submission phase ends. (We are aware that it is not always possible to reproduce the exact same results. For example,
if you do online training, then differences in the training environments may result in a different number of iterations, meaning different models. Also, you may have no control over random number generation in certain 3rd party libraries. In any case, the results must be statistically similar, and in case of differences you must have a convincing explanation of why the same result cannot be reproduced.)

Second, your solution will be tested against a new set of tasks.

Third, the resulting output from the steps above will be validated and scored. The final rankings will be based on this score alone.

Competitors who fail to provide their solution as expected will receive a zero score in this final scoring phase and will not be eligible to win prizes.

General Notes
Requirements to Win a Prize

In order to receive a final prize, you must do all of the following:

Achieve a score in the top 5 according to the final test results. See the "Final Scoring" section above.

Once the final scores are posted and winners are announced, the prize winner candidates have 7 days to submit a report outlining their final algorithm, explaining the logic behind it and the steps of their approach. You will receive a template that helps in creating your final report.

If you place in a prize-winning rank but fail to do any of the above, then you will not receive a prize, and it will be awarded to the contestant with the next best performance who did all of the above.

Additional Eligibility

NASA Employees are prohibited by Federal statutes and regulations from receiving an award under this Challenge. NASA Employees are still encouraged to submit a solution. If you are a NASA Employee and wish to submit a solution, please contact Topcoder, who will connect you with the NASA Challenge owner. If your solution meets the requirements of the Challenge, any attributable information will be removed from your submission and your solution will be evaluated with other solutions found to meet the Challenge criteria. Based on your solution, you may be eligible for an award under the NASA Awards and Recognition Program or other Government Award and Recognition Program if you meet the criteria of both this Challenge and the applicable Awards and Recognition Program. If you are an Employee of another Federal Agency, contact your Agency's Office of General Counsel regarding your ability to participate in this Challenge. If you are a Government contractor or are employed by one, your participation in this challenge may also be restricted. If you or your employer receive Government funding for similar projects, you or your employer are not eligible for an award under this Challenge. Additionally, the U.S. Government may have Intellectual Property Rights in your solution if your solution was made under a Government Contract, Grant or Cooperative Agreement. Under such conditions, you may not be eligible for an award. If you work for a Government Contractor and this solution was made either under a Government Contract, Grant or Cooperative Agreement or while performing work for the employer, you should seek legal advice from your employer's General Counsel on your conditions of employment, which may affect your ability to submit a solution to this Challenge and/or to accept an award.
This problem statement is the exclusive and proprietary property of TopCoder, Inc. Any unauthorized use or reproduction of this information without the prior written consent of TopCoder, Inc. is strictly prohibited. (c)2020, TopCoder, Inc. All rights reserved.