
Challenge Overview

Topcoder is working with a group of researchers organized by the University of Chicago who are competing to understand a series of simulated environments.  We are looking for simulation models that can predict the future states of our Conflict World.  This challenge builds on previous work from two earlier challenges: Causal Graph for Conflict World and the Phase 2 Conflict World Prediction Challenge.  You will be using data from those challenges to complete the task defined below.

The dataset to use can be downloaded here.

Here are links to the research request data and to the research request document itself.

Important Note:

Each University of Chicago team can request additional information from the virtual-world simulation teams, beyond what is initially provided, through a “Research Request” process.  Data files or folders denoted with “RR” are the output of this process.  In the Code Document forum you’ll find a link to a Research Request document containing the original requests submitted by the University of Chicago researchers, which provides useful context.  Each request has to include a plausible collection methodology (e.g., surveys or instruments that can collect the data).  Additional data may be provided over the course of the challenge submission period, and you are encouraged to incorporate it into your analysis.
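
For orientation, here is a minimal sketch of how the RR-denoted files might be separated from the base TSV files once the dataset archive is unpacked.  The directory path and the “RR” file-naming pattern are assumptions based on the description above; adjust them to match your local copy of the data.

    from pathlib import Path

    import pandas as pd

    DATA_DIR = Path("data/conflict_world")  # hypothetical location of the unpacked dataset

    # Separate the Research Request ("RR") outputs from the base TSV files.
    rr_files = sorted(p for p in DATA_DIR.rglob("*") if p.is_file() and "RR" in p.name)
    base_tsvs = sorted(p for p in DATA_DIR.rglob("*.tsv") if "RR" not in p.name)

    print(f"{len(base_tsvs)} base TSV files, {len(rr_files)} RR-denoted files")

    # Peek at the first RR file, assuming it is tab-separated like the base data.
    if rr_files:
        print(pd.read_csv(rr_files[0], sep="\t", nrows=5))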

Task Details

Please recommend steps we might take to achieve each of the following objectives over a period of 500 days. The event and agent identifiers below are consistent with the data we have already shared in prior correspondence.

  • Maximize participation in E5451
  • Minimize participation in E8368
  • Maximize participation in E6739
  • Maximize participation in E954
  • Maximize the aggregate satisfaction of agents like A5205
  • Maximize the aggregate satisfaction of agents like A3711
  • Minimize the aggregate satisfaction of agents like A7347
  • Maximize the aggregate satisfaction of agents like A2180

“Aggregate satisfaction” here has the same meaning that it did in our earlier challenges.

There may be multiple ways to achieve each of these outcomes. In every case, we want the maximum change with the fewest interventions; your recommendations will be evaluated on both their expected impact and the number of interventions they require.
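
To make the “maximum change with the fewest interventions” criterion concrete, here is a minimal sketch that ranks candidate intervention sets by estimated impact per intervention.  The estimate_impact callable is a hypothetical stand-in for whatever simulation or causal model you build, and the candidate intervention names are placeholders rather than identifiers from the dataset.

    from itertools import combinations
    from typing import Callable, Dict, Sequence, Tuple

    def rank_intervention_sets(
        candidates: Sequence[str],
        estimate_impact: Callable[[Tuple[str, ...]], float],
        max_interventions: int = 3,
    ) -> Dict[Tuple[str, ...], float]:
        """Score every subset of candidate interventions (up to a size cap)
        by estimated impact per intervention, so smaller sets are preferred
        when their impact is comparable."""
        scores = {}
        for k in range(1, max_interventions + 1):
            for subset in combinations(candidates, k):
                scores[subset] = estimate_impact(subset) / k
        return dict(sorted(scores.items(), key=lambda item: item[1], reverse=True))

    # Toy usage with an additive impact model (placeholder for a model-based estimate).
    toy_impacts = {"increase_outreach": 0.4, "adjust_incentives": 0.3, "targeted_messaging": 0.2}
    ranking = rank_intervention_sets(
        candidates=list(toy_impacts),
        estimate_impact=lambda subset: sum(toy_impacts[name] for name in subset),
    )
    for subset, score in list(ranking.items())[:3]:
        print(subset, round(score, 3))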

 


Final Submission Guidelines

The final submission must include the following items.

  • A Jupyter notebook detailing:

    • How the data is prepared and cleaned from the TSV files

    • How each model is created, trained, and validated

    • How individual predictions are created

    • How new data can be plugged into the model for validation purposes.

      • This will be part of the review, so please ensure the reviewer can easily supply the held-back data for scoring; a minimal sketch of such an entry point appears after this list.

    • Answers to, and exploration of, the counterfactual prediction questions detailed above.  This should be well documented and clearly described.
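
As a rough illustration of what such a notebook might contain (TSV loading, a baseline model, and an entry point for scoring held-back data), here is a minimal sketch.  The file location, column names, and model choice are assumptions made for illustration only; substitute your own pipeline.

    from pathlib import Path

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    DATA_DIR = Path("data/conflict_world")  # hypothetical dataset location

    def load_table(path: Path) -> pd.DataFrame:
        """Load one of the challenge TSV files and apply basic cleaning."""
        df = pd.read_csv(path, sep="\t")
        return df.dropna()  # placeholder cleaning step; replace with your own preparation

    def train_model(df: pd.DataFrame, target: str = "participated"):
        """Fit a simple baseline model; the target column name is assumed."""
        features = df.drop(columns=[target])  # assumes features are already numeric
        X_train, X_val, y_train, y_val = train_test_split(
            features, df[target], test_size=0.2, random_state=0
        )
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)
        print("Validation accuracy:", model.score(X_val, y_val))
        return model

    def score_new_data(model, new_tsv: Path, target: str = "participated") -> pd.Series:
        """Entry point for the reviewer: point this at the held-back TSV file."""
        new_df = load_table(new_tsv)
        features = new_df.drop(columns=[target], errors="ignore")
        return pd.Series(model.predict(features), index=new_df.index)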


Judging Criteria

Winners will be determined based on the following aspects:

  • Model Usability (70%)

    • The research teams will be comparing your work with theirs when answering the questions above.  Your submission will receive a subjective evaluation from this team.  The more insightful and creative your model, the more helpful it will be to the team preparing its own analysis.

  • Model Transparency (20%)

    • How easy is it to deploy your model?

    • Is your model’s training time-consuming?

    • How easy is it to understand your assumptions and analysis?  Can we follow your methods and conclusions?

  • Clarity of the Report (10%)

    • Do you explain your proposed method clearly?

Review style: Final Review (Community Review Board)

Approval: User Sign-Off

ID: 30109376