
Challenge Overview

Previously, we used https://github.com/topcoder-platform/challenges-logstash-conf/ to populate the Elasticsearch indexes, but we ran into the following problems:

1. Each input queries a small slice of challenge data and pushes it into Elasticsearch, so at times the data is only partially available in Elasticsearch.
2. We have seen cases where data was lost in Elasticsearch.

For this challenge, we'd like to create a different way to populate marathon match and SRM data into Elasticsearch, one that is more predictable and that admins can use to fix data.

Note: we previously migrated the logstash config for challenges (https://github.com/topcoder-platform/challenges-logstash-conf/blob/populate_more_challenge_info/conf/challenges-feeder-conf.j2); you can follow the same approach.

The general requirements are as follows:

1. The scope of this challenge is to migrate https://github.com/topcoder-platform/challenges-logstash-conf/blob/dev/conf/mmatches-feeder-conf.j2 and https://github.com/topcoder-platform/challenges-logstash-conf/blob/dev/conf/srms-feeder-conf.j2
2. Define a unified model type for marathon matches and SRMs in the Elasticsearch index. Be sure to define the types so they align with the DB column types (see the model sketch after this list).
3. Define an aggregate method that performs the queries from https://github.com/topcoder-platform/challenges-logstash-conf/blob/dev/conf/mmatches-feeder-conf.j2 and https://github.com/topcoder-platform/challenges-logstash-conf/blob/dev/conf/srms-feeder-conf.j2 for a given list of round ids (one or several); see the aggregation sketch below.
3.1 Define separate DAO methods for the different inputs in the logstash config.
4. Refactor the Bulk Load logic, which uses Jest to push the aggregated list of data via the Bulk API, so that it is generally available for different purposes, such as the endpoints for populating challenges, marathon matches, and SRMs (see the bulk loader sketch below).
5. Define RESTful endpoints (PUT /elastic/srms and PUT /elastic/mmatches) purposely for pushing data into Elasticsearch (see the resource sketch below).
5.1 These endpoints are admin-only.
5.2 Each endpoint should accept the following parameters:
- index name
- type name (default to the value used in logstash)
- a list of round ids
5.3 The endpoint will first use the aggregate method to load the marathon match or SRM data, then use the Bulk Load method to push the data into Elasticsearch.
6. The Jest client needs to support both standard Elasticsearch and the AWS Elasticsearch Service; the current implementation already supports this, so be sure to test it properly.
7. For configuration, we'd like to have default values in the yaml file (easy for local setup) that can be overridden via environment variables (purposely for the dev and prod environments); see http://www.dropwizard.io/1.1.0/docs/manual/core.html#environment-variables and the bootstrap sketch below.
8. A swagger.yaml should be created to describe the RESTful endpoints clearly, including error cases.
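
To make requirement 2 concrete, below is a minimal sketch of what the unified model type could look like. The class name and all fields are hypothetical, based on the kind of columns the logstash queries select; the real fields and Java types must be taken from the actual queries and aligned with the Informix column types.

import java.util.Date;

// Hypothetical unified document model for a marathon match; SRMs would get a
// similar class. Field names and types are illustrative only and must be
// aligned with the actual DB column types used by mmatches-feeder-conf.j2.
public class MarathonMatchData {

    private Long roundId;              // numeric round id -> long
    private String fullName;           // round name -> string
    private String status;             // round status -> string
    private Date startDate;            // datetime column -> date
    private Date endDate;              // datetime column -> date
    private Integer numberOfRegistrants;
    private Integer numberOfSubmissions;

    public Long getRoundId() {
        return roundId;
    }

    public void setRoundId(Long roundId) {
        this.roundId = roundId;
    }

    // Remaining getters/setters omitted for brevity; Jest serializes the
    // object to JSON via Gson, so plain getters/setters are enough.
}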
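
For requirements 3 and 3.1, one possible shape is a DAO method per logstash input plus an aggregate method that merges the partial results by round id. Everything below (the interface name, method names, and service class) is a hypothetical sketch, not the required design:

import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical DAO: one method per input in the logstash config.
interface MarathonMatchFeederDAO {
    List<MarathonMatchData> getMarathonMatches(List<Long> roundIds);
    // Further methods, one per secondary logstash input, each keyed by round id.
}

// Hypothetical aggregate method: load the base documents, then merge each
// secondary input into them before the bulk push.
public class MarathonMatchFeederService {

    private final MarathonMatchFeederDAO dao;

    public MarathonMatchFeederService(MarathonMatchFeederDAO dao) {
        this.dao = dao;
    }

    public List<MarathonMatchData> aggregateMarathonMatchData(List<Long> roundIds) {
        List<MarathonMatchData> matches = dao.getMarathonMatches(roundIds);
        Map<Long, MarathonMatchData> byRoundId = matches.stream()
                .collect(Collectors.toMap(MarathonMatchData::getRoundId, Function.identity()));
        // For each secondary DAO method: fetch by roundIds, look the parent
        // document up in byRoundId, and set the corresponding field on it.
        return matches;
    }
}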
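
For requirement 4, Jest's Bulk API can be wrapped in a single generic helper shared by the challenges, marathon match, and SRM endpoints. The class and method names are hypothetical; the io.searchbox calls are the standard Jest Bulk API:

import io.searchbox.client.JestClient;
import io.searchbox.core.Bulk;
import io.searchbox.core.BulkResult;
import io.searchbox.core.Index;

import java.io.IOException;
import java.util.List;
import java.util.function.Function;

// Hypothetical generic bulk loader, reusable for challenges, marathon matches
// and SRMs. idFunction extracts the document id (e.g. the round id) from each item.
public class BulkLoader {

    public static <T> void push(JestClient client, String index, String type,
                                List<T> items, Function<T, String> idFunction) throws IOException {
        Bulk.Builder bulkBuilder = new Bulk.Builder()
                .defaultIndex(index)
                .defaultType(type);
        for (T item : items) {
            bulkBuilder.addAction(new Index.Builder(item).id(idFunction.apply(item)).build());
        }
        BulkResult result = client.execute(bulkBuilder.build());
        if (!result.isSucceeded()) {
            throw new IOException("Bulk load failed: " + result.getErrorMessage());
        }
    }
}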
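
For requirement 5, a minimal Dropwizard resource sketch is below. The request body, the default type name, and the admin check are assumptions; the auth and wiring should follow whatever patterns the existing challenges endpoint already uses:

import java.io.IOException;
import java.util.List;

import javax.ws.rs.Consumes;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import io.searchbox.client.JestClient;

@Path("/elastic")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public class ElasticFeederResource {

    // Hypothetical request body matching the required parameters.
    public static class PushRequest {
        public String indexName;
        public String typeName;       // optional; defaults to the value used in logstash
        public List<Long> roundIds;
    }

    private final MarathonMatchFeederService service;
    private final JestClient jestClient;

    public ElasticFeederResource(MarathonMatchFeederService service, JestClient jestClient) {
        this.service = service;
        this.jestClient = jestClient;
    }

    @PUT
    @Path("/mmatches")
    public Response pushMarathonMatches(PushRequest request) throws IOException {
        // Admin-only: reuse the existing authentication/authorization check
        // here and throw javax.ws.rs.ForbiddenException for non-admin callers.
        // "mmatches" as the default type name is an assumption; use the value
        // from the logstash config.
        String type = request.typeName != null ? request.typeName : "mmatches";
        List<MarathonMatchData> data = service.aggregateMarathonMatchData(request.roundIds);
        BulkLoader.push(jestClient, request.indexName, type, data,
                d -> String.valueOf(d.getRoundId()));
        return Response.ok().build();
    }

    // PUT /elastic/srms would follow the same pattern with the SRM service.
}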
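
For requirement 7, Dropwizard 1.1 supports environment-variable substitution in the yaml config once a SubstitutingSourceProvider is installed in the application's initialize method, as described in the linked docs. A sketch (the application and configuration class names are placeholders):

import io.dropwizard.Application;
import io.dropwizard.configuration.EnvironmentVariableSubstitutor;
import io.dropwizard.configuration.SubstitutingSourceProvider;
import io.dropwizard.setup.Bootstrap;
import io.dropwizard.setup.Environment;

public class FeederApplication extends Application<FeederConfiguration> {

    @Override
    public void initialize(Bootstrap<FeederConfiguration> bootstrap) {
        // Allow ${ENV_VAR:-default} placeholders in the yaml file; with
        // strict = false, an unset variable without an inline default is left
        // as-is instead of failing at startup.
        bootstrap.setConfigurationSourceProvider(new SubstitutingSourceProvider(
                bootstrap.getConfigurationSourceProvider(),
                new EnvironmentVariableSubstitutor(false)));
    }

    @Override
    public void run(FeederConfiguration configuration, Environment environment) {
        // Register resources, health checks, etc.
    }
}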

Local Setup and Test

Please check the files in the /docs directory for setup, deployment, and testing.

Final Submission Guidelines

- Code Changes
- Deployment and Verification Guide
- The winner will be responsible for the final fixes and pull request creation
 

ELIGIBLE EVENTS:

2018 Topcoder(R) Open

REVIEW STYLE:

Final Review: Community Review Board
Approval: User Sign-Off


ID: 30062437