
Challenge Overview

Previously, we used https://github.com/topcoder-platform/challenges-logstash-conf/ to populate the Elasticsearch indexes, but we ran into the following problems:

1. Each input queries a small slice of the challenge data and populates Elasticsearch separately, so challenge data is sometimes only partially available.
2. We saw cases where challenge data was lost.

For this challenge, we'd like to create a different way to populate the challenge data into Elasticsearch, one that is more predictable and that an admin can use to fix data.

The following are the general requirements:

1. Define models to hold the challenge type in the Elasticsearch index (see https://github.com/topcoder-platform/challenges-logstash-conf/blob/populate_more_challenge_info/conf/challenges-feeder-conf.j2), and be sure to define the field types to align with the DB column types.
2. Define an aggregate method that performs the queries in https://github.com/topcoder-platform/challenges-logstash-conf/blob/populate_more_challenge_info/conf/challenges-feeder-conf.j2 for a given list of challenge ids (one or several).
2.1 Define separate DAO methods for the different queries in https://github.com/topcoder-platform/challenges-logstash-conf/blob/populate_more_challenge_info/conf/challenges-feeder-conf.j2
3. Define a Bulk Load method that uses Jest to push the aggregated list of challenge data via the Bulk API; the Bulk API is used for performance reasons.
4. Define a RESTful endpoint (PUT /elastic/challenges) to push challenge data into Elasticsearch on demand.
4.1 This endpoint is admin-only.
4.2 This endpoint should accept the following parameters
- index name
- type name (defaults to challenge)
- a list of challenge ids
4.3 The endpoint will first use the aggregate method to load the challenge data, then use the Bulk Load method to push the data into Elasticsearch.
5. The Jest client needs to support both standard Elasticsearch and the AWS Elasticsearch Service; the related logic can be found in the challenge service (posted in the forum).
6. For configuration, we'd like to have default values in the YAML file (for easy local setup) that can be overridden by environment variables (intended for the dev and prod environments); see http://www.dropwizard.io/1.1.0/docs/manual/core.html#environment-variables
7. A swagger.yaml should be created that clearly describes the RESTful endpoint, including error cases.
8. For the source code structure, you can follow the challenge service (posted in the forum) and the terms service; raise questions if you are uncertain.
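For the Bulk Load requirement, the Elasticsearch Bulk API expects a newline-delimited JSON body: an "index" action line followed by the document source, one pair per challenge. The sketch below builds that payload by hand to make the format concrete; the class, method, and parameter names are illustrative, and in the actual service the payload would be assembled and submitted through Jest's Bulk.Builder/Index.Builder instead.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BulkPayloadSketch {

    /**
     * Builds the newline-delimited JSON body expected by the Elasticsearch
     * Bulk API: an "index" action metadata line followed by the document
     * source, one pair per challenge, each line terminated by a newline.
     */
    static String buildBulkBody(String index, String type, Map<Long, String> challengeJsonById) {
        StringBuilder body = new StringBuilder();
        for (Map.Entry<Long, String> entry : challengeJsonById.entrySet()) {
            body.append("{\"index\":{\"_index\":\"").append(index)
                .append("\",\"_type\":\"").append(type)
                .append("\",\"_id\":\"").append(entry.getKey()).append("\"}}\n");
            body.append(entry.getValue()).append('\n');
        }
        return body.toString();
    }

    public static void main(String[] args) {
        Map<Long, String> docs = new LinkedHashMap<>();
        docs.put(30005521L, "{\"id\":30005521,\"name\":\"Sample Challenge\"}");
        System.out.print(buildBulkBody("challenges", "challenge", docs));
    }
}
```

With Jest, the same effect is achieved by adding one Index action per aggregated challenge to a Bulk.Builder (with defaultIndex/defaultType set from the endpoint parameters) and executing the resulting Bulk action with the JestClient, so all documents go to Elasticsearch in a single round trip.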
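For the configuration requirement, the linked Dropwizard manual page describes registering an EnvironmentVariableSubstitutor in the application's initialize method, after which `${VAR:-default}` placeholders in the YAML are replaced from the environment. A sketch of what the configuration could look like; the property names and environment variable names here are assumptions, not part of the spec:

```yaml
elasticsearch:
  # defaults work for local setup; override via environment variables in dev/prod
  hostUrl: ${ES_HOST:-http://localhost:9200}
  indexName: ${ES_INDEX:-challenges}
  typeName: ${ES_TYPE:-challenge}
```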

Local Setup and Test
- Please follow Build and Run with Docker Compose to build and run the Direct app and Online Review locally.
- Log in to the Direct App to create some challenges, and use Online Review to move phases.
- Call your endpoint to trigger the push to Elasticsearch.
- Verify that the data in Elasticsearch is properly populated.
- OPTIONAL: The data in Elasticsearch is intended to be consumed by the challenge service; you can run the challenge service to verify.
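The "call your endpoint" step above could look like the following, sketched with the JDK 11 HttpClient's request builder. The host, port, request body field names, and the shape of the id list are assumptions and should match however the endpoint is actually defined; a real call against the admin-only endpoint would also carry an authorization header.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class TriggerFeedSketch {

    /**
     * Builds a PUT /elastic/challenges request carrying the index name,
     * type name, and challenge ids. The JSON field names are assumed.
     */
    static HttpRequest buildRequest(String baseUrl, String body) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/elastic/challenges"))
                .header("Content-Type", "application/json")
                // admin-only endpoint: add an Authorization header for a real call
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        String body = "{\"index\":\"challenges\",\"type\":\"challenge\","
                + "\"challengeIds\":[30005521,30005522]}";
        HttpRequest request = buildRequest("http://localhost:8080", body);
        System.out.println(request.method() + " " + request.uri());
    }
}
```

Sending the built request with `HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())` (or simply with curl) should return a success status, after which the documents can be checked in Elasticsearch.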

Final Submission Guidelines

- Complete Source Code For Elasticsearch Feeder Service
- Deployment and Verification Guide

ELIGIBLE EVENTS:

2018 Topcoder(R) Open

REVIEW STYLE:

Final Review:

Community Review Board

Approval:

User Sign-Off

ID: 30061272