Challenge Overview

Challenge Objectives

 
  • Build a data synchronization tool for the Topcoder Terms API


Project Background

 
  • Topcoder currently keeps the challenge terms in a legacy Informix database. The v2 Terms API provides access to some of the terms management features. This project aims to supplement the v5 Terms API by building a processor that puts data created by the v5 Terms API back into the legacy Informix database. When the other systems are eventually upgraded to use the new v5 API, the Informix database can be deprecated, and so can this legacy processor.

  • In previous challenges we migrated the v2 Terms API endpoints to the v5 standard service and created a migration tool that moves all the data from Informix to Postgres.

 

Technology Stack

 
  • NodeJS

  • Kafka

  • Informix

 

Code Access

 

Base API code is available in the project repository, dev branch. See the forums for v2 API and database details.

 

Individual Requirements

 


Our new terms service stores data in a Postgres database and sends event messages to a Kafka queue (tc-bus) for all terms modifications. In this challenge we want to use those events to send the data back to the Informix database. This tool is needed because many services still read data from the old v2 API and Informix, and we can't update them all at once; this is a temporary solution to ease the migration until the legacy systems can be updated and the Informix DB deprecated.

 

Terms updates consist of four parts, in this order (a sketch of a possible topic-to-handler mapping follows the list):

  • Raw terms (entities)

  • Mapping of required terms for challenges (i.e. resources)

  • Mapping of which user agreed to which terms

  • DocuSign envelope information
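
A minimal sketch of how these four parts could map to bus topics and handlers. The topic names below are placeholders, not the real ones — check the individual API services for the actual topics:

  // Hypothetical topic-to-handler map; the real topic names come from the
  // individual API services, these are for illustration only.
  const handlers = {
    'terms.notification.created': async (payload) => { /* raw terms entity */ },
    'terms.notification.requested': async (payload) => { /* terms <-> challenge mapping */ },
    'terms.notification.agreed': async (payload) => { /* user <-> terms agreement */ },
    'terms.notification.docusign': async (payload) => { /* DocuSign envelope info */ }
  }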

 

Data updates in Informix can be done using raw SQL queries.
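
A minimal sketch of a parameterized insert, assuming the IBM informixdb Node driver (any Informix client with bind-parameter support would work). The connection string, table, and column names are illustrative; the real target tables are in the migration script mentioned below:

  const informix = require('informixdb')

  // e.g. 'SERVER=...;DATABASE=...;HOST=...;SERVICE=...;UID=...;PWD=...;PROTOCOL=onsoctcp;'
  const connStr = process.env.INFORMIX_CONN_STRING

  // Insert one raw-terms record using a parameterized query
  function insertTerms (payload) {
    return new Promise((resolve, reject) => {
      informix.open(connStr, (err, conn) => {
        if (err) return reject(err)
        conn.query(
          'INSERT INTO terms_of_use (terms_of_use_id, title, url, terms_text) VALUES (?, ?, ?, ?)',
          [payload.legacyId, payload.title, payload.url, payload.text],
          (qerr) => conn.close(() => (qerr ? reject(qerr) : resolve()))
        )
      })
    })
  }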

 

See the data migration script in the data_migration directory for details on the target tables in Informix (the same tables the migration script reads from). Kafka message details are available in the individual API services.
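
For illustration, messages on the Topcoder bus generally follow a common envelope; the topic name and payload fields below are assumptions and must be confirmed against the API services:

  const exampleMessage = {
    topic: 'terms.notification.created', // placeholder topic name
    originator: 'terms-api',
    timestamp: '2020-05-04T12:00:00.000Z',
    'mime-type': 'application/json',
    payload: {
      id: '0fcb41d1-ec7c-44bb-8f3b-f017a61cd708', // v5 UUID
      legacyId: 21343,                            // key used in Informix
      title: 'Standard Terms for Topcoder Competitions',
      agreeabilityTypeId: 3
    }
  }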

 

The processor code will live in a separate repository as a standalone tool, not dependent on the terms service. Create a Dockerfile to build a Docker image and a docker-compose script for local testing with Kafka, Postgres, Informix, and test data. Create an npm command that starts the data synchronization tool. You can use a code structure similar to another legacy processor we have built in the past, the challenge processor.
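
A minimal Dockerfile sketch; the Node version and start script name are assumptions, and the Informix driver may require a base image with its client libraries installed:

  FROM node:12

  WORKDIR /app
  COPY package*.json ./
  RUN npm install
  COPY . .

  CMD ["npm", "start"]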

 

On startup the processor should subscribe to the Kafka topics and wait for events. Log the start and end of processing for each event. In case of any errors, log the error, send the message to a dead letter queue (a dedicated topic in Kafka), and inform the support staff by sending an email. See this code for an example of sending an email using the event bus.
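
A minimal sketch of this startup and error-handling flow, assuming the kafkajs client; the topic names, dead-letter topic, and email payload shape are placeholders, not the real values:

  const { Kafka } = require('kafkajs')

  const kafka = new Kafka({ clientId: 'legacy-terms-processor', brokers: ['localhost:9092'] })
  const consumer = kafka.consumer({ groupId: 'legacy-terms-processor' })
  const producer = kafka.producer()

  async function start () {
    await consumer.connect()
    await producer.connect()
    await consumer.subscribe({ topic: 'terms.notification.created' }) // one subscribe per topic

    await consumer.run({
      eachMessage: async ({ topic, message }) => {
        console.info(`Start processing message from ${topic}`)
        try {
          // ...parse message.value and write the change to Informix...
          console.info(`End processing message from ${topic}`)
        } catch (err) {
          console.error(err)
          // Forward the failed message to a dedicated dead-letter topic
          await producer.send({
            topic: 'terms.dlq', // placeholder DLQ topic name
            messages: [{ value: message.value }]
          })
          // Notify support via an email event on the bus (payload shape is an assumption)
          await producer.send({
            topic: 'external.action.email',
            messages: [{
              value: JSON.stringify({
                data: { subject: 'Terms processor error', message: err.message },
                recipients: [process.env.SUPPORT_EMAIL]
              })
            }]
          })
        }
      }
    })
  }

  start().catch(console.error)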

 

All the configuration should be extracted to a config file and read from environment variables (a minimal sketch follows the list):

  • Informix connection details

  • Kafka parameters

  • Token details for sending the support email
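
A minimal config module sketch backed by environment variables; the variable names (including the Auth0 ones assumed for the support-email token) are illustrative:

  module.exports = {
    INFORMIX: {
      HOST: process.env.INFORMIX_HOST || 'localhost',
      PORT: process.env.INFORMIX_PORT || 9088,
      DATABASE: process.env.INFORMIX_DATABASE || 'tcs_catalog',
      USER: process.env.INFORMIX_USER,
      PASSWORD: process.env.INFORMIX_PASSWORD
    },
    KAFKA: {
      BROKERS: (process.env.KAFKA_URL || 'localhost:9092').split(','),
      GROUP_ID: process.env.KAFKA_GROUP_ID || 'legacy-terms-processor'
    },
    // Token details used when sending the support email through the bus
    AUTH0: {
      URL: process.env.AUTH0_URL,
      CLIENT_ID: process.env.AUTH0_CLIENT_ID,
      CLIENT_SECRET: process.env.AUTH0_CLIENT_SECRET
    }
  }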

 

The last task in this challenge is to analyze the current service code and the processor you built to check whether there could be any “ordering” conflicts when processing messages that could lead to errors; for example, a “user agreed to terms” event arriving before the “terms created” event it depends on has been written to Informix. Your task here is to document those issues; we will analyze and fix them in future challenges.


Submission Guidelines

  • Submit a README with details on how to configure, build, run, and test the processor

  • Submit a git patch with the updated code



Final Submission Guidelines

See above

ELIGIBLE EVENTS:

2020 Topcoder(R) Open

REVIEW STYLE:

Final Review: Community Review Board

Approval: User Sign-Off

ID: 30117359