Challenge Overview

Welcome to the Pioneer Demo Platform Setup Challenge

In previous challenges we defined the high-level architecture for the event streaming platform. In this challenge we will create a demo environment covering Kubernetes, Kafka, Strimzi, Kafka Connect, ksqlDB, Prometheus, and Grafana.

Background

We are building a scalable event streaming platform that provides customers with real-time notifications about events that occur in the system. The specific use case we are targeting is in the financial sector, but the platform will be designed as a generic event streaming solution. Scalability is a major concern, as the solution will be used to process millions of events daily.

The event streaming platform will consist of three parts:

  1. Producer - ingests source data into the Kafka cluster

  2. Aggregation and filtering of the source data

  3. Delivery of the events to end users

 

Most of the source data is generated in real time (e.g., Bob sent $5 to Alice) and some data is generated during the nightly batch process (e.g., the balance of Bob's account is $10). Regardless of how the data is generated, it is available in Kafka topics and will be used by our Producer to send event notifications.
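
Because all source data lives in Kafka topics, the demo topics can be declared through Strimzi so they sit in the same repository as the rest of the deployment. Below is a minimal sketch, assuming a hypothetical transactions topic and a cluster named pioneer-kafka (both placeholder names, not taken from the architecture document):

```yaml
# Hypothetical demo topic for real-time transaction events,
# managed by the Strimzi Topic Operator (names are assumptions).
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: transactions
  namespace: kafka
  labels:
    strimzi.io/cluster: pioneer-kafka   # must match the Kafka CR name
spec:
  partitions: 6
  replicas: 3
  config:
    retention.ms: 604800000             # keep events for 7 days
```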

 

See the project architecture document for more information (posted in the challenge forums). You should read and understand the existing architecture before reading the challenge requirements below.

 

Task Details

Your task in this challenge is to set up all the services used in the Pioneer platform and provide a usage demo for each service (to verify everything is set up correctly).

All the services will be deployed to a Kubernetes cluster. Setting up the cluster itself is NOT in scope - you can assume the cluster already exists, since clusters can easily be created in EKS or locally with minikube.

You will need to go through the architecture document to understand the details of the services that need to be deployed - here is a short list:

  • Strimzi (all Kafka services will be managed by Strimzi; a sample Kafka manifest sketch follows this list)

  • Kafka Cluster

  • Kafka Connect

  • ksqlDB

  • Prometheus (with Kafka Exporter)

  • Grafana
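
To make the Strimzi and Kafka Cluster items concrete, here is a minimal sketch of a Kafka custom resource that the Strimzi operator could reconcile for the demo; the cluster name, namespace, replica counts, and storage sizes are illustrative assumptions and should be adjusted to the architecture document. The kafkaExporter block also provides the Kafka Exporter metrics that Prometheus is expected to scrape.

```yaml
# Minimal demo Kafka cluster managed by Strimzi (names and sizes are assumptions).
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: pioneer-kafka
  namespace: kafka
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: external
        port: 9094
        type: nodeport      # reachable from outside the cluster for the demo
        tls: false
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: false
  kafkaExporter:            # exposes consumer-lag and topic metrics for Prometheus
    topicRegex: ".*"
    groupRegex: ".*"
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

The nodeport external listener is one simple way to satisfy the "callable externally" requirement below; a load balancer or ingress-based listener would also work.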

For each of those services, create a small verification demo - for example, configure Kafka Connect to load data from a dummy database via JDBC (you can deploy Postgres in the Kubernetes cluster too), configure some sample ksqlDB aggregations, etc. For each of the services, make sure its API is exposed and can be called externally.
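
As one example of such a verification demo, the JDBC ingestion could be declared as a Strimzi KafkaConnector resource along the lines of the sketch below; it assumes the Confluent JDBC source plugin is present in the Kafka Connect image, and the database name, credentials, and table are placeholders:

```yaml
# Hypothetical JDBC source connector pulling rows from a demo Postgres database
# (requires the Confluent JDBC connector plugin in the Kafka Connect image).
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: postgres-jdbc-source
  namespace: kafka
  labels:
    strimzi.io/cluster: pioneer-connect   # must match the KafkaConnect CR name
spec:
  class: io.confluent.connect.jdbc.JdbcSourceConnector
  tasksMax: 1
  config:
    connection.url: jdbc:postgresql://postgres:5432/demo
    connection.user: demo
    connection.password: demo
    mode: incrementing
    incrementing.column.name: id
    table.whitelist: transactions
    topic.prefix: jdbc-                   # rows land in the jdbc-transactions topic
```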

All deployments should follow GitOps practices.
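
With Flux, that GitOps flow could look roughly like the following sketch, where the cluster continuously reconciles the manifests stored in your submission repository; the repository URL and directory layout are placeholder assumptions:

```yaml
# Hypothetical Flux v2 source and reconciliation for the platform manifests.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: pioneer-platform
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/pioneer-platform   # placeholder repository
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: pioneer-services
  namespace: flux-system
spec:
  interval: 5m
  path: ./deploy          # assumed directory holding the service manifests
  prune: true
  sourceRef:
    kind: GitRepository
    name: pioneer-platform
```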

Finally, we need to deploy placeholder services for the subscription API, data ingestion API, data ingestion frontend, and subscription frontend. You can use dummy “Hello world” Docker containers for these and deploy them with Helm and Flux (Helm/Flux are optional).
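
If you do opt for Helm and Flux, one placeholder service could be wired up roughly as in the sketch below; the chart path, namespace, and image are assumptions, and any “Hello world” image will do:

```yaml
# Hypothetical Flux HelmRelease deploying a "Hello world" placeholder service.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: subscription-api
  namespace: pioneer
spec:
  interval: 5m
  chart:
    spec:
      chart: ./charts/placeholder       # assumed chart shared by all placeholders
      sourceRef:
        kind: GitRepository
        name: pioneer-platform
        namespace: flux-system
  values:
    image:
      repository: nginxdemos/hello      # any simple "Hello world" image works
      tag: latest
```

The same release could be stamped out once per placeholder service (subscription API, data ingestion API, and the two frontends) with only the name changed.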

 

Submission Guidelines

 

These should be the contents of your submission:

  1. Deployment configuration files for all the services

  2. Readme with steps for deploying all the platform services

  3. Verification details for each of the services



Final Submission Guidelines

See above

ELIGIBLE EVENTS: 2021 Topcoder(R) Open

REVIEW STYLE:

Final Review: Community Review Board

Approval: User Sign-Off

ID: 30161423