Challenge Overview
Problem Statement
Facial Emotion Analysis Marathon Match
Prizes
The best 5 performers of this contest (according to system test results) will receive the following prizes:
An additional 7 bonus prizes of $250 each will be awarded for the best performance on each individual emotional attribute.
Problem background
HP IDOL OnDemand, part of the Haven OnDemand platform recently announced by HP, is a complete solution for bringing extensive data analytics to your cloud or mobile app, with 50+ REST APIs to augment your Big Data solution. Through our Early Access program, we're exposing the capabilities of HP IDOL (Intelligent Data Operating Layer), the world's leading on-premise human information processing engine, as a managed solution that delivers a broad selection of web services to developers. Sign up for a free developer account to access all the APIs and to get early access to many new APIs and services that are on the way.
One of the APIs that HP has released is the Face Detection API. This API analyzes an image to find faces and estimate the ages of the people in the photo. Developers get a free monthly quota with access to the platform, and you can try it with your own images by signing in with your free developer account and executing the web service call using the "Try It" functionality on the API page. In this challenge, we'll be developing the algorithms that could be used to further expand the capabilities of this API by detecting emotions in the faces that are displayed.
Getting Started with IDOL OnDemand
Before you can use the APIs, you'll need to sign up for an IDOL OnDemand developer account: http://www.idolondemand.com/signup.html Please indicate that you heard about IDOL OnDemand through [topcoder] in the "How did you hear about IDOL OnDemand?" field. Once your account has been verified, you'll be assigned a developer account and an API key that will allow you to make API calls. Complete information about the available IDOL OnDemand APIs can be found here: https://www.idolondemand.com/developer/apis You'll need to register for a developer account with HP in order to get access to the additional "Try It" functionality in the API console. Use of the APIs is free and restricted to non-commercial use at this time; commercial use and pricing will be announced in the near future.
Before you compete in an IDOL-related challenge on [topcoder], please create a topcoder-specific key in your IDOL OnDemand account. You can do this by clicking Account -> API Keys from the developer home page, then generating a new key and renaming it to "topcoder". This is the key you should use when completing [topcoder] challenges. It will also give you visibility into Preview APIs, which may not yet be in public release.
Problem description
HP IDOL OnDemand wants to challenge the [topcoder] Data Science community to develop new algorithms that detect human emotions in facial expressions. You will be writing algorithms and code to recognize emotions from headshot images that contain faces. In this challenge you'll be tasked with recognizing emotions present in these images, such as anger, anxiety, confidence, happiness, sadness, and surprise.
Your algorithm will receive a set of training images. Each image will contain a face approximately centered in the middle of the image. The emotions for each face will also be given. During algorithm testing, your algorithm will receive only image data and will have to return the facial emotions present in the given image. The size of all images is 250 by 250 pixels.
[Example face images, captioned: Happy, Neutral, Surprised, Angry]
Implementation
Your task is to implement training and testing methods, whose signatures are detailed in the Definition section below.
int[] imageData contains the unsigned 24-bit image data. The value of each pixel is a single number calculated as 2^16 * Red + 2^8 * Green + Blue, where Red, Green and Blue are the 8-bit RGB components of that pixel. The size of the image is 250 by 250 pixels. Let x be the column and y be the row of a pixel; then the pixel value can be found at index [x + y * 250] of the imageData array.
The training method will provide the emotions for the face image given in imageData. Exactly 7 values will be present in the emotions array, in the following order: angry, anxious, confident, happy, neutral, sad and surprised. Each value will be 0 if the emotion is not present in the face and 1 if it is present. Each face image will have exactly one emotion present. Your training method will be called multiple times; you can return 1 if you do not want to receive any more training images.
The testing method will also be called multiple times. Only imageData will be provided. You should return a double[] containing the probability of each facial emotion being present. The returned array should be in the same format as the emotions parameter passed to the training method, containing exactly 7 elements with values in the range [0, 1].
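As a rough illustration only, the sketch below shows one possible shape of such a solution in Java. The class and method names (FacialEmotions, training, testing) and the rgbAt helper are assumptions made for this sketch; the authoritative signatures are the ones in the Definition section.

```java
// Minimal sketch of a solution skeleton, assuming Java and illustrative names.
public class FacialEmotions {

    // Emotion order per the statement: angry, anxious, confident, happy, neutral, sad, surprised.
    private static final int NUM_EMOTIONS = 7;

    /**
     * Called multiple times, once per training image, with the 7 emotion labels (0 or 1).
     * Return 1 to stop receiving training images, 0 to keep receiving them.
     */
    public int training(int[] imageData, int[] emotions) {
        // ... accumulate features and labels for your model here ...
        return 0;
    }

    /**
     * Called multiple times, once per test image. Must return exactly 7 values in [0, 1]:
     * the estimated probability of each emotion being present.
     */
    public double[] testing(int[] imageData) {
        double[] result = new double[NUM_EMOTIONS];
        // Placeholder prediction: a uniform prior. Replace with a trained model.
        java.util.Arrays.fill(result, 1.0 / NUM_EMOTIONS);
        return result;
    }

    /** Decode the packed 24-bit pixel at column x, row y of the 250x250 image. */
    static int[] rgbAt(int[] imageData, int x, int y) {
        int p = imageData[x + y * 250];
        int red   = (p >> 16) & 0xFF; // 2^16 * Red component
        int green = (p >> 8)  & 0xFF; // 2^8 * Green component
        int blue  =  p        & 0xFF; // Blue component
        return new int[] { red, green, blue };
    }
}
```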
Testing and scoring
There are 1 example test, 2 provisional tests and at least 10 system tests.
For each test image, the squared Euclidean distance (E) between your returned emotion values and the ground-truth values will be calculated. Let SSE be the sum of these squared distances E over all images in the test case. A baseline sum of squared distances (SSE_Base) will be calculated by predicting the mean value of each emotion. The mean emotion values can be found in the source code of the offline tester. Your score for a test case is then a generalized R^2 measure of fit:
Score = 1000000 * MAX(1 - SSE / SSE_Base, 0)
You can see these scores for example test cases when you make example test submissions. If your solution fails to produce a proper return value, your score for that test case will be 0. The overall score on a set of test cases is the arithmetic average of the scores on the single test cases in the set. The match standings display overall scores on provisional tests for all competitors who have made at least 1 full test submission. The winners are the competitors with the highest overall scores on the system tests. An offline tester/visualizer tool is available.
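For local sanity checking, the sketch below reproduces this scoring rule from the definitions above. The class and method names are illustrative, and the per-emotion mean values are not reproduced here (take them from the offline tester's source), so they are passed in as a parameter.

```java
// Hedged sketch of per-test-case scoring: Score = 1,000,000 * max(1 - SSE/SSE_Base, 0).
public final class ScoreUtil {

    /**
     * truth[i][e] is the ground-truth value of emotion e on image i,
     * pred[i][e] is your prediction, and meanEmotion[e] is the per-emotion
     * mean used by the baseline predictor.
     */
    public static double testCaseScore(double[][] truth, double[][] pred, double[] meanEmotion) {
        double sse = 0.0;      // sum of squared distances for your predictions
        double sseBase = 0.0;  // same quantity for the constant mean-value baseline
        for (int i = 0; i < truth.length; i++) {
            for (int e = 0; e < truth[i].length; e++) {
                double d = pred[i][e] - truth[i][e];
                sse += d * d;
                double b = meanEmotion[e] - truth[i][e];
                sseBase += b * b;
            }
        }
        return 1_000_000.0 * Math.max(1.0 - sse / sseBase, 0.0);
    }
}
```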
Data set generation
The publicly available Labeled Faces in the Wild [1] dataset of images was processed multiple times on Amazon Mechanical Turk in order to create the emotion values. The median of the multiple results was used to determine the most accurate emotion values. Note that the resulting dataset is real-world data and may contain some noise. You need to deal with any noise; it will not be fixed or removed from the data. You can download the training data here.
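Purely as an illustration of the aggregation described above (not the actual pipeline used to build the dataset), a per-emotion median over repeated annotations could be computed as follows; the array layout and names are assumptions.

```java
import java.util.Arrays;

// Illustrative reduction of repeated annotations to final labels via the per-emotion median.
public final class LabelAggregation {

    /** annotations[k][e] = value given by annotator k for emotion e. */
    public static double[] medianPerEmotion(double[][] annotations) {
        int numEmotions = annotations[0].length;
        double[] result = new double[numEmotions];
        for (int e = 0; e < numEmotions; e++) {
            double[] column = new double[annotations.length];
            for (int k = 0; k < annotations.length; k++) {
                column[k] = annotations[k][e];
            }
            Arrays.sort(column);
            int mid = column.length / 2;
            // Median: middle element for odd counts, average of the two middle elements for even counts.
            result[e] = (column.length % 2 == 1)
                    ? column[mid]
                    : 0.5 * (column[mid - 1] + column[mid]);
        }
        return result;
    }
}
```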
Special rules and conditions
Definition
Notes
- The match forum is located here. Please check it regularly because important clarifications and/or updates may be posted there. You can click "Watch Forum" if you would like to receive automatic email notifications about all posted messages.
- The time limit is 60 minutes per test case and the memory limit is 4096 MB.
- There is no explicit code size limit. The implicit source code size limit is around 1 MB (it is not advisable to submit code close to or larger than that size).
- The compilation time limit is 60 seconds. You can find information about the compilers we use, compilation options and processing server specifications here.
- [1] Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. University of Massachusetts, Amherst, Technical Report 07-49, October 2007.
Examples
0)
This problem statement is the exclusive and proprietary property of TopCoder, Inc. Any unauthorized use or reproduction of this information without the prior written consent of TopCoder, Inc. is strictly prohibited. (c)2020, TopCoder, Inc. All rights reserved.