NASA Lunar Image Co-Registration Code Refinement Challenge
Prize
1st: $1,500
2nd: $700
3rd: $300
Challenge Overview
"Help us find where the Eagle has landed, Again!!"
Have you ever looked at images on your favorite mapping webpage and noticed changes in the world depicted? Maybe you looked at your childhood home and spotted an old car in the driveway, or stores on a street that have been built since. More dramatically, have you noticed changes in the landscape, like the differences in these glaciers over time?
NASA is currently collecting data from the Moon and also has images from the 1960s. They are looking to develop a software application, the Lunar Mission Coregistration Tool (LMCT), that will process publicly available imagery files from past lunar missions and enable manual comparison to imagery from the ongoing Lunar Reconnaissance Orbiter (LRO) mission. Check out this introductory video to find out more!
The imagery processed by this tool will be used in existing citizen science applications to identify long-lost spacecraft components, such as the Eagle, the Apollo 11 lunar module that returned Buzz Aldrin and Neil Armstrong to the command module and has since impacted the lunar surface at a location we don't know. It will also be used to identify and examine recent natural impact features on the lunar surface by comparing the old images against the new ones. This is known as image (co-)registration in the field of computer vision. The successful result of this project will allow us to better understand what's been going on on the Moon for the past sixty years (maybe now is when we'll discover if it's really made of cheese).
Task Detail
As you may know, we have already run a series of ideation and code challenges, and we are happy with the winning solution so far. That solution is based on the USGS Integrated Software for Imagers and Spectrometers (ISIS) 3 library.
Objective. In this challenge, we would like to refine the codebase to:
- Make it support two new formats:
  - USGS ISIS Cubes image format
  - JPEG2000 image format
- Bonus: Suggest potential improvements to the existing approach
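As a starting point, here is a minimal sketch of how both new formats could be read, assuming GDAL is used: recent GDAL builds ship read support for ISIS3 cubes and, depending on the build, JPEG2000. The file names and the single-band assumption are illustrative only, not challenge data.

```python
# Sketch: loading both new formats through GDAL. Assumes the 'gdal'
# Python bindings and numpy are installed.
import numpy as np
from osgeo import gdal

def load_as_array(path: str) -> np.ndarray:
    """Open an ISIS cube (.cub) or JPEG2000 (.jp2) file as a numpy array."""
    dataset = gdal.Open(path, gdal.GA_ReadOnly)
    if dataset is None:
        raise IOError(f"GDAL could not open {path}")
    # Read the first band; lunar imagery is assumed single-band grayscale here.
    band = dataset.GetRasterBand(1)
    return band.ReadAsArray()

# Illustrative usage -- placeholder file names:
# old_img = load_as_array("apollo_metric.cub")
# new_img = load_as_array("lroc_nac.jp2")
```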
To keep this challenge self-contained, here are some details from the previous challenges:
Image Data: The client has provided a few examples for a better understanding; one compares a modern LRO image with an Apollo image from 1971. More details are available here.
We also provide more sample data for your local testing. For example, you can play with the low-resolution TIF images here. Information on how to process LROC NAC images natively is available here. Another example pairs a Lunar Orbiter image with an LROC image.
We would like you to find more paired test cases, with justification for how they will be useful in the later code challenge. Please explore the NASA website. Here are a few places worth exploring:
- Lunar Orbiter 3 (as good as 60 meters): https://pds-imaging.jpl.nasa.gov/data/lo/LO_1001/DATA/LO3/
- Lunar Orbiter 4 (as good as 60 meters): https://pds-imaging.jpl.nasa.gov/data/lo/LO_1001/DATA/LO4/
- Lunar Orbiter 5 (as good as 1 meter): https://pds-imaging.jpl.nasa.gov/data/lo/LO_1001/DATA/LO5/
- Apollo 15 Metric (10-20 meters per pixel): https://pdsimage.wr.usgs.gov/Missions/Apollo/Metric_Camera/
- Apollo 16 Metric (10-20 meters per pixel): https://pdsimage.wr.usgs.gov/Missions/Apollo/Metric_Camera/
- Apollo 17 Metric (10-20 meters per pixel): https://pdsimage.wr.usgs.gov/Missions/Apollo/Metric_Camera/
- Apollo Metric Mosaic (10 meters per pixel): https://pdsimage2.wr.usgs.gov/Individual_Investigations/Apollo_Metric_Albedo_Mosaic/
- LROC WAC (~100 meters per pixel): http://wms.lroc.asu.edu/lroc/rdr_product_select
- LROC NAC (~0.5 meters per pixel): http://wms.lroc.asu.edu/lroc/rdr_product_select
One thing to note is that older pictures will lack the resolution of current pictures and may have been taken at different times of the lunar day. They also may not be properly aligned. Your program should account for this; it's not always going to be as straightforward as the example above. The data we have from older missions is imperfect, and the task will be tricky at times. That's why we turned to the crowd!
You are encouraged to use available open-source computer vision and image processing libraries (OpenCV or alternatives). If you have any questions, ask them in the forums.
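To make the task concrete, here is a minimal feature-based co-registration sketch using OpenCV. This is one plausible baseline, not the existing codebase's method: ORB keypoints are matched across the two images and a RANSAC-fitted homography warps the old image onto the new one. It assumes both images are already loaded as 8-bit grayscale numpy arrays; SIFT or AKAZE may match better when illumination differs strongly between epochs.

```python
import cv2
import numpy as np

def coregister(old_img: np.ndarray, new_img: np.ndarray) -> np.ndarray:
    """Warp old_img into new_img's pixel grid via a feature-based homography."""
    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(old_img, None)
    kp2, des2 = orb.detectAndCompute(new_img, None)

    # Brute-force Hamming matching with cross-check to prune bad pairs.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects the outlier matches that low resolution and
    # differing sun angles will inevitably produce.
    H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = new_img.shape[:2]
    return cv2.warpPerspective(old_img, H, (w, h))
```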
Evaluation Method: Our plan is to visually inspect ("eyeball") the results, as we don't expect to have many annotated test cases. Any data that helps rate the goodness of co-registration is appreciated.
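If you want to report a quantitative score alongside the visual checks, one hedged option is normalized cross-correlation between the warped old image and the new reference over their valid overlap. This is an assumption about what a useful score could look like, not a required metric.

```python
import numpy as np

def ncc_score(warped: np.ndarray, reference: np.ndarray) -> float:
    """Normalized cross-correlation over the non-empty region of the warp."""
    valid = warped > 0  # ignore the empty border introduced by warping
    a = warped[valid].astype(np.float64)
    b = reference[valid].astype(np.float64)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```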
Final Submission Guidelines
Submission
Your submission should contain:
- A working codebase. You may use C++ or Python. It should be wrapped up as a single command-line entry point, and the image file names should be part of your command-line call (a possible shape is sketched after this list).
- A detailed document about your algorithm. How did you arrive at your final design? Did you try other algorithms?
- Detailed deployment instructions. What libraries are required? How are they installed? How is your codebase run?
- Example results. Please use enough diverse example results to showcase the effectiveness of your solution.
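For illustration, a Python entry point of the required shape might look like the following. The flag names and defaults are hypothetical; only the requirement that image file names appear on the command line comes from this spec.

```python
import argparse

def main() -> None:
    parser = argparse.ArgumentParser(description="Lunar image co-registration")
    parser.add_argument("old_image",
                        help="path to the historical image (.cub, .jp2, .tif, ...)")
    parser.add_argument("new_image",
                        help="path to the LRO reference image")
    parser.add_argument("--output", default="registered.tif",
                        help="where to write the warped result")
    args = parser.parse_args()
    # ... load both images, co-register, write args.output ...

if __name__ == "__main__":
    main()
```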
Judging Criteria
You will be judged on the quality of your algorithm and implementation, the quality of your documentation, and how promising your solution is as the basis for the follow-up challenges. Note that evaluation in this challenge may involve some subjectivity from the client and Topcoder; however, the criteria below will largely be the basis for the judgment.
- Effectiveness (40%)
  - Is your algorithm more effective than the provided codebase, at least on the provided example images?
  - Does your codebase run on other new images and new image formats?
  - Is your output on other new images reasonably good?
- Feasibility (40%)
  - Is your algorithm efficient and scalable to large volumes of data?
  - Is your algorithm easy to deploy?
- Clarity (20%)
  - Please make sure your documentation for the algorithm, code, and results is easy to read. Figures, charts, and tables are welcome.
Submission Guideline
We will only evaluate your last submission, so please include your complete solution and as much detail as possible in a single submission.