Challenge Overview

"Help us find where the Eagle has landed, Again!!"

 

Have you ever looked at images on your favorite mapping webpage and noticed changes in the world depicted? Maybe you looked at your childhood home and noticed an old car in the driveway or noticed stores on a street that have since closed down. More dramatically, have you noticed changes in the landscape like the differences in these glaciers over time?

 

NASA has new data from the Moon as well as images from the early 1970s. They are looking to develop a software application, the Lunar Mission Coregistration Tool (LMCT), that will process publicly available imagery files from past lunar missions and enable manual comparison with imagery from the Lunar Reconnaissance Orbiter (LRO) mission. Check out this introductory video to find out more!

 

The imagery processed by this tool will be used in existing citizen science applications to identify long-lost spacecraft components, such as the Eagle, the Apollo 11 lunar module whose ascent stage returned Buzz Aldrin and Neil Armstrong to the command module and which has since been lost to time. It will also be used to identify and examine recent natural impact features on the lunar surface by comparing the old images against the new ones. Aligning two images of the same scene in this way is known as image (co-)registration in the field of computer vision. A successful result will allow us to better understand what has been happening on the Moon over the past half century (maybe now is when we’ll discover whether it’s really made of cheese).

Task Detail

As you may know, we have run a Topcoder code challenge on this problem before, but its results were not satisfactory. We have therefore decided to run an ideation challenge first and then another code challenge.

 

The objective of this ideation challenge is to better prepare the next code challenge. There are a few things we want to explore, as follows.

 

Image Data Collection: The client has provided a few examples to aid understanding. They basically compare a modern LRO image with an Apollo image. More details are available here.

We also provide more sample data for your local testing. For example, you can play with the low-resolution TIF images here. Information on how to process LROC NAC images natively is available here. Another example pairs a Lunar Orbiter image with an LROC image.

 

We would like you to find more paired test cases, along with justification of how each pair will be useful for the later code challenge. Please try to explore the NASA website. Here are a few places worth exploring:

 

Method Brainstorming: NASA has shared the following papers. The links have information on similar work being done for Mars imagery. They do not give details on the algorithms used, but those may be available with some digging. The first link is only an abstract, but it comes from someone the client identified as a potential source of working algorithms currently in use for Mars imagery.

  1. https://www.hou.usra.edu/meetings/lpsc2018/pdf/1178.pdf

  2. https://www.researchgate.net/publication/279553601_On_the_status_of_orbital_high-resolution_repeat_imaging_of_Mars_for_the_observation_of_dynamic_surface_processes

  3. https://cordis.europa.eu/project/id/607379/reporting

 

One thing to note is that older pictures will lack the resolution of current pictures and may have been taken at different times of day. They may also not be fully aligned. Your program should account for this; it’s not always going to be as straightforward as the example above. The data we have from older missions is imperfect, and the task will be tricky at times. That’s why we turned to the crowd!
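To make this concrete, here is a minimal preprocessing sketch in Python with OpenCV that downsamples the sharper image toward the older image’s scale and applies local contrast equalization to soften illumination differences. The file names and the scale factor are placeholders for illustration only; real values would come from each product’s metadata.

import cv2

def normalize_frame(img, scale=None):
    # Optionally downsample the higher-resolution frame so both images have
    # roughly the same ground sampling distance (the exact factor must come
    # from the image metadata, not from this sketch).
    if scale is not None and scale != 1.0:
        img = cv2.resize(img, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_AREA)
    # Tile-based histogram equalization (CLAHE) softens differences in sun
    # angle and exposure between the old and new acquisitions.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)

# Hypothetical file names -- substitute crops from the provided sample data.
lro = cv2.imread("lro_nac_crop.tif", cv2.IMREAD_GRAYSCALE)
apollo = cv2.imread("apollo_frame.tif", cv2.IMREAD_GRAYSCALE)
lro_small = normalize_frame(lro, scale=0.25)   # assumes LRO is ~4x finer
apollo_eq = normalize_frame(apollo)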

 

We would like to hear comments and alternative ideas from you. Some proof-of-concept development would be a great plus! You are encouraged to use available open-source computer vision and image processing libraries (OpenCV or alternatives). If you have any questions, ask them in the forums.
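As an illustration of the kind of proof of concept we have in mind, below is a hedged sketch of one common baseline with OpenCV: detect ORB features in both preprocessed frames, match them, estimate a homography with RANSAC, and warp the Apollo frame into the LRO frame for blink or overlay comparison. This is only one possible approach; it assumes the frames overlap substantially and that enough features survive the resolution and lighting gap. The variable names continue from the preprocessing sketch above.

import cv2
import numpy as np

def coregister(moving, fixed, max_features=5000):
    # ORB features with brute-force Hamming matching; SIFT, AKAZE, or an
    # intensity-based method may cope better with large illumination gaps.
    orb = cv2.ORB_create(nfeatures=max_features)
    kp1, des1 = orb.detectAndCompute(moving, None)
    kp2, des2 = orb.detectAndCompute(fixed, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    good = matches[:max(4, int(len(matches) * 0.2))]   # keep the best ~20%

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Resample the older frame onto the newer frame's pixel grid.
    h, w = fixed.shape[:2]
    warped = cv2.warpPerspective(moving, H, (w, h))
    return warped, H, inlier_mask

apollo_registered, H, inliers = coregister(apollo_eq, lro_small)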

 

Evaluation Method: Our plan is to eyeball the results, as we don’t expect to have many annotated test cases. Is there any other way you would suggest?
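One lightweight alternative to pure eyeballing, sketched below under the assumption that a handful of tie points (e.g. the same small craters) can be hand-picked in both images: apply the estimated homography to the tie points from the old frame and report the root-mean-square pixel error against their positions in the new frame. The function name and inputs are illustrative, not part of the challenge specification.

import numpy as np

def reprojection_rmse(H, pts_moving, pts_fixed):
    # pts_moving, pts_fixed: N x 2 arrays of corresponding pixel coordinates
    # picked manually in the old and new images, respectively.
    ones = np.ones((len(pts_moving), 1))
    proj = (H @ np.hstack([pts_moving, ones]).T).T
    proj = proj[:, :2] / proj[:, 2:3]          # back to pixel coordinates
    err = np.linalg.norm(proj - pts_fixed, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))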

Final Submission Guidelines

Submission

Your submission should include a text, .doc, PPT, or PDF document that includes the following sections and the content described for each:

 
  • Overview: describe your approach in “layman's terms”

  • Methods: describe what you did to come up with this approach, e.g., literature search, experimental testing, etc.

  • Materials: did your approach use a specific technology?  Any libraries?  List all tools and libraries you used

  • Discussion: Explain what you attempted, considered, or reviewed, covering what worked and especially what didn’t work or was rejected.  For anything that didn’t work or was rejected, briefly include your explanation of the reasons (e.g., such-and-such needs more data than we have).  If you are pointing to somebody else’s work (e.g., you’re citing a well-known implementation or the literature), describe in detail how that work relates to this work and what would have to be modified

  • Data:  What other data should one consider?  Is it in the public domain?  Is it derived?  Is it necessary in order to achieve the aims?  Also, what about the data described/provided - is it enough?

  • Assumptions and Risks: what are the main risks of this approach, and what are the assumptions you/the model is/are making?  What are the pitfalls of the data set and approach?

  • Results: Did you implement your approach?  How’d it perform?  If you’re not providing an implementation, use this section to explain the EXPECTED results.

  • Other: Discuss any other issues or attributes that don’t fit neatly above that you’d also like to include

Judging Criteria

You will be judged on the quality of your ideas, the quality of your description of those ideas, and their effective demonstration on sample data along with success measures. The winner will be chosen based on the most logical and convincing reasoning as to how and why the presented idea will meet the objective. Note that this contest will be judged subjectively by the client and Topcoder; however, the judging criteria below will largely be the basis for that judgement.

 
  1. Effectiveness (50%)

    1. Are your collected image pairs helpful?

    2. Is your algorithm effective? Results on toy examples would be a great plus.

    3. Are there any interesting insights from the data analysis?

    4. PoC is not required, but it’ll be great to see some example results.

  2. Feasibility (30%)

    1. Is your algorithm efficient and scalable to large volumes of data?

    2. Is the additional data you propose feasible to obtain?

    3. Is your algorithm easy to implement? Is there an existing toolkit that we can use?

  3. Clarity (20%)

    1. Please make sure your report is easy to read.

    2. Figures, charts, and tables are welcome.

Submission Guideline

Only your last submission will be evaluated. We strongly recommend that you include your full solution and as much detail as possible in a single submission.

 

ELIGIBLE EVENTS: 2021 Topcoder(R) Open

REVIEW STYLE:
Final Review: Community Review Board
Approval: User Sign-Off

ID: 30159272