AI Challenge

2025 MIT ARCLab Prize for AI Innovation in Space 🛰️

Announcing the Prizewinners for our 2025 AI Challenge!

Thank you to all who participated in the 2025 MIT ARCLab Prize for AI Innovation in Space! Over the course of two challenge phases, teams from around the world tackled the problem of forecasting space weather-driven changes in atmospheric density using real orbital and solar data.

Congratulations to each of our competitors — from first-time entrants to final-round prizewinners — for helping to push the boundaries of AI for space weather resilience. You can view the full list of awardees in the final leaderboard below.

Rank  Team            Model_norm  Q      Total
🥇 1   Bimasakti       1.000       0.624  0.925
🥈 2   Millennial-IUP  0.938       0.640  0.879
🥉 3   cteceliker      0.803       0.720  0.786
🦭 4   SAADAT          0.709       0.807  0.729
🦭 5   mattmotoki      0.758       0.407  0.688
🦭 6   Digantara       0.543       0.744  0.583
🦭 7   Is_Fr           0.301       0.547  0.350
🦭 8   JMU-ARIES       0.209       0.512  0.270
Prizewinners & Final Leaderboard for the 2025 MIT ARCLab Prize for AI Innovation in Space
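The Total column is consistent with the same weighting used for the 2024 final score (CS = 0.8 · F2,norm + 0.2 · Q). This weighting is our inference for the 2025 table, not a stated rule; a quick arithmetic check under that assumption:

```python
# Assumed (not officially stated for 2025): Total = 0.8 * Model_norm + 0.2 * Q,
# i.e. the same 0.8/0.2 weighting as the 2024 final score.
rows = [
    ("Bimasakti",      1.000, 0.624, 0.925),
    ("Millennial-IUP", 0.938, 0.640, 0.879),
    ("cteceliker",     0.803, 0.720, 0.786),
    ("SAADAT",         0.709, 0.807, 0.729),
    ("mattmotoki",     0.758, 0.407, 0.688),
    ("Digantara",      0.543, 0.744, 0.583),
    ("Is_Fr",          0.301, 0.547, 0.350),
    ("JMU-ARIES",      0.209, 0.512, 0.270),
]
for team, model_norm, q, total in rows:
    # Tolerance of 1e-3 accounts for the three-decimal rounding in the table.
    assert abs(0.8 * model_norm + 0.2 * q - total) < 1e-3, team
print("all rows consistent with 0.8/0.2 weighting")
```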

Phase 1 weighted performance scores are available on the public leaderboard. You can still explore information about the competition dataset, baseline model, and more in our devkit docs.

Interested in what made the top submissions stand out? Stay tuned for our upcoming report! Click below to join our mailing list and receive future challenge announcements.

Recent Updates

July 17, 2025

Dear Participants,

Thank you all for your participation in the challenge. After carefully evaluating all the submissions, we are pleased to present the final leaderboard:

Congratulations on this achievement!

The prize distribution is as follows:

  • 1st Place: USD 2,000
  • 2nd Place: USD 1,200
  • 3rd Place: USD 800
  • Seal of Excellence: USD 200

A few remarks:

  1. Cash awards are taxable: automatic tax withholding will be carried out for nonresidents, while a Form 1099 will be issued to U.S. residents:
    • Taxes for domestic payments are subject to MIT rules.
    • Taxes for international payments (payments to non-U.S. citizens, including human subjects and recipients of student prizes or awards) are subject to a mandatory 30 percent tax withholding per U.S. government regulations.
    • For some international awards, a reduced cash prize of $200.00 USD is given out of discretionary funds, due to MIT regulations. Sorry for the inconvenience!
  2. To process the cash award payment, we will need the following info (please send it to ai_challenge@mit.edu, giordani@mit.edu, and jeileef@mit.edu):
    • For international payments (wire transfer):
      • Bank Name: … .
      • Bank Address: … .
      • Swift Code or Wire Transfer Routing Number: … .
      • Account or IBAN Number: … .
    • For domestic payments (check only):
      • Given and Family Name: … .
      • Mailing Address (where the check will be sent): … .
    • If you’re dividing the prize money among multiple team members, please have each member submit their information individually.
  3. All cash awards are subject to MIT and any relevant government policies.
    • Eligibility and Country Restrictions: Please note that all payments are subject to screening for compliance with U.S. law. Due to U.S. government sanctions and export control regulations, MIT is prohibited from making payments to individuals ordinarily resident in certain countries and regions, including Iran, Cuba, Syria, North Korea, and the Crimea, Donetsk, or Luhansk regions of Ukraine. Payments may also be restricted if a recipient appears on a U.S. government sanctions list.
  4. We are still exploring the possibility of inviting a team representative to present their results at a technical meeting at MIT. However, due to recent funding constraints, we are unable to provide an additional award to cover travel or other expenses.
  5. The top 3 winners of the competition are invited to work together with the organizers to prepare a single journal paper. Each team will be responsible for a subsection of the paper documenting its methodology and insights.
    • Here is the link to the Overleaf project.
    • We are planning to submit an abstract to the AGU-2025. The abstract deadline is July 31st, and we will take care of it.
    • Subsequently, we will send out an email with feedback based on the reviews, so that you can improve the reports that will become part of the journal paper.

Thank you for your participation, support, and understanding!

Best regards,
2025 MIT ARCLab Prize for AI Innovation in Space – Organizers

June 17, 2025

Dear Participants,

To ensure our judges have adequate time to conduct a thorough and careful review of all the reports, we decided to postpone the winner announcement to June 30th.

Additionally, we need to inform you of an adjustment to the prize award structure. Due to unforeseen funding uncertainties that have recently arisen, we’ve had to revise the total prize pool. The revised prize distribution is as follows:

  • 1st Place: USD 2,000
  • 2nd Place: USD 1,200
  • 3rd Place: USD 800
  • Seal of Excellence: USD 200

Thank you for your participation, support, and understanding, and good luck with the competition!

Best regards,
2025 MIT ARCLab Prize for AI Innovation in Space – Organizers

May 5, 2025

Dear Participants,

The Phase 1 scores are now posted on the leaderboard as follows:

  • Public weighted scores, which are obtained from the public dataset.
  • Private weighted scores, which are obtained from the private dataset.
  • Preliminary Model scores, which are computed as: M = 0.85 * public_weighted_score + 0.15 * private_weighted_score.

The purpose of the preliminary Model scores is to let you test your own model against the private dataset, which is now available on Codabench. This way, you can verify the score of the model you submitted for the private evaluation before the deadline. After Thursday, May 8th, 11:59:59 PM EDT, the Model scores will be finalized and invitations to Phase 2 will be sent out.
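For reference, the preliminary Model score defined above can be computed as follows (the two input scores here are made-up example values):

```python
# Preliminary Model score, as defined in the announcement:
# M = 0.85 * public_weighted_score + 0.15 * private_weighted_score
def preliminary_model_score(public_weighted_score: float,
                            private_weighted_score: float) -> float:
    return 0.85 * public_weighted_score + 0.15 * private_weighted_score

# Example with hypothetical weighted scores:
m = preliminary_model_score(0.90, 0.80)
print(round(m, 3))  # 0.885
```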

In the meantime, here you can find instructions and details for the finalists who are admitted to Phase 2, along with a LaTeX template you can use to prepare your technical report.

A quick reminder that the Phase 2 deadline is Monday, May 26th, 11:59:59 PM EDT.

Best regards,
2025 MIT ARCLab Prize for AI Innovation in Space – Organizers

Why Space Weather?

In 2024, solar storms lit up the skies with stunning auroras across the United States. But while these displays are captivating to observers on the ground, space weather has the potential to wreak havoc on our global satellite infrastructure. Geomagnetic storms cause rapid heating in Earth’s thermosphere, which can lead to more than a 10x increase in satellite drag in mere hours. In May 2024, the Gannon storm caused the largest mass migration of satellites in history and severely degraded satellite collision avoidance systems worldwide for multiple days (Parker and Linares, 2024). This challenge tackles the urgent need for more efficient and accurate tracking and orbit prediction capabilities for resident space objects in the increasingly crowded near-Earth environment. As space activities expand, the demand for advanced technologies to monitor and manage satellite behavior becomes paramount.

This year’s challenge objective is to develop cutting-edge AI algorithms for nowcasting and forecasting space weather-driven changes in atmospheric density across low Earth orbit using historical space weather observations. The available phenomenology includes solar and geomagnetic space weather indices, measurements of the interplanetary magnetic field, and measured solar wind parameters, which can be used in conjunction with existing empirical atmospheric density models. Participants are provided with a baseline prediction model and spacecraft accelerometer-derived in situ densities and are tasked with training or creating models to forecast the atmospheric density.

Dataset

The Satellite Tracking and Orbit Resilience Modeling with AI (STORM-AI) dataset contains a collection of historical orbital elements and satellite atmospheric densities, as well as information on the magnetic field, plasma, indices, particles, X-ray flux, and additional derived parameters. All training data is derived from public data sources distributed by organizations that are not affiliated with the AI Challenge, including ESA, NASA Goddard Space Flight Center, and NOAA.

Development Toolkit

The STORM-AI DevKit is accessible on GitHub here. It includes the code for the baseline model that appears on the public leaderboard, a high-fidelity orbit propagator, and more. Additionally, the STORM-AI Wiki Page reports extensive documentation about the challenge dataset, example code, tutorials, Codabench submission process, supplemental resources, and FAQs.

Prizes

We offer up to 10 cash prizes and the opportunity for 3 teams to present their work in a technical journal. Terms and conditions apply. Here is the prize breakdown:

  • First place 🥇 USD 2,000 in cash and an invitation to contribute their methodology and insights in a collaborative journal publication.
  • Second place 🥈 USD 1,200 in cash and an invitation to contribute their methodology and insights in a collaborative journal publication.
  • Third place 🥉 USD 800 in cash and an invitation to contribute their methodology and insights in a collaborative journal publication.
  • Seal of Excellence (4th – 10th)* 🦭 USD 200 in cash.

We are still exploring the possibility of inviting representatives from the first, second, and third winning teams to present their results at a technical meeting at MIT. However, due to recent funding constraints, we are unable to provide an additional award to cover travel or other expenses.

Terms and conditions: Cash awards are taxable, and automatic tax withholding will be carried out for nonresidents, while a 1099 will be issued for U.S. residents. Taxes for domestic payments are subject to MIT rules. Taxes for international payments (payments to non-U.S. citizens, including human subjects and recipients of student prizes or awards) are subject to a mandatory 30 percent tax withholding per U.S. government regulations. For some international awards, a reduced cash prize must be awarded due to MIT regulations.
Eligibility and Country Restrictions: All cash awards are subject to MIT policies and any relevant government policies. Please note that all payments are subject to screening for compliance with U.S. law. Due to U.S. government sanctions and export control regulations, MIT is prohibited from making payments to individuals ordinarily resident in certain countries and regions, including Iran, Cuba, Syria, North Korea, and the Crimea, Donetsk, or Luhansk regions of Ukraine. Payments may also be restricted if a recipient appears on a U.S. government sanctions list.

Citations

The challenge dataset contains multiple data sources and should be credited in accordance with the policies of the original data providers. Please refer to the Dataset and Resource sections of the wiki for more information on how to cite the 2025 AI Challenge and the STORM-AI dataset.

Contact Us

For general questions about the challenge, please contact the organizers at ai_challenge@mit.edu. If you have any questions regarding our development kit, you may submit them to our GitHub discussion forum.

Acknowledgement

Research was sponsored by the Department of the Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of the Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

© 2024 Massachusetts Institute of Technology.


2024 Leaderboard

Check out the 2024 competition here, and learn more about the results here.

Rank  Team            Phase I Score (F2,norm)  Phase II Score (Q)  Final Score (CS)
1     Hawaii2024      0.994                    0.827               0.960
2     Millennial-IUP  1.000                    0.713               0.943
3     QR_Is           0.979                    0.787               0.941
4     MiseryModel     0.987                    0.753               0.940
5     K-PAX           0.951                    0.653               0.892
6     Go4Aero         0.952                    0.640               0.890
7     FuturifAI       0.963                    0.520               0.874
8     Astrokinetix    0.875                    0.627               0.826
9     Colt            0.935                    0.293               0.807

Final scores are a function of Phase I and Phase II scores: CS = 0.8 · F2,norm + 0.2 · Q.
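The final scores in the table above can be reproduced from the stated formula; any small differences come only from the three-decimal rounding of the displayed values:

```python
# Reproduce the 2024 final scores: CS = 0.8 * F2_norm + 0.2 * Q
rows = [
    ("Hawaii2024",     0.994, 0.827, 0.960),
    ("Millennial-IUP", 1.000, 0.713, 0.943),
    ("QR_Is",          0.979, 0.787, 0.941),
    ("MiseryModel",    0.987, 0.753, 0.940),
    ("K-PAX",          0.951, 0.653, 0.892),
    ("Go4Aero",        0.952, 0.640, 0.890),
    ("FuturifAI",      0.963, 0.520, 0.874),
    ("Astrokinetix",   0.875, 0.627, 0.826),
    ("Colt",           0.935, 0.293, 0.807),
]
for team, f2_norm, q, cs in rows:
    # Tolerance of 1e-3 accounts for three-decimal rounding in the table.
    assert abs(0.8 * f2_norm + 0.2 * q - cs) < 1e-3, team
print("all final scores reproduced")
```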