AI Challenge

Help satellites weather the storm by participating in the 2025 MIT ARCLab Prize for AI Innovation in Space!

Welcome to Phase 0!

The warm-up phase allows us to provide participants with early access to several challenge resources. Submissions will be accepted and evaluated beginning in Phase 1, but why wait when you can get a head start on your code now?

Check out our devkit docs for the latest information about the competition dataset, submission instructions, tutorials, and more. Participants will gain access to new features of our development toolkit on GitHub over the course of Phase 0. Click below to join our mailing list and stay up to date on new wiki entries and devkit features.


Why Space Weather?

In 2024, solar storms lit up the skies with stunning auroras across the United States. But while these displays are captivating to observers on the ground, space weather has the potential to wreak havoc on our global satellite infrastructure. Geomagnetic storms cause rapid heating in Earth’s thermosphere, which can lead to more than a 10x increase in satellite drag in mere hours. In May 2024, the Gannon storm caused the largest mass migration of satellites in history and severely degraded satellite collision avoidance systems worldwide for multiple days (Parker and Linares, 2024). This challenge tackles the urgent need for more efficient and accurate tracking and orbit prediction capabilities for resident space objects in the increasingly crowded near-Earth environment. As space activities expand, the demand for advanced technologies to monitor and manage satellite behavior becomes paramount.

This year’s challenge objective is to develop cutting-edge AI algorithms for nowcasting and forecasting space weather-driven changes in atmospheric density across low Earth orbit using historical space weather observations. The available phenomenology includes solar and geomagnetic space weather indices, measurements of the interplanetary magnetic field, and measured solar wind parameters, which can be used in conjunction with existing empirical atmospheric density models. Participants are provided with a baseline prediction model and spacecraft accelerometer-derived in situ densities, and are tasked with training or creating models to forecast atmospheric density.
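To make the task concrete, here is a minimal sketch of one naive starting point, assuming a flat table of space weather drivers and an in situ density target. This is not the official baseline model; the file name and column names (f10_7, kp, dst, solar_wind_speed, density) are hypothetical placeholders, and the actual schema is documented in the devkit.

```python
# Minimal sketch of a density-forecasting starting point (hypothetical schema).
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("storm_ai_training.csv")  # hypothetical file name

features = ["f10_7", "kp", "dst", "solar_wind_speed"]  # hypothetical columns
target = "density"                                     # hypothetical column

# shuffle=False keeps chronological order, so validation follows training in time.
X_train, X_val, y_train, y_val = train_test_split(
    df[features], df[target], test_size=0.2, shuffle=False
)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

rmse = mean_squared_error(y_val, model.predict(X_val)) ** 0.5
print(f"validation RMSE: {rmse:.3e}")
```

A competitive entry would likely go well beyond a flat feature table, exploiting temporal structure (lagged drivers, storm onset dynamics) and the provided empirical atmospheric density models.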

Dataset

You can download the challenge dataset here.

The Satellite Tracking and Orbit Resilience Modeling with AI (STORM-AI) dataset contains a collection of historical orbital elements and satellite atmospheric densities, as well as magnetic field, plasma, index, particle, and X-ray flux data and additional derived parameters. All training data is derived from public data sources distributed by organizations that are not affiliated with the AI Challenge, including ESA, the NASA Goddard Space Flight Center, and NOAA.

The dataset consists of a public challenge dataset that can be used to train and develop AI algorithms, and a private evaluation dataset of the same type and format. For valid submissions, algorithm inputs must be limited to the phenomenology and data formats present in the public training dataset, but using additional phenomenology or data sources for model validation and development is allowed and encouraged.

Development Toolkit

New features of the development toolkit will be released on GitHub over the course of Phase 0, and all features will be available by the launch of Phase 1 on December 16, 2024.

The development kit is written in Python and comprises a set of essential utility functions, tutorials, and baseline implementations to help participants get started with the challenge problem. The tutorials guide participants through data reading, parsing, and manipulation, as well as training, evaluating, and submitting their ML algorithms to the competition platform.
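For a flavor of the data handling those tutorials cover, the sketch below parses a timestamped space weather series and resamples it to a uniform hourly cadence with pandas. The file and column names are hypothetical placeholders, not the devkit’s actual layout.

```python
# Hypothetical example: parse a timestamped index series and regularize its cadence.
import pandas as pd

sw = pd.read_csv("omni_indices.csv", parse_dates=["timestamp"])  # hypothetical file
sw = sw.set_index("timestamp").sort_index()

# Resample to hourly means; forward-fill gaps of up to three hours.
hourly = sw.resample("1h").mean().ffill(limit=3)
print(hourly.head())
```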

Important Dates

The timeline below is subject to change. We recommend signing up for the challenge mailing list to stay up to date on key dates and deadlines.

  • November 15, 2024: Warm-up phase starts.
  • December 16, 2024: Phase 1 of the competition starts. Submissions accepted on a rolling basis.
  • March 17, 2025: Phase 1 ends. Top 10 finalists are notified of advancement to Phase 2.
  • April 14, 2025: Phase 2 ends. Technical report deadline.
  • May 16, 2025: Winners announced.

Prizes

We offer 10 prizes totaling USD 25,000 in cash, plus travel expenses for three presenters to share their results at a technical meeting. Terms and conditions apply. Here is the prize breakdown:

  • First place*: USD 10,000 in cash and a trip for one team member to present their results at a technical meeting.
  • Second place*: USD 5,000 in cash and a trip for one team member to present their results at a technical meeting.
  • Third place*: USD 3,000 in cash and a trip for one team member to present their results at a technical meeting.
  • Seal of Excellence (4th – 10th)*: USD 1,000 in cash.

*Terms and conditions: Expenses for travel and accommodations may be reimbursed for one person from each of the first, second, and third place teams. Airfare is reimbursable for economy-class fares on U.S. flag carrier airlines only. Travelers must submit a budget for approval prior to the trip. Travelers must provide comparison airfare if their trip exceeds the bounds of one day prior to and one day following the designated trip dates. Expenses will be reimbursed after the trip is complete. Cash awards are taxable, and automatic tax withholding will be carried out for nonresidents, while a 1099 will be issued for U.S. residents. Taxes for domestic payments are subject to MIT rules. Taxes for international payments (payments to non-U.S. citizens, including human subjects and recipients of student prizes or awards) are subject to a mandatory 30 percent tax withholding per U.S. government regulations. For some international awards, a reduced cash prize must be awarded due to MIT regulations. All cash prizes will be awarded after the technical meeting. All cash awards are subject to MIT policies and any relevant government policies.

Citations

The challenge dataset contains multiple data sources and should be credited in accordance with the policies of the original data providers. Please refer to the Dataset and Resource sections of the wiki for more information on how to cite the 2025 AI Challenge and the STORM-AI dataset.

Contact Us

For general questions about the challenge, please contact the organizers at ai_challenge@mit.edu. If you have any questions regarding our development kit, you may submit them to our GitHub discussion forum.

Acknowledgement

Research was sponsored by the Department of the Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of the Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

© 2024 Massachusetts Institute of Technology.


2024 Leaderboard

Check out the 2024 competition here, and learn more about the results here.

Rank  Team            Phase I Score (F2,norm)  Phase II Score (Q)  Final Score (CS)
1     Hawaii2024      0.994                    0.827               0.960
2     Millennial-IUP  1.000                    0.713               0.943
3     QR_Is           0.979                    0.787               0.941
4     MiseryModel     0.987                    0.753               0.940
5     K-PAX           0.951                    0.653               0.892
6     Go4Aero         0.952                    0.640               0.890
7     FuturifAI       0.963                    0.520               0.874
8     Astrokinetix    0.875                    0.627               0.826
9     Colt            0.935                    0.293               0.807

Final scores are a function of Phase I and Phase II scores: CS = 0.8 × F2,norm + 0.2 × Q.
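For example, the first-place entry’s final score works out as 0.8 × 0.994 + 0.2 × 0.827 ≈ 0.96 (tabulated scores are rounded to three decimal places).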