Pages

Monday, March 2, 2026

An Interview With Robert Zubrin

https://www.patreon.com/posts/long-with-dr-151978948?utm_campaign=patron_engagement&utm_source=post_link&post_id=151978948&token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyZWRpc19rZXkiOiJpYTI6MDA0MmYzNjEtNjM4ZC00NDVlLWJkZDgtYWZlZGVjMzMzZDlkIiwicG9zdF9pZCI6MTUxOTc4OTQ4LCJwYXRyb25faWQiOjQ2MTU3MjM0fQ.tvmm0geBjGJzRKmnouWlLjmgqM5JGm0uI55D3tagwro&utm_id=56419506-ede6-496c-a496-11a14fd5e84c&utm_medium=email

Wednesday, February 25, 2026

The Starliner Failure

Starliner Boeing’s CST-100 Starliner docked to the International Space Station during the CFT mission in 2024. (credit: NASA) “We failed them”: NASA grapples with Starliner by Jeff Foust Monday, February 23, 2026 In briefings before the launch earlier this month of the Crew-12 mission to the International Space Station using a SpaceX Crew Dragon, reporters asked NASA officials about the status of the other commercial crew vehicle, Boeing’s CST-100 Starliner. That vehicle has been grounded since its flawed crewed test flight in mid-2024 that led NASA to bring the spacecraft back without the two astronauts who launched on it. Those astronauts, Suni Williams and Butch Wilmore, returned last March on a Crew Dragon, an eight-day test flight that turned into an eight-month expedition. NASA had said nothing about Starliner since last November, when it announced a contract modification. Boeing’s original commercial crew contract included six operational missions to the ISS after 2024’s Crew Flight Test (CFT) mission. Under the modification, the first of those, Starliner-1, would carry only cargo; in effect, another test flight after CFT. After that would be three more crewed operational flights, with options for two more. At the time NASA said Starliner-1 could launch as soon as April, but it had provided no updates since then, hence the questions at the Crew-12 briefings. “Starliner, we’re planning to fly on another uncrewed mission this spring, sometime in the spring to summer,” Ken Bowersox, NASA associate administrator for space operations, said at one briefing January 30. “We’ll launch it when it's ready. We’ll gather all the information we need to complete the certification, and then we want to work Starliner back into the rotation,” he added, alternating flights with Crew Dragon as originally planned. 
“Right now, we’re continuing to have a no-earlier-than April launch date,” for Starliner-1, said Steve Stich, NASA commercial crew program manager, at a second briefing February 9. He pointed to great strides in resolving the technical problems seen on the CFT mission. That included changing seals that caused helium leaks before and after launch, work he said had been closed out. There had also been tests of thrusters, several of which failed in flight, with engineers taking the data to develop models to predict thruster behavior, he noted. “When we get through that and get to a point where we’re comfortable predicting thruster performance, then we’ll go move forward and look towards a launch date,” he said. Stich added that NASA had not yet decided if the next crew rotation mission after Crew-12, launching late this year, would be another Crew Dragon mission, designated Crew-13, or the first operational Starliner crewed mission, Starliner-2. “We’d want to work through and get through Starliner-1 into the summer timeframe,” he said. “We have some time to decide.” Those comments offered a relatively optimistic assessment of Starliner, with a cargo test flight as soon as April and the beginning of routine missions as soon as the fall—six years after SpaceX’s Crew-1—suggesting that, perhaps, the worst of Starliner’s development problems was behind it. A report last week, though, changed that. At a press conference held on less than two hours’ notice, NASA administrator Jared Isaacman started by reading from a four-page letter he had sent to the agency’s workforce that day, noting that the Aerospace Safety Advisory Panel was set to brief Congress on its annual report and that NASA would release its own independent report on the Starliner CFT mission. “Let me begin with the most important point. Starliner has design and engineering deficiencies that must be corrected, but the most troubling failure revealed by this investigation is not hardware,” he said. 
“It’s decision making and leadership that, if left unchecked, could create a culture incompatible with human spaceflight.” He followed with a history of the program, including the problems seen on the first two uncrewed test flights in 2019 and 2022. The investigations into those flights, he said, “did not drive to or take sufficient action on the actual root cause of the anomalies that we observed. The investigations often stopped short of the proximate or the direct cause, treated it with a fix, or accepted the issue as an unexplained anomaly.” Isaacman then took up the CFT mission, heaping blame on both Boeing and his own agency. “The engineering reality, however, is that Starliner, with its qualification deficiencies, is less reliable for crew survival than other crewed vehicles,” he said. “But at NASA, we managed the contract. We accepted the vehicle. We launched the crew to space. We made decisions from docking through post mission actions. A considerable portion of the responsibility and accountability rests here,” he said. The problems with CFT, he argued, were as much with organizational failings at NASA as they were technical problems with the Boeing-developed spacecraft. “Witness statements routinely reflected a belief that management within the commercial crew program could only succeed if Starliner launched,” he said. “On orbit, disagreements over crew return options deteriorated into unprofessional conduct while the crew remained on orbit,” he continued. 
“Witness statements describe an environment where advocacy tied to the Starliner program’s viability persisted alongside insufficient senior NASA leadership engagement to refocus teams on safety and mission outcomes.” Isaacman concluded reading the letter with a commitment to continue working with Boeing on Starliner, while making changes within the agency. “Programmatic advocacy exceeded reasonable bounds and placed the mission, the crew, and America's space program at risk in ways that were not fully understood at the time decisions were being contemplated. This created a culture of mistrust that can never happen again, and there will be leadership accountability,” he promised. “Pushing a rock uphill” As he spoke, NASA released the 312-page report, “redacted only where legally required or as directed by our commercial partner,” Isaacman said. There are still significant redactions in the report, particularly in sections about the technical causes of the helium leaks and thruster failures on the Starliner CFT mission. Many details are blacked out, along with most illustrations or images of spacecraft systems. Nonetheless, the report makes clear the severity of the issues on the flight. On approach to the station, five of Starliner’s thrusters failed, causing a loss of 6DOF control, the ability to maneuver in all six degrees of freedom. Starliner could not move forward on the +X axis, towards the ISS, and had degraded control of pitch and yaw. “The loss of X-axis translation resulted in a loss of movement in the forward direction and the Starliner vehicle was no longer capable of docking to the ISS, until a subset of thrusters could be recovered,” the report stated. Four of those thrusters were recovered, allowing the docking to proceed. The report praised that decision, noting that backing off and making another attempt, or deciding to return to Earth immediately, could have made the thruster problems worse. That would have created a “higher risk to loss of life” for Wilmore and Williams. 
But the report noted that there had been thruster failures on the two uncrewed test flights, OFT and OFT-2. Even before OFT in 2019, testing showed there was a risk that the thrusters would be subject to thermal environments that “could exceed qualification limits.” Nonetheless, NASA agreed to accept the risk of thruster problems on CFT. “This decision was made without resolving the core thermal qualification issues, effectively mischaracterizing the severity and operational impact of the thermal risks,” the report stated [emphasis in original]. In addition to the thruster failures in the Starliner service module on approach to the ISS, there was a separate failure of a thruster in the crew module during its uncrewed return to Earth. Notably, that failure brought the module to zero fault tolerance. “Loss of the single remaining redundant thruster, for this control axis, would have resulted in a loss of crew,” the report stated. There are fewer redactions in the section on organizational issues. For that section, the independent review team relied on 66 interviews with people ranging from senior management to line engineers, examining decision-making processes, communication, team dynamics, and more. The report paints a picture of a rather dysfunctional management process during CFT, as engineers and managers evaluated the severity of the technical problems with Starliner and whether it was safe for Williams and Wilmore to return home on it. “It was probably the ugliest environment that I’ve been in,” one interviewee said of the Starliner mission management meetings. While crew safety was “the primary focus of discussion” throughout the mission, the report showed that the way those discussions were carried out was problematic. Technical teams said they did not have enough time to evaluate data and develop potential explanations of the problems with the spacecraft, which made meetings unproductive. 
With no published agenda for the meetings of the Starliner mission management team, people felt compelled to attend every one out of the belief it “could be ‘the one’ that was going to make the big crew return decision.” Other issues ranged from differences between NASA and Boeing in how they evaluate risk to the perception among some that NASA had to show the spacecraft was unsafe, rather than Boeing having to prove it was safe. “Strong personalities within CCP [commercial crew program] and Boeing were seen as overly optimistic in presenting data, which some interviewees interpreted as lobbying rather than objective analysis,” the report stated. “This dynamic discouraged dissenting views and contributed to a growing sense of distrust. As one interviewee described, opposing positions felt like ‘pushing a rock uphill.’” Then there was what Isaacman called in his letter the “unprofessional conduct” during debates about Starliner’s return. The report said that several interviewees brought up what it called “frustrating and/or unprofessional communication styles” during those meetings without being prompted. The report included several quotes from people recalling those meetings: “There was yelling in meetings. It was emotionally charged and unproductive.” “There are some people that just don’t like each other very much, and that really manifested itself during CFT.” “It was probably the ugliest environment that I’ve been in.” (That is only a subset of the quotes included in the report on the topic.) The report identified three root causes for the Starliner CFT incident, none of which were technical in nature. One was a “hands-off approach” used by NASA during the start of the commercial crew program that kept the agency from gaining enough data and knowledge about the spacecraft to accept it as a service. 
The second was inadequate systems engineering and integration at Boeing during the design phase, resulting in Starliner thrusters operating outside of qualification ranges. The third was a culture in the commercial crew program that seeks two successful providers, which led to accepting greater risk. Starliner NASA administrator Jared Isaacman at the microphone, with associate administrator Amit Kshatriya behind him, at Thursday’s briefing about the Starliner report. (credit: NASA/Joel Kowsky) Whither Starliner? One recommendation made by the independent review was that the Starliner CFT mission should be classified as a “Type A mishap” in agency parlance, one that requires an independent review. Isaacman said he accepted that recommendation and considered the report to be that independent review. That classification attracted headlines because it is NASA’s most serious classification for a mishap, one that includes the losses of the shuttles Challenger and Columbia. But it also includes any accident that results in more than $2 million in damage or “unexpected aircraft or spacecraft departure from controlled flight.” Isaacman noted at the briefing that a recent gear-up landing by a NASA WB-57 aircraft at Ellington Airport in Houston, which damaged the plane but did not injure the two people on board, was also a Type A mishap. One reason the independent review recommended the Type A mishap classification was so that the incident could be formally captured in NASA databases, like the NASA Mishap Information System; it warned that without doing so, “transparency, trust, and institutional learning are compromised.” At its most recent public meeting in December, the Aerospace Safety Advisory Panel had also raised the issue. Not classifying the mission as a mishap or “high-visibility close call” while in flight confused decision-making processes, the panel argued. 
“Had this been done in a timely fashion, after the docking of Starliner, the communication of these decision-making authorities and the primary path to resolution of the crew return question would have dramatically improved,” said former astronaut Charlie Precourt, a member of the panel. Isaacman offered few other specifics about agency responses to the report. Asked what “leadership accountability” meant in terms of potential changes in the commercial crew program or the Space Operations Mission Directorate, Isaacman instead focused on the failures of accountability during the CFT mission, which he said went all the way to the top of NASA. Leadership failures, he argued, existed at multiple levels of NASA “right up to the administrator of NASA,” who at the time of the mission was Bill Nelson. “I can't imagine a situation like that, why there would not have been some direct involvement to bring people back to the mission and the crew and figure out the correct pathway forward.” In the briefing, some asked why NASA should continue to support Starliner given its problems and the impending end of life of the ISS. “We see near-endless demand for crew and cargo access to low Earth orbit, well beyond the life of the International Space Station,” Isaacman argued. “America benefits by having multiple pathways to take our crew and cargo to orbit.” That is, interestingly, similar to the concern raised by the report that the commercial crew program was too focused on supporting two providers, resulting in increased risk. And while there are obvious benefits of having two crew providers, it’s not clear if that “near-endless demand” projected by Isaacman will materialize in time to save Starliner’s business case. 
In fact, the report stated that limited hardware spares and plans to retire the Atlas 5 “raise concerns about the program’s long-term viability.” Isaacman made clear, though, that he would not rush to bring Starliner back. Asked about the schedule presented in recent weeks, with the uncrewed Starliner-1 flying as soon as April and a crewed mission as soon as late this year, he did not explicitly rule out that schedule, but suggested it was not feasible. “Our focus here at NASA is working alongside Boeing again to understand the technical challenges that have caused these service module and crew module thruster issues, to remediate them, make sure we have a full understanding of the risk associated with this vehicle, implement the report, the recommendations from the report, and get back to flight,” he said. He added that “what we don't want to do is perpetuate past problems by establishing endless target launch dates that we were unable to meet.” One finding of the report was that, over a five-year period leading up to CFT, the program was within six months of a scheduled launch date for 41 of those months: schedules repeatedly slipped as work dragged on. The report noted that “repeatedly moving launch dates a little at a time will have a negative impact on team dynamics.” “To be clear, NASA will not fly another crew on Starliner until technical causes are understood and corrected, the propulsion system is fully qualified, and appropriate investigation recommendations are implemented,” he said. Boeing was not a part of the briefing, but did issue a brief statement confirming it would continue to work on Starliner. “In the 18 months since our test flight, Boeing has made substantial progress on corrective actions for technical challenges we encountered and driven significant cultural changes across the team that directly align with the findings in the report,” the company stated. 
“We’re working closely with NASA to ensure readiness for future Starliner missions and remain committed to NASA’s vision for two commercial crew providers.” Also at the briefing was Amit Kshatriya, NASA associate administrator. He filled in some of the technical details but also provided closing remarks. “For the workforce of NASA—and I’ve been here in the agency almost 20 years now—it is hard to hear sometimes, when we were talking about our culture and we're talking about how we do things,” he said. “I think it's important to recognize that this form of leadership allows our culture to get better.” Kshatriya, a former flight director in Mission Control, noted he worked with both Williams and Wilmore throughout his career. “Butch and Suni are honestly like family to me,” he said. “They have so much grace, and they’re so competent, the two of them, and we failed them. The agency failed them,” he said. “We have to say that. We have to recognize that our responsibility is to them and to all the crews that are coming and to the crews that are about to go fly. And our responsibilities to each other, too. We're a family.” Jeff Foust (jeff@thespacereview.com) is the editor and publisher of The Space Review, and a senior staff writer with SpaceNews. He also operates the Spacetoday.net web site. Views and opinions expressed in this article are those of the author alone.

The Jupiter Icy Moons Orbiter

JIMO The Jupiter Icy Moons Orbiter (JIMO) program was started in 2002 and canceled in 2005. It would have sent a large spacecraft to orbit Jupiter. It would have been insanely expensive. (credit: NASA) Prometheus bound: The legacy of the Jupiter Icy Moons Orbiter by Dwayne A. Day Monday, February 23, 2026 In 2002, NASA began one of the most ambitious robotic projects the agency ever considered, a large, nuclear-powered spacecraft to explore the icy moons of Jupiter. Known as the Jupiter Icy Moons Orbiter, or JIMO, it would have used a nuclear reactor and a powerful electric propulsion system to reach and orbit Jupiter, visiting and studying Europa, Ganymede, and Callisto. But two and a half years later, JIMO was canceled after spending $463 million, with no hardware built. The Jupiter Icy Moons Orbiter is a cautionary tale about undertaking a science mission without clear scientific support, or a full understanding of the technical and programmatic risks it entails. The brief program’s legacy is mixed and has not been well-studied, but undoubtedly still has lessons to teach. JIMO The basic concept of a nuclear-powered, electric propulsion spacecraft has been around a long time. In 1963, JPL studied the “Electric Space Cruiser” that used the same concepts and overall design that NASA considered again forty years later. (credit: JPL) JIMO emerges JIMO was approved by NASA administrator Sean O’Keefe and was widely considered to be his personal initiative rather than a priority for the planetary science community. In 2002, NASA canceled plans to pursue a Europa orbiter mission, which was a high priority for the planetary science community. Later that year, in August 2002, at O’Keefe’s urging, an “Eight Day Study” was held at NASA Headquarters outlining a possible new mission to the outer planets. The project was initiated outside the normal process for developing large planetary missions and was conducted with limited science community input. 
By September, the Jupiter Icy Moons Tour Studies were started, and two months later the Jupiter Icy Moons Orbiter pre-project commenced at O’Keefe’s directive. This resulted in a presentation to O’Keefe in January 2003. Soon the Tour Studies were completed and a JIMO new start was authorized by Congress. The concept of a nuclear-powered, electric propulsion spacecraft was not new. In 1963, the Jet Propulsion Laboratory had studied the “Electric Space Cruiser” for missions to Jupiter and beyond. But the discovery of gravity-assist trajectories negated much of the need for the technology. As Voyager 2 demonstrated in the 1970s and 1980s, planetary gravity, particularly Jupiter’s gravity well, could be used to send a spacecraft farther out into the solar system. JIMO would have the same basic layout as JPL’s early sixties design, with a reactor at one end of a long boom, as far as possible from the rest of the spacecraft to protect the electronics from radiation. Electric propulsion thrusters would be mounted at the other end of the boom. In between would be the scientific instruments and radiators to dissipate the heat of the reactor. Early in 2003, as the JIMO initiative progressed, NASA created Project Prometheus, the technology development program for JIMO. NASA formed a science definition team and released a request for proposal for industry studies. Industry contracts were let by April 2003. In June 2003, NASA held a science forum and a budget workshop. The first industry studies were received by December 2003. JIMO was technically ambitious. The United States had not flown a space nuclear reactor since 1965, when SNAP-10A operated at very low power in low Earth orbit for 43 days. 
Numerous American space reactor development projects such as SNAP-8 and SP-100 had been canceled in the decades since without ever producing flight hardware. Even the Soviet Union, which had developed and flown many space nuclear reactors, had never operated one for more than a year. JIMO would have to operate for a decade in space, in a harsh radioactive environment around Jupiter. The program also had to determine how to convert the heat from the reactor into electricity using the most efficient and reliable technology. In addition, the electric propulsion system would have to be significantly higher power than any yet developed. This required not only new thruster technology, but new electronic control systems to handle the high power levels. These were all serious technological challenges to overcome. JIMO The Jupiter Icy Moons Orbiter spacecraft would have had to unfold in space, with the reactor at one end and the propulsion system and instruments at the other end. (credit: JPL) Changing policy environment JIMO was started just before NASA entered a dark and tumultuous phase. In February 2003, the Space Shuttle Columbia broke up during reentry, killing its crew. The accident not only forced a reevaluation of the shuttle program, but led to a high-level policy review of the goals of the American space program. This would eventually affect Prometheus and JIMO. President George W. Bush announced the Vision for Space Exploration in January 2004. The VSE, as it was often called, established the goal of returning humans to the lunar surface. The policy would require the ending of the shuttle program and the International Space Station. Without a significant increase in NASA’s budget, it would require cuts to other NASA programs, including space science programs. Democrats in Congress, although not inherently opposed to JIMO, sought information on how much it would cost. NASA refused to provide cost estimates, claiming that it was too early to do so. 
But there were plenty of indications that JIMO would likely be the most expensive robotic spacecraft ever built. JIMO The United States had only flown a single, low-power reactor in space, in 1965. JIMO would have required a powerful reactor and a system for converting its heat to electricity. It would have had to operate in space for up to ten years. (credit: JPL) Expanding scope NASA created the Exploration Systems Mission Directorate (ESMD) to enact the Vision for Space Exploration. Prometheus and JIMO were transferred to ESMD. In March 2004, NASA assigned responsibility for developing the nuclear reactor to Naval Nuclear Reactors. Administrator O’Keefe had briefly served as Secretary of the Navy in the first Bush administration and knew of Naval Nuclear Reactors’ ability to develop advanced systems. NNR was the most experienced organization in the nuclear reactor field in the world. What had started out as JIMO in early 2003 was expanding throughout the year to include other possible missions. Whereas Prometheus was a logical programmatic move to develop the technology for the actual JIMO mission, it soon became clear that a separate flight test of the reactor and electric propulsion system would be required before the JIMO mission could be launched. The test flight eventually became known as Prometheus-1. By 2004, JIMO added “Task 2A” for follow-on applications of JIMO spacecraft to the study contracts. Other possible missions after JIMO included the exploration of Uranus and Neptune. The reactor could be launched “cold,” meaning that it would not be active and would pose limited radioactive danger during launch. It would be turned on once in orbit. When O’Keefe first announced JIMO, he referred to the high velocities of the nuclear electric spacecraft as opening the outer solar system to exploration. 
But electric propulsion systems achieve high velocity gradually, after months or even years of operation. Once the program was underway and scientists and engineers began studying the mission, it became clear that JIMO would spend a substantial amount of time in Earth orbit, spiraling outwards while increasing velocity. This raised concerns about operating a nuclear reactor in Earth orbit for a lengthy period, something that was against US space policy at the time. Chemical propellant transfer stages would be necessary to push the spacecraft out of Earth orbit before its reactor would be activated. By May 2004, NASA determined that the mass estimates for JIMO exceeded available launch vehicles. One possibility was that NASA would need to develop a new heavy-lift launch vehicle exclusively to launch JIMO. Three launches would be required to assemble the vehicle in low Earth orbit, which added technological complexity because robotic assembly in orbit had not yet been demonstrated. The launches were planned to take place in 2015, with the spacecraft reaching Jupiter in 2021. JIMO would have spent two or three months each exploring Ganymede and Callisto, and a month exploring Europa. It might have also been possible to explore Io, completing the mission in 2025. One proposal was for a Europa lander to be included in the mission. In September 2004, NASA awarded a letter contract to Northrop-Grumman Space Technologies. By October, NASA signed a formal Memorandum of Agreement with Naval Reactors. JIMO JIMO proponents at NASA believed that the technology could be used for other deep space missions. But JIMO alone would have wiped out the NASA space science program. (credit: National Academies of Sciences, Engineering, and Medicine) Setbacks and cancellation In November 2004, the fiscal year 2005 JIMO budget was cut 26% and the JIMO launch was delayed to 2017. The overall program objectives started to conflict. 
Whereas the original plan had been to build JIMO, now NASA was considering building Prometheus-1 and launching and testing it before launching JIMO. This increased the overall program cost, but NASA officials also realized that if both Prometheus-1 and JIMO were developed at the same time, any design flaws discovered during the Prometheus-1 flight test would already be incorporated into the JIMO hardware and it would be too late to modify the design. The program schedule would inevitably require restructuring, with JIMO’s launch being further delayed. In February 2005, Administrator O’Keefe departed NASA and the following month Michael Griffin was confirmed by the Senate as his replacement. Griffin had extensive technical and managerial experience with space systems. Soon thereafter, Griffin canceled Prometheus and the JIMO program. “JIMO was in my opinion too ambitious to be attempted,” Griffin said. “It was not a mission in my opinion that was well formed.” By May, he indicated that he believed that NASA should pursue a mission to Europa, a welcome indication to scientists that JIMO had not permanently damaged the Europa goal. The Prometheus project was redirected toward the development of nuclear power for a lunar surface base, but it atrophied and died. Scientific dilemma: gift horse or white elephant? JIMO had posed an uncomfortable dilemma for the American planetary science community. The 2001 planetary science decadal survey had established a Europa mission as its top science priority, but that mission had been rejected by NASA as too expensive. Now NASA was pursuing a far more expensive and ambitious outer planets mission. But JIMO was not a Europa mission, and would spend only a short time exploring Europa—ironically, it would spend far less time exploring Europa than the other Galilean moons of lesser interest to planetary scientists. For the science community, JIMO was not the mission they wanted, and it was way too expensive to afford. 
Nobody wanted to publicly repudiate the gift. On the one hand, NASA was spending significant amounts of new money on an outer planets mission, and members of that community were benefitting from the spending. But on the other hand, many members of that community expected that JIMO would be cancelled, collapsing due to its own immense cost. Within the outer planets science community, some believed that they should not publicly question the program while the money was flowing, whereas others grimly believed it was a program so big that it was inevitably delaying more realistic plans for a Europa mission. JIMO The Prometheus Project Final Report was released long after the program was canceled and revealed the overall cost estimate for JIMO: $22.5 billion. (credit: JPL) JIMO’s legacy? Throughout the program, NASA officials refused to reveal cost estimates for JIMO, claiming that it was too early to produce such estimates. But the agency was holding cost workshops during the program; it was just not releasing the results. According to one space scientist, the inside joke at the Jet Propulsion Laboratory was that JIMO was a “20-20-20 mission”—it would weigh 20,000 kilograms, take 20 years to build, and cost $20 billion. The spacecraft dry mass grew to over 22,000 kilograms. But that was not the real shocker. Only months after the program was canceled, when the Prometheus Project Final Report was quietly released, did it become clear that the JIMO program had developed a cost estimate. JIMO would cost nearly twenty-one and a half billion dollars to build and launch. That did not include the cost of the precursor mission. It was a staggering amount of money that would have consumed the entire space science budget for several years. 
JIMO would have cost more than twice as much as the controversial James Webb Space Telescope (JWST).

JIMO JIMO final cost estimate produced in late 2005

The primary recipient of JIMO funding was the Jet Propulsion Laboratory in California, which received over $200 million, although $75 million also went to the Johnson Space Center, which had no experience with robotic science missions. Some of the JPL money was used to study power conversion technologies that were no longer needed once the program was canceled.

JIMO JIMO actual costs

JPL scientists later stated that significant amounts had gone into modeling the radiation environment of Jupiter and how to shield the JIMO spacecraft’s electronics from intense radiation. That knowledge was applied to the Juno mission to Jupiter, which launched in 2011 and continues today. The data was also applied to a later study program, the Europa Jupiter System Mission (EJSM). However, EJSM was deemed too expensive to fund, with an estimated cost of $5–7 billion, and was canceled in 2011. NASA later selected the Europa Clipper mission for a more focused study of Europa. Although intended to cost less, Europa Clipper is now expected to cost $5.2 billion. It launched in 2024 and is scheduled to arrive at Europa in 2030. It has been over two decades since JIMO’s cancellation, and the program remains under-examined and mostly forgotten. Dwayne Day can be reached at zirconic1@cox.net.

We Can Build Cities On The Moon: Who Will Govern Them?

lunar base Illustration of a SpaceX Lunar Starship taking off from a Moonbase. (credit: SpaceX)

We can build cities on the Moon—but who will govern them? Amid a global lunar rush, will we land peaceful norms alongside our spacecraft?
by Rachel Williams and Jatan Mehta
Monday, February 23, 2026

Earlier this month, SpaceX and its founder Elon Musk flipped their stance on the Moon, from treating it as a distraction to positioning it as central to their idea of preserving our civilization, after more than two decades of emphasizing Mars as the primary destination. The stated rationale and catalyst is a plan to build a Moonbase, and within ten years a self-growing city, that can power lunar factories and launch orbital AI data centers, the latter being the backdrop to SpaceX’s acquisition of xAI. Even though elements of these visions remain speculative, such ambitious announcements carry real repercussions for lunar governance and global policy.

SpaceX’s move is neither self-driven nor made in isolation. Last year, seeing China’s steady strides toward landing humans on the Moon by 2030, the US government sought to accelerate its delayed Artemis efforts to land astronauts before China. NASA reopened the Artemis 3 landing contract, and Jeff Bezos-owned Blue Origin bid for it, also deciding to pause other internal projects to focus the company’s resources and efforts on the Moon.

Industry momentum toward the Moon is part of a broader global trajectory. This development is significant. The last decade has seen global interest in lunar exploration, with multiple countries sending diverse missions. Many more are in the pipeline, with the majority converging at the lunar south pole, with its potential water ice deposits, and in low lunar polar orbit.
Continued mission successes by China and renewed focus from the US and its partners will likely accelerate activity further. The economic and scientific implications of any sustained lunar infrastructure could be immense. Regardless of near-term feasibility, the mere fact that public commitments to large-scale lunar development are being made by players with the theoretical capacity to reach the Moon in substantial ways is enough to alter international policy and regulatory landscapes on Earth.

Amid such heated competition and accelerating timelines of humanity’s future, early precedents—how actors share information, access resources, understand land usage and rights, and regulate infrastructure—could shape global lunar activity for decades to come. These practices could either enable broad participation or gate future access. They could also gravely affect fundamental lunar science in the process, which is tied to understanding the solar system itself. To counter the many consequences of unilateral large-scale lunar activities by any party, peaceful governance norms and practical coordination mechanisms must develop alongside technological progress.

The US has historically favored de facto practices over multilateral agreement in space. Norms set through the Artemis Accords by the US or its Artemis partners would not apply to a non-signatory like China; the opposite is also true. In such low-trust environments, it is critical that operating parties share minimum viable information and coordinate their activities through the UN and complementary neutral platforms to avoid operational overlaps and disputes over lunar areas and their resources.

Middle space powers, including India and Japan, can play crucial swing roles by intentionally shaping norms through their capabilities and partnerships. Two upcoming missions have exactly such potential: India’s Chandrayaan 4 sample return and the joint ISRO-JAXA LUPEX rover, both heading to the south pole.
In such ways, we can begin to put mutually beneficial governance frameworks in place early enough, gradually building trust through transparency for a peaceful future in our skies. The Moon is an object of hope for cultures all around the world. Retaining that shared meaning requires that governance evolve alongside technological progress.

Rachel Williams is the Executive Director of the Open Lunar Foundation, a US-based non-profit forging and promoting technical and policy building blocks for cooperative and peaceful lunar exploration globally. Jatan Mehta is a globally published and cited space writer and author of Moon Monday, the world’s only blog and newsletter dedicated to covering lunar exploration developments by countries worldwide.

When Iran Took The Internet Hostage, Elon Musk Held The Keys

Starlink Starlink has enabled Iranian protestors to keep in contact with the outside world, but raises policy issues. (credit: SpaceX)

When Iran took the Internet hostage, Elon Musk held the keys
by Bharath Gopalaswamy
Monday, February 23, 2026

As protests spread across Iran in early 2026, the government reached for one of its most reliable tools of control. Internet access was sharply restricted, mobile data was slowed or cut, and international connections were disrupted. For years, such shutdowns allowed authoritarian states to fragment protest movements and choke off the flow of information beyond their borders. This time, the blackout did not fully hold.

As Iran’s terrestrial networks faltered, some citizens turned to satellite internet to stay online, relying in particular on Starlink terminals smuggled into the country in recent years. Images and videos continued to reach foreign media. Communication persisted, unevenly but persistently, even as authorities intensified efforts to jam signals and seize equipment. What had once been a near-total shutdown became a contested space, shaped not only by state power but by private infrastructure operating beyond the reach of national controls.

Starlink did not cause Iran’s protests or determine their outcome. But its presence altered the information environment in ways that mattered. Even limited access was enough to show that the state no longer holds an absolute monopoly over connectivity. Control over information now depends not only on domestic law and infrastructure, but also on which private actors own and operate global networks.

For Iranian authorities, this was not merely a technical inconvenience. It was a direct challenge to state authority. The response was swift. Signal interference intensified. Starlink use was criminalized.
Officials renewed efforts to accelerate a tightly controlled national network designed to function with minimal dependence on the global internet. These moves reflected a clear recognition that privately provided connectivity had become politically consequential. What unfolded was not simply repression adapting to technology. It was a struggle over who gets to decide when and how people can communicate.

The policy vacuum

Starlink is a private, commercial service. Yet in moments of crisis, decisions about coverage, pricing, access, and resistance to state pressure shape political outcomes inside sovereign countries. These decisions are not made by governments or international bodies. They are made by firms whose primary mandate is commercial, not political.

Private actors that control critical digital infrastructure now possess leverage that increasingly resembles state power, but without equivalent accountability. They can expand access or constrain it. They can comply with pressure or resist it. In environments where information control is central to political survival, these choices matter deeply.

Authoritarian states understand this clearly. Iran’s response was not limited to suppressing protest. It was aimed at contesting private control over connectivity itself. Signal jamming, legal penalties, and digital isolation strategies were all efforts to reassert authority over systems that had slipped beyond state reach. In effect, a commercial service was treated as a strategic actor.

Democratic governments occupy a more ambiguous position. They benefit when private platforms and infrastructure enable free expression and expose abuses.
Yet when those same tools provoke retaliation, escalation, or broader geopolitical risk, officials often emphasize that the decisions lie with independent companies rather than states. Moral outcomes are welcomed, while political responsibility is quietly deferred.

This ambiguity amounts to a form of strategic outsourcing. Private firms shape the information environment during political crises, while governments avoid defining the boundaries of acceptable risk or responsibility. The result is uncertainty for everyone involved. Companies face pressure to make choices that resemble foreign policy decisions without clear guidance or legitimacy. Users rely on connectivity that may expose them to surveillance or punishment. States respond with increasingly aggressive measures to regain control.

Where governance lags

The Iran case highlights a deeper governance gap. Existing policy frameworks were built for a world where connectivity flowed through cables, towers, and state-regulated providers. They offer little guidance for global satellite networks that operate across borders and beyond traditional territorial control. Export controls, licensing regimes, and telecommunications law lag behind the political impact of privately owned constellations with near-global reach.

The institutions we do have for governing space and satellite activity are only beginning to grapple with these questions. At the United Nations Office for Outer Space Affairs (UNOOSA), the February 2026 Scientific and Technical Subcommittee meetings in Vienna featured side events on responsible AI in Earth observation, digital twins for disaster management, and the emerging Space4Ocean Alliance linking space and ocean governance.
Yet even in these forums, debates about norms, debris, and data rarely extend to the concrete question of who should decide when a commercial constellation can be used to route around an authoritarian shutdown, or what obligations such providers owe to the people they put at risk.

This gap is unlikely to close on its own. As satellite connectivity expands and costs fall, similar dynamics will emerge in other authoritarian contexts. States that cannot fully control information flows will increasingly seek to disrupt, degrade, or criminalize them. Private actors will be forced to decide how far they are willing to shape outcomes without a public mandate.

UNOOSA’s conferences on space law and policy, together with its Space4Ocean and Space2030 initiatives, are slowly building multilateral language around responsible behavior in orbit, public-private partnerships, and the use of satellite data for resilience and human security. But these conversations still treat commercial operators as stakeholders to be consulted, not as strategic actors whose decisions can tilt the balance inside a country like Iran. Until bodies like UNOOSA and its member states are willing to define clearer expectations for private constellations in moments of crisis, responsibility will remain as distributed and contested as the networks themselves.

The lesson from Iran is not that technology inevitably undermines repression. It is that power has shifted faster than governance. When Iran took the internet hostage, Elon Musk did not set out to be an arbiter of political struggle in Tehran. But by controlling an offshore escape route for information, he and his company held a set of keys the Islamic Republic could not easily confiscate. The remaining question is not whether private actors will shape these moments, but who will accept responsibility when they do.

Bharath Gopalaswamy is an aerospace and defense expert with extensive experience in AI, space technologies, and advanced systems.
He is the author of Final Frontier: India and Space Security.

A.I. And Army Astronauts

Crew-11 When a member of Crew-11 suffered a medical issue in January, the crew could easily return to Earth, an option that won’t exist for future missions to the Moon and Mars. (credit: NASA/Bill Ingalls)

AI and Army astronauts: A judge advocate’s solution to protecting the soldier-astronaut
by Mitch Y. Topaloglu
Monday, February 23, 2026

The United States Army is the most formidable and versatile fighting force in human history. Some of the greatest pioneers of our Republic have come from the Army. In the spirit of CPT Lewis and 2LT Clark, the Army continues a legacy of trailblazers, found in the ranks of the soldier-astronauts. These astronauts are among the best and brightest our republic has to offer. As they forge a path to the future, the Army has a responsibility to protect these pioneering soldiers and prioritize their well-being.

With renewed attention on the space program under the new Artemis campaign, the Army must be aware that more soldiers will be called upon to serve, whether on the International Space Station (ISS), a permanent lunar outpost, or eventually a mission to Mars. The well-being of these soldiers is crucial to the success of their mission to explore and advance humanity.

Unfortunately, for the first time in 65 years of spaceflight, NASA was forced to evacuate an astronaut, a member of Crew-11, in January 2026 due to an undisclosed health emergency. Imagine if, instead of being only 400 kilometers from Earth on the ISS, an astronaut had been 400,000 kilometers away on the Moon. Or imagine if the crew of a mission to Mars had experienced a medical emergency in January 2026, when Mars was 400 million kilometers away. Those astronauts would have been stuck on the other end of a solar conjunction, out of contact with Earth.
Medical evacuation would become impracticable, if not impossible altogether. NASA has already begun employing AI healthcare tools for diagnosis and treatment. For the soldier-astronaut, there is a permanent data privacy concern: protecting sensitive health information from adversaries. Here, due to the dual tyrannies of distance and data, a decentralized AI model in the form of Federated Learning (FL) can be, and has been, used in space for astronaut safety and privacy, as seen in frame 1 of Figure 2 below.

The operational reality: Distance as a medical adversary

The physics of space operations creates medical challenges that have no terrestrial equivalent. A future lunar base would require a three-day transit. Mars missions face greater isolation, being an odyssey of approximately seven to ten months from Earth in ideal conditions, depending on orbital positions. Round-trip communications alone consume up to 40 minutes, making real-time medical intervention impossible.

Bandwidth constraints compound this tyranny of distance. NASA can currently return 50 megabits per second (Mbps) to a lunar base under ideal conditions, but Mars missions operate at far lower data rates (about 3.1 Mbps). In extreme cases, during a solar conjunction, there can be a total blackout of communications to Mars. A single human genome comprises three billion base pairs, requiring gigabytes of storage; continuous biometric monitoring generates gigabytes daily. Transmitting this information to Walter Reed becomes practically impossible.

Table 1: Bandwidth complications in the solar system

Feature | ISS | Potential Lunar Base | Future Mars Outpost
Standard Bandwidth | 600 Mbps [1] | 50 Mbps [2] | 3.1 Mbps [3]
One-Way Latency | <1 second [6] | ~3–14 seconds [4] | 3 to 22 minutes [3]
Blackout Risk | Near-zero | Low (Lunar Occultation) | Periodic (Solar Conjunction) [5]

The Crew-11 incident illustrates the vulnerability of reliance on Earth-side medical expertise.
A medical condition could not be adequately resolved in orbit, forcing a mission abort. The crew on the ISS reportedly used an ultrasound to aid in diagnosis. Given the latency and bandwidth realities of Mars, transmitting ultrasound data to ground control is a suboptimal solution. Ideally, the crew would employ AI for diagnostic or treatment support. While a centralized AI trained entirely on Earth could provide a framework, AI models require relevant data to provide relevant results. Put simply, the AI cannot fix what it does not know.

Federated Learning: Decentralized by design

NASA has employed AI for years. FL, by contrast, has seen recent attention because of its versatility given the bandwidth limitations of space-to-Earth communication. FL is a distributed machine learning paradigm that inverts the data-to-model relationship. Historically, raw data is sent to a model for training; in FL, the model travels to the data. FL enables the AI to learn and adapt using real-time medical data without needing to transmit sensitive, high-volume information back to Earth.

In a solar conjunction scenario on Mars, an FL-enabled system allows the spacecraft to remain a “learning node.” It can refine diagnostic algorithms locally, ensuring that the AI’s “medical knowledge” is tailored to the specific, evolving health profiles of those specific astronauts. By moving the model to the data, FL creates a self-improving medical suite that can function independently of Earth, effectively turning the spacecraft from a mere transport into a limited autonomous medical center.

Technical overview

FL radically updates the data-to-model framework: A global model (e.g., a foundation model) is distributed to edge devices, such as spacecraft medical systems. Each edge device trains the model locally (an ideal example being parameter-efficient fine-tuning [PEFT]) without transmitting its data or large foundation models, computing mathematical gradients.
Only gradients—numerical weight updates, not raw data—are transmitted to a central coordinator. The central node aggregates these updates, produces an improved global model, and redistributes it (i.e., the weights/adapters), repeating iteratively, as can be seen in Figure 1.

figure Figure 1. A diagram of a proposed federated learning architecture for space operations

HeteroFL: Addressing space system heterogeneity

Space medical systems face unique challenges that standard FL has not addressed. A lunar base might have robust computing resources, while a small spacecraft has minimal capability. Traditional FL assumes all nodes share the same computing capacity, but this assumption fails in space, with its vastly different hardware capabilities, as shown in frame 2 of Figure 2.

HeteroFL is a federated learning framework that addresses nodes equipped with very different computation and communication capabilities. In essence, not all space systems are created equal, but all systems can still contribute to a PEFT. In practice, HeteroFL permits a lunar base to train a full-complexity model, a spacecraft a medium-complexity variant, and astronaut wearables lightweight versions. All can contribute to the same global medical AI, as seen in frame 4 of Figure 2. Each system trains the portion of the model it can support, transmits gradients based on its bandwidth, and receives model updates scaled to its capabilities.

figure Figure 2: A proposed solution for soldier-astronaut privacy. Frame 1: Describes the bandwidth bottleneck. Frame 2: An overview of the levels of FL in space. Frame 3: The iterative nature of FL. Frame 4: HeteroFL, built around different scales.
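The distribute-train-aggregate loop described above can be illustrated with a minimal, toy sketch. This is not code from any NASA or Army system; the model, data, and function names are all invented for illustration, and a real deployment would use a framework with secure aggregation rather than plain NumPy.

```python
# Minimal federated-averaging sketch: each node computes a gradient on
# private local data; only the gradients travel to the coordinator,
# which averages them and updates the shared global model.
import numpy as np

rng = np.random.default_rng(0)

# Toy "global model": weights of a linear predictor y = x @ w.
global_w = np.zeros(3)
true_w = np.array([1.0, -2.0, 0.5])  # ground truth for the toy data

def local_gradient(w, X, y):
    """One local training step: gradient of mean squared error on this
    node's private data. Only this array ever leaves the node."""
    return X.T @ (X @ w - y) / len(y)

# Three edge nodes (e.g., habitat, spacecraft, wearable), each holding
# private data that is never transmitted.
nodes = []
for _ in range(3):
    X = rng.normal(size=(32, 3))
    nodes.append((X, X @ true_w))

lr = 0.1
for _ in range(200):
    # Each node trains locally...
    grads = [local_gradient(global_w, X, y) for X, y in nodes]
    # ...and the coordinator aggregates only gradients (FedAvg-style),
    # then redistributes the improved global model to every node.
    global_w -= lr * np.mean(grads, axis=0)

# global_w now closely approximates true_w, learned without any node
# ever sharing its raw data.
```

The same loop generalizes to neural networks and PEFT adapters: only the update tensors are exchanged, never the patient records.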
The legal framework: Privacy as a command responsibility

The Privacy Act of 1974

The Privacy Act establishes that federal agencies may not disclose records contained in a “System of Records” without written consent, subject to specific exceptions. A System of Records is any group of records under agency control from which information is retrieved by personal identifier. Any centralized database aggregating health records from multiple soldier-astronauts constitutes a System of Records, triggering comprehensive notice requirements and severely restricting disclosure. Each willful unauthorized access constitutes a separate violation, exposing the United States to penalties of actual damages plus fees.

DoD Instruction 6025.18: The minimum necessary standard

DoD Instruction 6025.18 implements HIPAA privacy standards within the Military Health System. Central to this instruction is the “Minimum Necessary” standard: covered entities must limit uses and disclosures of Protected Health Information (PHI) to the minimum necessary to accomplish the intended purpose. When multiple technical approaches exist to accomplish a mission, the legal mandate is to select the approach that minimizes PHI collection, use, and disclosure.

Traditional centralized training requires collecting, transmitting, and storing complete medical records. This maximizes data collection but creates tension with the Minimum Necessary standard. FL accomplishes the medical AI role while transmitting only mathematical gradients, which are not individually identifiable health information. When two architectural approaches accomplish the same mission but one requires transmitting PHI while the other transmits only gradients, the Minimum Necessary standard prioritizes the latter.
HITECH Act: Breach notification and enhanced penalties

The Health Information Technology for Economic and Clinical Health (HITECH) Act strengthened HIPAA’s enforcement mechanisms and imposed additional requirements on HIPAA’s covered entities. HITECH’s most consequential provision for AI deployment is its breach notification requirement. Unauthorized acquisition, access, use, or disclosure of PHI that compromises its security or privacy constitutes a breach requiring notification to affected individuals, the Department of Health and Human Services, and, in cases affecting more than 500 individuals, public media notification.

At first glance, this might seem a premature concern. There have only been about 400 astronauts so far in the history of NASA. While the number of Army astronauts is a far cry from the 500 required to trigger public media notification now, rapid growth of the corps of soldier-astronauts may be necessary within the next decade. If the United States seeks to compete with China in a new space race, it will need many more astronauts to do so.

For centralized medical systems, a single security compromise could then expose the complete medical histories of hundreds of astronauts, triggering mass breach notification. The consequences extend beyond administrative inconvenience. Adversaries obtaining comprehensive health profiles of soldier-astronauts gain intelligence for targeting, exploitation, and psychological operations. HITECH also mandated security risk assessments and imposed enhanced penalties, with civil monetary penalties reaching $1.5 million per violation category per year for willful neglect.
The FL legal architecture for the judge advocate

Satisfying the minimum necessary standard

Federated Learning may be the ideal AI architecture to satisfy DoDI 6025.18’s Minimum Necessary standard for space-based medical applications. The mission is clear: deploy decentralized medical AI capable of autonomous diagnosis. Centralized models require collecting, transmitting, and storing complete medical records, maximizing data collection to the detriment of Minimum Necessary. FL accomplishes the mission while transmitting only gradients, which are not individually identifiable health information.

Avoiding Systems of Records and breach risk

FL avoids creating centralized targets. Medical records remain distributed across their points of collection. No central vault contains all records. The central coordinator possesses only the global model and aggregated gradients, which cannot be reverse engineered to reconstruct individual patient data when secure aggregation techniques are employed.

This architectural feature has legal significance across multiple frameworks. First, if no central database retrieves information by personal identifier, there may be no System of Records triggering Privacy Act obligations. Second, from a HITECH Act perspective, FL dramatically reduces breach risk exposure. A compromise of a single edge node affects only that node’s crew, potentially as small as a handful of individuals rather than the entire astronaut corps. This proportionate risk profile means that even in worst-case security failures, the breach notification burden remains manageable and the intelligence value to adversaries remains limited.

Preserving role-based access control

AR 40-66’s role-based access requirements align naturally with FL’s distributed architecture. The flight surgeon or medical officer responsible for a crew retains complete control over their patients’ records. The local medical AI trains on this data under their authority as part of providing care.
Central command, data scientists, and system administrators never gain access to individual patient records. They interact only with the global model.

Practical implementation

The future of medical care demands decentralized edge computing at each spacecraft and habitat module. Edge nodes continuously train on local data to improve predictive accuracy for the specific crew. Periodically, based on bandwidth availability, nodes compute gradients using secure aggregation protocols that prevent individual gradient inspection. A ground-based coordinator receives encrypted gradients, aggregates them without inspecting individual contributions, and produces an updated global model. The improved model is distributed back to edge nodes, providing each crew with the benefit of collective learning without compromising individual privacy.

HeteroFL enhances this architecture by allowing each edge node to train models sized appropriately for its hardware—full neural networks on robust habitat systems, medium-sized models on spacecraft, lightweight versions on wearables. All contribute to the same global model, maximizing both privacy protection and operational capability across the solar system.

Risks and mitigations

The explainability challenge is real: when a federated model makes a diagnostic error, investigating the failure is more complex than in centralized architectures. However, local training data remains available at edge nodes for audit purposes. Importantly, technical audit challenges do not create statutory liability in the way that Privacy Act violations do.

Data poisoning risks exist if an adversary compromises an edge node or sensor system. However, these are technical security challenges, not privacy or legal compliance issues, and they can be mitigated through modern advances in computer science. Privacy violations, by contrast, expose the Army to civil liability, Congressional oversight, and potentially to adversary manipulation.
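The secure-aggregation step mentioned above, in which the coordinator sums gradients without being able to inspect any individual contribution, can be illustrated with a toy pairwise-masking scheme. This is a didactic sketch only: the "shared masks" here are plain random vectors standing in for keyed secrets, and a real protocol would derive them cryptographically and handle dropouts.

```python
# Toy pairwise masking: each ordered pair of nodes shares a random mask
# that one node adds and the other subtracts. The masks cancel in the
# aggregate sum, so the coordinator sees the exact total but each
# individual masked gradient looks like noise.
import numpy as np

rng = np.random.default_rng(42)

gradients = [rng.normal(size=4) for _ in range(3)]  # one per edge node

n = len(gradients)
masked = [g.copy() for g in gradients]
for i in range(n):
    for j in range(i + 1, n):
        mask = rng.normal(size=4)  # stands in for a shared secret key
        masked[i] += mask          # node i adds the pairwise mask
        masked[j] -= mask          # node j subtracts the same mask

# The coordinator only ever receives `masked`. Each entry is obscured,
# yet the sum equals the true gradient sum because every mask cancels.
aggregate = np.sum(masked, axis=0)
true_sum = np.sum(gradients, axis=0)
```

Because the coordinator learns only `aggregate`, a compromise of the ground segment reveals no single crew member's update, which is the property the legal analysis above relies on.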
Recommendations for judge advocates

First, when privacy-preserving alternatives are technically feasible, centralized AI approaches to medical applications may constitute violations of DoDI 6025.18’s Minimum Necessary standard. Judge advocates (JAs) should proactively ask vendors whether such alternatives were considered.

Second, Privacy Impact Assessments should explicitly evaluate federated architectures in their alternatives analysis and address HITECH-inspired considerations, including breach risk exposure and the intelligence value centralized databases present to adversaries.

Third, the Pentagon should develop template Model Sharing Agreements and legal guidance formally classifying gradient updates as non-PHI to facilitate rapid capability sharing across commands.

Fourth, JAs should work with medical authorities to establish clear documentation distinguishing QA activities from human subjects research when deploying FL systems, emphasizing the continuous-improvement nature of FL and the primacy of patient care over knowledge generation.

Fifth, establish annual reviews of operational FL deployments comparing actual privacy performance against projections. As privacy-preserving technologies evolve (e.g., homomorphic encryption and improved differential privacy), the Pentagon must periodically reassess whether current architectures remain optimal or whether newer techniques should replace existing systems. The goal is continuous improvement in privacy protection while maintaining operational capability.

Conclusion

The Crew-11 incident demonstrates that privacy-protective data silos in space operations carry operational costs. As the Department of War expands into the solar system, maintaining Earth-dependent oversight becomes untenable. Artificial intelligence offers the only viable path forward that can coexist within today’s legal framework. Federated Learning provides a legal strategy that aligns AI capability with privacy protection.
For JAs, when advising on medical AI for space operations, privacy-preserving architectures are not optional enhancements; they may be legally mandated by existing regulations, operationally necessary given bandwidth constraints, and ethically required to protect the soldiers who forge the republic’s future in the Army’s most extreme environment yet.

The views expressed are those of the author and do not reflect the official position of the United States Army, Department of the Army, or Department of War.

First Lieutenant Mitch Y. Topaloglu graduated with honors from Wake Forest University in 2022 with a B.A. in Computer Science before earning his J.D. with honors from the University of North Carolina School of Law. He is an officer in the United States Army completing the Judge Advocate Officer Basic Course. 1LT Topaloglu writes on the promises and perils of emerging technology, with a focus on AI.

We Should Have Had A Base On The Moon By Now

https://www.youtube.com/watch?v=KwH2O7Nh3Fk

Tuesday, February 17, 2026

When Second Best Is Not Good Enough

IDSCS The US military’s first operational communications satellite system was designed to be relatively simple and fast, after a much more complicated program failed. Here Initial Defense Satellite Communications System (IDSCS) satellites are integrated into their payload dispenser in 1966. (credit: USAF) When second best is good enough: The Initial Defense Satellite Communications System by Dwayne A. Day Monday, February 16, 2026 The US Air Force pioneered military satellite communications in the early space age. But its path to getting there was not direct or smooth. In the late 1950s, the Army Signal Corps was placed in charge of developing a military satellite communications capability, with the Air Force supplying the launch vehicle. The Army program was named Advent, and it was a highly ambitious plan to put a large satellite in geosynchronous orbit at a time when the United States was having difficulty launching even small satellites into low Earth orbits. By 1962, Advent’s mass and costs had increased to such an extent that it was cancelled, and a lightweight version of the satellite was proposed instead. However, by December 1962, the lightweight geosynchronous satellite was still in limbo, with the Air Force making no moves to begin a formal procurement effort despite several companies already prepared to bid.[1] The Air Force was instead focused on a less-complex approach, known as the Initial Defense Communication Satellite Program, or IDCSP, later renamed the Initial Defense Satellite Communications System, or IDSCS. Recently, some never-before-published photos of the early IDSCS satellites have surfaced. IDSCS The relatively small satellites were tested on site. They were covered in photovoltaic cells. 
(credit: USAF) Program 369 In 1962, after Advent had been canceled, the Task X Committee, consisting of representatives of the Army, Navy, Air Force, ARPA, and the Rand and Aerospace Corporations, proposed placing multiple relatively small satellites in medium-altitude orbits. By the end of 1962, Air Force Systems Command’s Space Systems Division began planning to brief potential contractors on this random-orbit, medium-altitude communication satellite program, now designated Program 369.[2] The Air Force was interested in a medium-orbit, 9,260-kilometer-altitude communication system which could involve as many as 20 satellites in random orbits at the same time. The satellites would move around the sky, requiring some movement of the ground antennas to keep track of them. As many as five to seven satellites could be launched by a single Atlas-Agena D rocket. Communications capability could be restricted to one to four channels in the initial program.[3] IDSCS The satellites were manufactured by Philco, which was a major military computer provider at the time. (credit: USAF) The decision to proceed with the new system was also a major policy reversal. Whereas the Army had been responsible for the Advent satellite and the Air Force was responsible for the ground stations, now that was reversed, with the Air Force developing the satellites and the Army handling the ground terminals. The new satellites would require very little control. After Philco won the competition to build the satellites, Secretary of Defense Robert McNamara put the project on hold in October 1963 while the Pentagon negotiated over renting communications capability from the newly created Communications Satellite Corp. (Comsat). 
Negotiations dragged on until summer 1964 before they were suspended and the Air Force resumed work on the dedicated military communications program.[4] Although McNamara had hoped to buy satellite communications commercially, commercial providers could not meet many military requirements, particularly for secure communications. The Department of Defense would ultimately use commercial satellite communications, but for tasks that did not require high security. In another cost-saving effort, in the latter half of 1964 the Pentagon tried to negotiate a “free ride” on experimental Titan III rocket launches rather than purchasing dedicated Titan III rockets after the vehicle was declared operational. The House of Representatives criticized the effort later that year as a “plan for short-range economics depending on a high-risk program that may prove costly in the end.”[5] The original plan for the medium-altitude satellite program had an estimated cost of $60 million for the satellites and a total of $165 million, including ten Atlas-Agena launches. A modified approach, using a much higher orbit and fewer launches, cost $33 million. This was far cheaper than Advent, which had grown from the original $140 million estimate to $325 million (see “Aiming too high: the Advent military communications satellite,” The Space Review, September 26, 2022.) While the Pentagon negotiated—ultimately unsuccessfully—with Comsat, Program 369 proceeded in a design phase. Initially, the satellites were intended to launch on Atlas-Agenas. But the possibility of using the new Titan IIIC was attractive, and so for a time, the satellites were designed to be able to launch on either vehicle, to equatorial or polar orbits. The reasoning was that if the Titan IIIC was ultimately selected, the Atlas-Agena would provide “insurance” until the Titan IIIC had fully proven itself. 
IDSCS The satellite dispenser was initially designed to be carried atop either a Titan IIIC or Atlas-Agena D. (credit: USAF) According to a contemporary history of the program, “this convertibility had three aspects – first, the satellite and dispenser had to be mechanically, thermally and electrically suitable for use with either vehicle from launch through ascent, parking orbit, transfer orbit and final injection. Second, the spacecraft had to be designed dynamically, magnetically and electronically for medium or high altitude. Finally, the thermal, solar cell and repeater design had to be acceptable in polar or equatorial orbits.”[6] This decision to design for either Atlas or Titan ultimately had a cost. “There were many conflicts that were resolved only by compromising the performance in both cases, but the program flexibility and non-dependence on a new booster design justified them,” engineers involved in the program explained. “The convertibility feature was maintained through the early part of 1965. At that time, some of the conflicts became serious, the compromises more painful, and most important of all, the Titan III program was looking good.” Rather than the medium-altitude orbits, the Air Force could use the Titan IIIC to place them into near-synchronous orbits.[7] The first Titan IIIC had launched successfully in June 1965. A second launch, in October, had failed. A third launch, in December, had been partially successful. The Air Force decided to eliminate the Atlas-Agena option and launch the satellites atop the fourth Titan IIIC launch. IDSCS The IDSCS satellites used the new Titan IIIC rocket, which placed them in a near-synchronous orbit. Concern about the availability of the Titan IIIC was a major uncertainty early in the program. 
(credit: Peter Hunter Collection) Simple satellites Philco had previously built the Courier IB satellite and had built computers for the military. The company established a production line capable of building one satellite every four days. This was possible in large part because the satellites were relatively simple. They had no moving parts, no batteries for electrical power, and limited telemetry capability to report the status of the spacecraft. The satellites could not be commanded from the ground, which had the benefit of making them resistant to Soviet tampering. They could provide two-way circuit capability for 11 “tactical-quality” voice circuits, or five “commercial-quality” circuits. The latter could transmit digital or teletype data. IDSCS Philco was capable of manufacturing a satellite in approximately four days. Each launch carried eight satellites. (credit: USAF) Each satellite was a polyhedron with twenty-four faces, weighed 45 kilograms, was a meter in diameter and a meter high, and was covered with 8,000 solar cells. The communications equipment consisted of a single-channel 8,000-megahertz receiver and a 20-megahertz double-conversion repeater. They also had a three-watt traveling-wave-tube amplifier transmitting around 7,000 megahertz. The satellites had three-year operational lifetimes. In service, they often lasted twice as long.[8] IDSCS The IDSCS satellites were about a meter wide and a meter tall, weighed 45 kilograms, and were relatively simple. (credit: USAF) In June 1966, the Air Force launched seven communications satellites into near-synchronous orbits of 33,877 by 33,655 kilometers (an eighth satellite was a technology demonstrator). A second cluster of eight satellites was launched in August, but failed to reach orbit. Two more launches, in January and July 1967, increased the number of satellites in orbit. IDSCS The relatively small satellites were tested on site. They were covered in photovoltaic cells. 
(credit: USAF) According to those who worked on the program, the orbit options had been extensively studied. The near-synchronous orbit was selected in case a satellite in the cluster failed. If they were placed in geosynchronous orbit, that failed satellite would remain in place for a long time. But in near-synchronous orbit, the cluster would slowly drift. “Because of this slow drift, satellites will stay in view of the ground stations for several days but a malfunctioning or failed satellite will not completely destroy the communications capabilities of a particular link.”[9] The system was declared operational before the launch of the last group of eight satellites and renamed the Initial Defense Satellite Communications System (IDSCS). The final launch of eight satellites in June 1968 atop a Titan IIIC brought the total number in orbit to 35 satellites.[10] It was years later than planned, but the Air Force finally had its communications satellite system. IDSCS Compass Link was a system for scanning photographs in Vietnam and transmitting them via satellite to Washington, DC, where they were analyzed at the National Photographic Interpretation Center. The scanning hardware had been developed for the Manned Orbiting Laboratory program. Although little is known about Compass Link, it did use IDSCS satellites. (credit: CIA) Vietnam goes to orbit In 1964, the US Army established a ground station in Saigon to relay messages via NASA’s Syncom satellite. In July 1967, the military installed satellite ground terminals at Saigon and Nha Trang for communicating via the IDSCS satellites. They linked military forces in Vietnam directly to Washington, DC. This provided new capabilities compared to existing telecommunications networks. 
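The “slow drift” described above follows directly from the published orbit: a period slightly shorter than a sidereal day makes each satellite creep eastward relative to the ground. A quick check with Kepler’s third law (circular-orbit approximation using standard Earth constants; the result is approximate):

```python
# Back-of-the-envelope check on the near-synchronous drift, from the
# published 33,877 x 33,655 km orbit, using Kepler's third law.
import math

MU = 398600.4418        # Earth gravitational parameter, km^3/s^2
R_EARTH = 6378.137      # Earth equatorial radius, km
SIDEREAL_DAY = 86164.1  # Earth's rotation period, s

a = R_EARTH + (33877 + 33655) / 2            # semi-major axis, km
period = 2 * math.pi * math.sqrt(a**3 / MU)  # orbital period, s

# The period is shorter than a sidereal day, so each satellite gains on
# the Earth's rotation and drifts slowly eastward over the ground.
drift_deg_per_day = (SIDEREAL_DAY / period - 1) * 360
print(f"period {period / 3600:.1f} h, eastward drift {drift_deg_per_day:.0f} deg/day")
```

This works out to a drift of roughly 27 degrees per day, meaning a satellite takes about two weeks to lap the ground track, which is consistent with the several-day visibility windows the engineers describe.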
One of the more unusual and secretive uses of this new capability was Project Compass Link, which provided circuits using the IDSCS satellites to transmit high-resolution photography between Saigon and Washington, DC. This meant that photos taken by tactical aircraft over the battlefield could be analyzed in Washington. After the planes returned to base, their photos were developed and taken to the transmission center. The ground stations had a scanner that could scan a photograph for transmission. This scanner had been developed using technology originally intended for the Manned Orbiting Laboratory program (see “Live, from orbit: the Manned Orbiting Laboratory’s top-secret film-readout system,” The Space Review, September 18, 2023.) To date, information on the Compass Link equipment and how it was used remains scarce. Other commercial communications satellite systems were also used by US military forces in the Pacific during this time. The commercial systems were used for administrative and logistical requirements, and the military systems were used for more sensitive communications. IDSCS Satellites undergoing fit checks prior to shipment to the launch site. (credit: USAF) Evolving communications IDSCS was intended to be an interim system until a better and more capable communications system was developed. The Air Force was aware of the system’s limitations from the start, and planned for a follow-on program to address many of them, increasing lifetime, adding better encryption and anti-jamming capability, as well as operating in geosynchronous orbit, the goal originally established with Advent in the late 1950s. Communications satellite technology was advancing rapidly throughout the 1960s, pushed in part by clear commercial demand as well as significant military investment. The Air Force sponsored experimental communications satellites like Tacsat and the LES satellites, and NASA also sponsored communications satellites to develop new technologies. 
Although commercial industry sought to satisfy growing demand, the government had unique needs and could not rely on the commercial market to meet them. In 1971, the Air Force launched the first of the Defense Satellite Communications System II (DSCS II) satellites. DSCS II was a much larger, spin-stabilized satellite placed in geosynchronous orbit, but did not have all the features that the Air Force desired; they would later be included in the DSCS III satellites. Once DSCS II satellites became operational, the Air Force relied upon large satellites in geosynchronous orbit for its primary communications requirements, a situation that is only beginning to change today with the advent of large numbers of small comsats in low Earth orbit. Acknowledgement: the author wishes to thank Jamie Draper and Jim Behling for assistance in obtaining the photographs used in this article, and Aaron Bateman for providing the AIAA history overview. References
1. “Defense is Pushing Random-Orbit Satellite,” Aviation Week & Space Technology, December 24, 1962, p. 23.
2. Philip J. Klass, “DOD Communication Satellite Launch Set,” Aviation Week & Space Technology, May 9, 1966, p. 33.
3. “Military Comsat Bidder’s Briefing Set,” Aviation Week & Space Technology, February 4, 1963, p. 31.
4. H.B. Kucheman, Jr., W.L. Pritchard, and V.W. Wall, “The Initial Defense Communication Satellite Program,” AIAA Paper No. 66-267, AIAA Communications Satellite Systems Conference, May 2-4, 1966.
5. Philip J. Klass, “Military Comsats Deploy for Global Cover,” Aviation Week & Space Technology, June 27, 1966, pp. 25-26.
6. “The Initial Defense Communication Satellite Program,” p. 4.
7. Ibid.
8. David N. Spires and Rick W. Sturdevant, “From Advent to Milstar: The U.S. Air Force and the Challenges of Military Satellite Communications,” in Beyond the Ionosphere: Fifty Years of Satellite Communication, Andrew Butrica, ed., NASA SP-4217, 1997, pp. 67-69.
9. “The Initial Defense Communication Satellite Program,” p. 5.
10. Ibid.
Dwayne Day is interested in hearing from anybody with information about the Compass Link system. He can be reached at zirconic1@cox.net.

Musk's Moon Mania

Moonbase Alpha In a presentation to xAI employees, Elon Musk described establishing a “Moonbase Alpha” that would build AI data center satellites and launch them using a mass driver. (credit: xAI) Musk’s Moon mania by Jeff Foust Monday, February 16, 2026 What has been the most surprising development in space in the last year? Perhaps it was the saga of Jared Isaacman’s nomination to be NASA administrator. That was an unprecedented ordeal, with Isaacman’s nomination suddenly withdrawn only for him to be renominated months later. But in the end, the result was what most expected a year ago: Isaacman leading the space agency. Perhaps it was NASA’s budget, with the White House proposing severe cuts to overall spending and even steeper cuts in areas like science and space technology, along with cancellations or early terminations of key elements of Artemis. But by the time the final fiscal year 2026 spending bill was enacted in January, NASA’s 2026 budget was close to its 2025 budget, with few cancellations. Arguably the biggest surprise in the last year is the one that has developed just in the last several weeks: SpaceX’s sharp pivot to the Moon. A year ago, it seemed that Elon Musk, using his influence in the new Trump Administration, was shifting space policy from a return to the Moon towards human missions to Mars. That was evident in everything from Trump’s mention of “launching American astronauts to plant the Stars and Stripes on the planet Mars” in his inaugural address to a budget proposal that included funding for new Mars exploration technology initiatives. Now, though, it’s SpaceX that’s changing course. 
The administration has made clear its near-term focus in human space exploration is the Moon, returning astronauts to the lunar surface before China can land its first taikonauts there. A White House executive order in December, which effectively serves as the national space policy, calls for a human landing on the Moon by 2028 and beginning work on a permanent outpost there by 2030. Mars is only mentioned in passing as a goal for the indefinite future. SpaceX, already under pressure to accelerate work on the lunar lander version of Starship for NASA’s Human Landing System program (see “The (possibly) great lunar lander race”, The Space Review, November 3, 2025), seems to have shifted even more towards the Moon in recent weeks, culminating in a social media post by Musk February 8, just as the Super Bowl was about to kick off. “For those unaware, SpaceX has already shifted focus to building a self-growing city on the Moon, as we can potentially achieve that in less than 10 years, whereas Mars would take 20+ years,” he wrote. Needless to say, most were unaware of that shift in focus. For most of SpaceX’s nearly quarter-century history, the company, and Musk, were deeply associated with a human presence on Mars, not the Moon. That was the subject of numerous presentations by Musk over the years, which have described making humanity “multiplanetary” by establishing human settlements, even large cities, on Mars in the next few decades. That included, as an example, a talk he gave at Starbase in May around the time of a Starship test flight. “Progress is measured by the timeline to establishing a self-sustaining civilization on Mars,” he said then (see “Starship setbacks and strategies”, The Space Review, June 9, 2025). He used that talk to outline plans for sending Starships to Mars, starting as soon as the next launch opportunity in 2026. 
That plan called for sending 500 landers to Mars in 2033, each capable of carrying 300 tons of payload. The goals for that launch campaign included establishing global mobility and communications at Mars as well as resource extraction and “increase independence from Earth.” Musk made only a passing reference to the Moon in that talk. “Along the way we can do cool things, like have a Moon base, like Moonbase Alpha,” he said, referencing the classic sci-fi TV series “Space: 1999”. (“Moonbase Alpha” was also the name of a video game released in 2010 developed in cooperation with NASA, in which the player is an astronaut at a lunar base in the then-distant future of 2025.) “We should have a Moonbase Alpha. The next step after the Apollo program would be to have a base on the Moon,” Musk said. That base, he suggested, would be a “gigantic science station.” But after that brief digression, Musk returned to talking about sending humans to Mars, the main thrust of the talk. Musk has talked about establishing a lunar base off and on in the past, even using the same name for it. For example, at one conference in July 2017 he expressed his support for a lunar base. “If you want to get the public really fired up, you’ve got to have a base on the Moon,” he said then. A few months later, talking about what was then called BFR at the International Astronautical Congress in Adelaide, Australia, he mentioned the ability of that predecessor of Starship to support a lunar base, which he also called Moonbase Alpha. “It’s 2017,” he said in that speech. “I mean, we should have a lunar base by now. What the hell’s going on?” These concepts, though, seemed like side quests: nice things to do but not on the critical path to making humanity multiplanetary. The company’s interest in the Moon appeared largely limited to developing the HLS lander version of Starship, along the way gaining experience in technologies like in-space propellant transfer also needed for Mars. So what changed? 
Musk, in his Super Bowl Sunday announcement, suggested it was a matter of speed. “It is only possible to travel to Mars when the planets align every 26 months (six month trip time), whereas we can launch to the Moon every 10 days (2 day trip time),” he wrote. “This means we can iterate much faster to complete a Moon city than a Mars city.” That, however, has always been the case. In the past SpaceX was willing to overlook any advantages rapidly iterating at the Moon offered in favor of pressing ahead as fast as possible to Mars. The shift from Mars to the Moon comes as part of some of the biggest changes at SpaceX in years. In December, SpaceX executives said that the company was preparing to go public after years of claiming it would remain private: “We can’t go public until we’re flying regularly to Mars,” SpaceX president Gwynne Shotwell said in 2018 (see “SpaceX, orbital data centers, and the journey to Mars,” The Space Review, December 15, 2025). While the company’s CFO, Bret Johnsen, said that the proceeds of an IPO would allow SpaceX to “build Moonbase Alpha and send uncrewed and crewed missions to Mars,” the near-term factor in that decision was developing orbital data centers intended to serve what is, for now, an insatiable demand for computing power for AI applications. Two weeks ago, Musk took another step in that direction when he announced that SpaceX would acquire xAI, his AI and social media company. The goal, he said, was to create a vertically integrated company that could both deploy and use the orbital data centers he insists are the future of AI. “My estimate is that within 2 to 3 years, the lowest cost way to generate AI compute will be in space,” he wrote in a memo announcing the deal that was published on SpaceX’s website. “In the long term, space-based AI is obviously the only way to scale.” 
Just a few days earlier, SpaceX filed an application with the FCC for an orbital data center constellation of up to one million satellites. The satellites would operate in both sun-synchronous and mid-inclination orbits between 500 and 2,000 kilometers. The spacecraft in sun-synchronous orbits would be oriented to be in near-constant sunlight, providing continuous services, while those in mid-inclination orbits would handle peaks in demand. The brief application had little in the way of technical details—nothing about the size and power of the satellites or specific orbital planes—but plenty of grandiose visions. “Launching a constellation of a million satellites that operate as orbital data centers is a first step toward becoming a Kardashev Type II civilization — one that can harness the sun’s full power — while supporting AI-driven applications for billions of people today and ensuring humanity’s multiplanetary future among the stars,” the company stated. Musk used some of the same language in the memo discussing the xAI acquisition. “By directly harnessing near-constant solar power with little operating or maintenance costs, these satellites will transform our ability to scale compute. It’s always sunny in space!” he wrote. “Launching a constellation of a million satellites that operate as orbital data centers is a first step towards becoming a Kardashev II-level civilization, one that can harness the Sun’s full power, while supporting AI-driven applications for billions of people today and ensuring humanity’s multi-planetary future.” Musk added that, eventually, the orbital data center satellites might be built and launched from the Moon, enabling terawatts of AI computing power. “Thanks to advancements like in-space propellant transfer, Starship will be capable of landing massive amounts of cargo on the Moon. 
Once there, it will be possible to establish a permanent presence for scientific and manufacturing pursuits,” he wrote. “Factories on the Moon can take advantage of lunar resources to manufacture satellites and deploy them further into space. By using an electromagnetic mass driver and lunar manufacturing, it is possible to put 500 to 1000 TW/year of AI satellites into deep space, meaningfully ascend the Kardashev scale and harness a non-trivial percentage of the Sun’s power,” he said. No one could accuse Musk of not thinking big. Last week, xAI posted a video of a company all-hands meeting hosted by Musk. Most of the 45-minute session involved updates from employees on various projects, but Musk closed the presentation with another vision of a lunar-enabled future for AI. “Ultimately, you have to go out there and explore the universe to understand it, and that’s the motivation behind the combination of SpaceX and xAI,” he said. By launching spacecraft from Earth, he said the combined company could deploy 100 to 200 gigawatts of AI compute a year, with a path to one terawatt a year. “But what if you want to go beyond a mere terawatt?” he asked. (AI data centers used about four gigawatts of power in the US in 2024, and are projected to grow to 123 gigawatts by 2035, according to a study by Deloitte last year.) “In order to do that, you have to go to the Moon.” He described a factory that would build data center satellites on the Moon, launching them using a mass driver, which he described as a concept from science fiction. “We’re going to make it real. We’re actually going to have a mass driver on the Moon,” he said. “I really want to see the mass driver on the Moon that is shooting AI satellites into deep space.” On screen, an illustration of such a mass driver appeared, looking not unlike concepts from half a century ago proposed by Gerard K. 
O’Neill and other advocates of space colonies, who proposed building those free-space settlements using lunar resources transported by mass drivers. “I can’t imagine anything more epic than a mass driver on the Moon and a self-sustaining city on the Moon and going beyond the Moon to Mars,” he concluded, “going throughout our solar system and ultimately being out there among the stars.” In none of the filings, memos, or presentations did Musk provide a schedule for his AI-enabled space ambitions, including a lunar satellite factory and self-sustaining city, beyond the comment in his post that a city on the Moon is potentially feasible within a decade. Most in the space industry, though, know such schedules are, as Musk himself has acknowledged, “aspirational.” It does, though, suggest a new underlying thesis for Musk and SpaceX. He previously said that the company’s Starlink constellation would help fund human missions to Mars: “Starlink internet is what’s being used to help pay for humanity getting to Mars,” he said at Starbase last year, thanking Starlink customers “for helping secure the future of civilization and helping make life multiplanetary.” Perhaps that business case no longer closes, either because of a better understanding of the revenues Starlink can generate or the costs of getting humans to Mars. Or the opportunity presented by AI and the demand for data centers is so compelling that it warrants going to the Moon first to enable that, even if satellite factories on the Moon are still decades away. Whatever the reason, it has dismayed Mars advocates, the biggest of whom is Robert Zubrin. “Musk is making a huge mistake,” Zubrin wrote in an essay published last week, citing the lack of resources there for a “self-sustaining city” and propulsion constraints. 
“In short, Musk’s tweet is nonsense.” Zubrin speculates Musk is motivated by the vast wealth AI data centers on the Moon could generate. “Or it might be where his winning streak ends,” he speculates. If lunar or orbital data centers can’t compete with terrestrial data centers—and Zubrin is skeptical they can—he worries “it could prove a financial disaster that collapses his credibility, and with it his entire corporate empire.” Musk has said he is not giving up on Mars. “SpaceX will also strive to build a Mars city and begin doing so in about 5 to 7 years, but the overriding priority is securing the future of civilization and the Moon is faster,” he wrote in the post announcing the shift to the Moon. He has subsequently suggested this new approach could actually speed up that city on Mars. And he has a need to speed things up: in June he will turn 55. If he still wants to achieve a goal he has long mentioned of dying on Mars—“just not on impact,” he would frequently add—a focus in the near term on the Moon needs to be an enabler if not accelerator of his Mars vision, not a diversion. Perhaps in a year this will look like what happened with the NASA administrator confirmation process or the agency’s budget: a period of wild, unexpected swings that end up back to “normal,” in this case with Musk and SpaceX again monomaniacally focused on Mars. If not, this could turn out to be one of the biggest shifts in spaceflight so far this century. Jeff Foust (jeff@thespacereview.com) is the editor and publisher of The Space Review, and a senior staff writer with SpaceNews. He also operates the Spacetoday.net web site. Views and opinions expressed in this article are those of the author alone.

Seattle Is Space City

NG-2 launch New Glenn on its second launch last November. Blue Origin is again considering ways to reuse the rocket’s upper stage. (credit: Blue Origin) Seattle’s lessons for rocket reusability by Robert Oler Monday, February 16, 2026 Modern Seattle is known for the victorious Super Bowl LX Seahawks, a vibrant lifestyle, and manufacturing of infrastructure that sustains the nation’s and the world’s economies. Today’s reality is a long journey from 1853, when what would become Seattle was a bunch of settlements on what would become known as Puget Sound. That changed when Henry Yesler took a gamble, brought a saw mill up from San Francisco, and started turning trees into lumber. The mill, first of many, produced the capability that allowed ordinary people and businesses to build the infrastructure to play out their dreams, which eventually created today’s reality. In Seattle, Blue Origin recently posted notice for the position of manager for “Reusable Upper Stage Development”. The posting immediately set off speculation in the space press and illuminati concerning the on-and-off possibility of the New Glenn second stage becoming reusable. As a few noted, at least the public side of that debate has been raging for quite some time. Without a doubt, SpaceX’s Falcon 9 settled the question of first stage reuse. In a conservative booster design, the one long pole was reusing the first stage. The effort has paid off handsomely: not only for SpaceX with its Starlink infrastructure but for a lot of dreamers trying to make a buck in the space effort. SpaceX changed the metric of success. Economic viability ranks with technical excellence. For rocket systems to be economically viable, the first stage must be reusable. ULA’s Vulcan, while sound technically, will fade into history rather quickly, largely due to whoever at ULA won the debate about making the rocket totally expendable. 
The decision doomed it to mediocrity, wasting the dollars and talent spent to turn it into reality, and failing a basic vision test. There really was no debate for SLS, a vehicle designed by politicians with the preservation of political pork as the only goal. How it worked or what it cost never really came up. SLS has proven to be a debacle from a cost standpoint and, as recent events have shown, an operational one. Of course, goals differ: from the standpoint of preserving the political support to maintain it, it has so far succeeded.

Moving forward, the economics debate has shifted to the second stage. SpaceX had plans (and an interesting video) for a completely reusable Falcon 9 but quickly moved to Starship. Bringing Starship to reality has become a harder knot to cut than first stage recovery: SpaceX is seeing the design of the second stage driven not by payload, but by the demands of full stage reuse. Rocket Lab is at the opposite extreme. Neutron’s reusable first stage is designed around the doctrine of a cheap, light, expendable second stage. The limit of this design might be the size and capability of the second stage that the first can handle.

What will Blue do? The outside-the-box possibility is that the “reusable upper stage” will have little to do with New Glenn or a follow-on New Armstrong. Instead, the concern would be vehicles designed primarily to transport payloads not to low Earth orbit but from low Earth orbit to their destinations, either elsewhere in Earth orbit or beyond, refurbished for reuse outside of Earth’s gravity well. Blue Origin’s Blue Ring and as-yet-unnamed fuel transporter stages are hints at this. It would take a cue from the concept pioneered, at least in studies, some time ago by ULA with the reusable Centaur: moving the reuse envelope to space.
This allows the company to concentrate on vacuum operation and thrust sizing of conventional rocket engines, on engines designed for long-duration acceleration, and on structures optimized for space. Mass distribution (both empty and payload) would be sized to mission needs rather than a “one size fits all” approach. Even if there were a capability to land 100 tons on the Moon, the 100 tons of payload to land does not currently exist. The result is a high cost for a lander that requires an enormous refueling effort and infrastructure in LEO and on the ground, and that is one and done.

If the goal is a more conventional reuse of the New Glenn second stage, where might Blue go? The effort requires study and understanding of the tradeoffs in cost and time needed to make second stage recovery cost effective. Treat the second stage as an airplane with drop tanks. The “airplane” part is the engines and avionics; the rest is expendable. The bulk of the expense, and of the savings, in the stage should be in the engines and electronics. Full stage recovery diverts enormous cost and mass from the payload to recovery technologies, all to save easy-to-build, cheaply produced fuel and oxidizer tanks in the quest to satisfy an imaginary need for airplane-like reflight. As Starship illustrates, this requires an inefficient mix of vacuum and sea-level engines, high mass in thermal protection systems (TPS) and aero surfaces, and a heavy cost in first stage capability. Instead, innovate with a modern update of the original Atlas. Blue seems to have picked up on development of large “entry shields” where NASA’s inflatable decelerator work left off; it should press forward. After establishing the entire stage on a reentry trajectory, dispose of the tank portion.
Engines and avionics are protected by the expandable heat shield, using parachutes for an accurate recovery profile. Inspect, mate a new “drop tank,” and refly. This approach eliminates the need for heavy TPS, aerodynamic surfaces, and the fuel to carry them up, bring them down, and make a precise landing. The ground infrastructure to recover, remate, and restack should cost far less than that for powered landing. This configuration should allow recovery from an expanded range of orbits, including GEO transfer. Confidence in the heat shield would eventually lead to its routine use in aerobraking, perhaps elsewhere in the solar system as well.

Over the next 20 years of spaceflight, both crewed and uncrewed, it will be far cheaper and more efficient to replenish specialized machines than to replace them after one mission. Yet machines, architecture, and capabilities will evolve. When lunar in situ resource utilization comes into being, it will likely start with fuel and oxidizer; construction of machines on the Moon is decades and many economic milestones away. Starting small will create infrastructure that can evolve, driven by technology and economics.

Yesler’s mill started very small, with lumber initially of poor quality, but better than what came before, which was nothing. Little remains other than historical markers and a great view of the Sound from “Skid Road,” as the area was and is called. Yet Seattle and the booming US West Coast are clearly its legacy. Reusable upper stages might be for space infrastructure what a lumber mill on the Puget was for the US West Coast.

Robert G. Oler is a founding member of the Clear Lake Group on Space Policy. These are his own views. He can be reached at orbitjet@hotmail.com