JacksMars
Since I was a young child, Mars has held a special fascination for me. It was so close and yet so far away. I have never doubted that it once had advanced life and still has remnants of that life now. I am a dedicated member of the Mars Society, the NorCal Mars Society, the National Space Society, the Planetary Society, and the SETI Institute. I am a supporter of Explore Mars, Inc. I'm a great admirer of Elon Musk and SpaceX. I have a strong feeling that SpaceX will send a human to Mars first.
Wednesday, February 25, 2026
The Starliner Failure
Boeing’s CST-100 Starliner docked to the International Space Station during the CFT mission in 2024. (credit: NASA)
“We failed them”: NASA grapples with Starliner
by Jeff Foust
Monday, February 23, 2026
In briefings before the launch earlier this month of the Crew-12 mission to the International Space Station using a SpaceX Crew Dragon, reporters asked NASA officials about the status of the other commercial crew vehicle, Boeing’s CST-100 Starliner. That vehicle has been grounded since its flawed crewed test flight in mid-2024 that led NASA to bring the spacecraft back without the two astronauts who launched on it. Those astronauts, Suni Williams and Butch Wilmore, returned last March on a Crew Dragon, concluding what had been planned as an eight-day test flight but turned into an eight-month expedition.
“Starliner has design and engineering deficiencies that must be corrected, but the most troubling failure revealed by this investigation is not hardware,” Isaacman said.
NASA had said nothing about Starliner since last November, when it announced a contract modification. Boeing’s original commercial crew contract included six operational missions to the ISS after 2024’s Crewed Flight Test (CFT) mission. Under the modification, the first of those, Starliner-1, would carry only cargo; in effect, another test flight after CFT. After that would be three more crewed operational flights, with options for two more.
At the time NASA said Starliner-1 could launch as soon as April, but provided no updates since then, hence the questions at the Crew-12 briefings. “Starliner, we’re planning to fly on another uncrewed mission this spring, sometime in the spring to summer,” Ken Bowersox, NASA associate administrator for space operations, said at one briefing January 30.
“We’ll launch it when it's ready. We’ll gather all the information we need to complete the certification, and then we want to work Starliner back into the rotation,” he added, alternating flights with Crew Dragon as originally planned.
“Right now, we’re continuing to have a no-earlier-than April launch date” for Starliner-1, said Steve Stich, NASA commercial crew program manager, at a second briefing February 9.
He pointed to great strides in resolving the technical problems seen on the CFT mission. That included changing seals that caused helium leaks before and after launch, work he said had been closed out. There had also been tests of thrusters, several of which failed in flight, with engineers taking the data to develop models to predict thruster behavior, he noted.
“When we get through that and get to a point where we’re comfortable predicting thruster performance, then we’ll go move forward and look towards a launch date,” he said.
Stich added that NASA had not yet decided if the next crew rotation mission after Crew-12, launching late this year, would be another Crew Dragon mission, designated Crew-13, or the first operational Starliner crewed mission, Starliner-2. “We’d want to work through and get through Starliner-1 into the summer timeframe,” he said. “We have some time to decide.”
Those comments offered a relatively optimistic assessment of Starliner, with a cargo test flight as soon as April and the beginning of routine missions as soon as the fall—six years after SpaceX’s Crew-1—suggesting that, perhaps, the worst of Starliner’s development problems was behind it. A report last week, though, changed that.
At a press conference held on less than two hours’ notice, NASA administrator Jared Isaacman started by reading from a four-page letter he had sent to the agency’s workforce that day, noting that the Aerospace Safety Advisory Panel was set to brief Congress on its annual report and that NASA would release its own independent report on the Starliner CFT mission.
“Let me begin with the most important point. Starliner has design and engineering deficiencies that must be corrected, but the most troubling failure revealed by this investigation is not hardware,” he said. “It’s decision making and leadership that, if left unchecked, could create a culture incompatible with human spaceflight.”
He followed with a history of the program, including the problems seen on the first two uncrewed test flights in 2019 and 2022. The investigations into those flights, he said, “did not drive to or take sufficient action on the actual root cause of the anomalies that we observed. The investigations often stopped short of the proximate or the direct cause, treated it with a fix, or accepted the issue as an unexplained anomaly.”
Isaacman then took up the CFT mission, heaping blame on both Boeing and his own agency. “The engineering reality, however, is that Starliner, with its qualification deficiencies, is less reliable for crew survival than other crewed vehicles,” he said.
“But at NASA, we managed the contract. We accepted the vehicle. We launched the crew to space. We made decisions from docking through post mission actions. A considerable portion of the responsibility and accountability rests here,” he said.
The problems with CFT, he argued, were as much with organizational failings at NASA as they were technical problems with the Boeing-developed spacecraft. “Witness statements routinely reflected a belief that management within the commercial crew program could only succeed if Starliner launched,” he said.
Even before OFT in 2019, testing showed there was a risk that the thrusters would be subject to thermal environments that “could exceed qualification limits.”
“On orbit, disagreements over crew return options deteriorated into unprofessional conduct while the crew remained on orbit,” he continued. “Witness statements describe an environment where advocacy tied to the Starliner program’s viability persisted alongside insufficient senior NASA leadership engagement to refocus teams on safety and mission outcomes.”
Isaacman concluded reading the letter with a commitment to continue working with Boeing on Starliner, while making changes within the agency. “Programmatic advocacy exceeded reasonable bounds and placed the mission, the crew, and America's space program at risk in ways that were not fully understood at the time decisions were being contemplated. This created a culture of mistrust that can never happen again, and there will be leadership accountability,” he promised.
“Pushing a rock uphill”
As he spoke, NASA released the 312-page report, “redacted only where legally required or as directed by our commercial partner,” Isaacman said.
There are still significant redactions in the report, particularly in sections about the technical causes of the helium leaks and thruster failures on the Starliner CFT mission. Many details are blacked out, along with most illustrations or images of spacecraft systems.
Nonetheless, the report makes clear the severity of the issues on the flight. On approach to the station, five of Starliner’s thrusters failed, causing a loss of 6DOF control, the ability to maneuver in all six degrees of freedom. Starliner could not move forward on the +X axis, towards the ISS, and had degraded control of pitch and yaw.
“The loss of X-axis translation resulted in a loss of movement in the forward direction and the Starliner vehicle was no longer capable of docking to the ISS, until a subset of thrusters could be recovered,” the report stated. Four of those thrusters were recovered, allowing the docking to proceed.
The report praised that decision, noting that backing off and making another attempt, or deciding to return to Earth immediately, could have made the thruster problems worse. That would have created a “higher risk to loss of life” for Wilmore and Williams.
But the report noted that there had been thruster failures on the two uncrewed test flights, OFT and OFT-2. Even before OFT in 2019, testing showed there was a risk that the thrusters would be subject to thermal environments that “could exceed qualification limits.”
Nonetheless, NASA agreed to accept the risk of thruster problems on CFT. “This decision was made without resolving the core thermal qualification issues, effectively mischaracterizing the severity and operational impact of the thermal risks,” the report stated [emphasis in original].
In addition to the thruster failures in the Starliner service module on approach to the ISS, there was a separate failure of a thruster in the crew module during its uncrewed return to Earth. Notably, that failure brought the module to zero fault tolerance. “Loss of the single remaining redundant thruster, for this control axis, would have resulted in a loss of crew,” the report stated.
There are fewer redactions in the section on organizational issues. For that section, the independent review team relied on 66 interviews with people from senior management to line engineers, examining decision-making processes, communication, team dynamics, and more.
The report paints a picture of a rather dysfunctional management process during CFT, as engineers and managers evaluated the severity of the technical problems with Starliner and whether it was safe for Williams and Wilmore to return home on it.
“It was probably the ugliest environment that I’ve been in,” one interviewee said of the Starliner mission management meetings.
While crew safety was “the primary focus of discussion” throughout the mission, the report showed that the way those discussions were carried out was problematic. Technical teams said they did not have enough time to evaluate data and develop potential explanations of the problems with the spacecraft, which made meetings unproductive. With no published agenda for the meetings of the Starliner mission management team, people felt compelled to attend every one out of the belief it “could be ‘the one’ that was going to make the big crew return decision.”
Other issues ranged from differences between NASA and Boeing in how they evaluate risk to the perception among some that NASA had to prove the spacecraft was unsafe, rather than Boeing having to prove it was safe.
“Strong personalities within CCP [commercial crew program] and Boeing were seen as overly optimistic in presenting data, which some interviewees interpreted as lobbying rather than objective analysis,” the report stated. “This dynamic discouraged dissenting views and contributed to a growing sense of distrust. As one interviewee described, opposing positions felt like ‘pushing a rock uphill.’”
Then there was what Isaacman called in his letter the “unprofessional conduct” during debates about Starliner’s return. The report said that several interviewees brought up what it called “frustrating and/or unprofessional communication styles” during those meetings without being prompted. The report included several quotes from people recalling those meetings:
“There was yelling in meetings. It was emotionally charged and unproductive.”
“There are some people that just don’t like each other very much, and that really manifested itself during CFT.”
“It was probably the ugliest environment that I’ve been in.”
(That is only a subset of the quotes included in the report on the topic.)
The report identified three root causes for the Starliner CFT incident, none of which were technical in nature. One was a “hands-off approach” used by NASA during the start of the commercial crew program that kept the agency from gaining enough data and knowledge about the spacecraft to accept it as a service. The second was inadequate systems engineering and integration at Boeing during the design phase, resulting in Starliner thrusters operating outside of qualification ranges. The third was a culture in the commercial crew program that seeks two successful providers, which led to accepting greater risk.
NASA administrator Jared Isaacman at the microphone, with associate administrator Amit Kshatriya behind him, at Thursday’s briefing about the Starliner report. (credit: NASA/Joel Kowsky)
Whither Starliner?
One recommendation made by the independent review was that the Starliner CFT mission should be classified as a “Type A mishap” in agency parlance, one that requires an independent review. Isaacman said he accepted that recommendation and considered the report itself to be that independent review.
That classification attracted headlines because it is NASA’s most serious classification for a mishap, one that includes the losses of the shuttles Challenger and Columbia. But it also includes any accident that results in more than $2 million in damage or “unexpected aircraft or spacecraft departure from controlled flight.” Isaacman noted at the briefing that a recent gear-up landing by a NASA WB-57 aircraft at Ellington Airport in Houston, which damaged the plane but did not injure the two people on board, was also a Type A mishap.
One reason that the independent review recommended the Type A mishap classification was so that the incident could be formally captured in NASA databases, like the NASA Mishap Information System; it warned that without doing so, “transparency, trust, and institutional learning are compromised.”
At its most recent public meeting in December, the Aerospace Safety Advisory Panel had also raised the issue. Not classifying the mission as a mishap or “high-visibility close call” while in flight confused decision-making processes, the panel argued.
“Had this been done in a timely fashion, after the docking of Starliner, the communication of these decision-making authorities and the primary path to resolution of the crew return question would have dramatically improved,” said former astronaut Charlie Precourt, a member of the panel.
Isaacman offered few other specifics about agency responses to the report. Asked what “leadership accountability” meant in terms of potential changes in the commercial crew program or the Space Operations Mission Directorate, Isaacman instead focused on the failures of accountability during the CFT mission, which he said went all the way to the top of NASA.
“To be clear, NASA will not fly another crew on Starliner until technical causes are understood and corrected, the propulsion system is fully qualified, and appropriate investigation recommendations are implemented,” Isaacman said.
Leadership failures, he argued, existed at multiple levels of NASA “right up to the administrator of NASA,” who at the time of the mission was Bill Nelson. “I can't imagine a situation like that, why there would not have been some direct involvement to bring people back to the mission and the crew and figure out the correct pathway forward.”
In the briefing, some asked why NASA should continue to support Starliner given its problems and the impending end of life of the ISS. “We see near-endless demand for crew and cargo access to low Earth orbit, well beyond the life of the International Space Station,” Isaacman argued. “America benefits by having multiple pathways to take our crew and cargo to orbit.”
That is, interestingly, similar to the concern raised by the report that the commercial crew program was too focused on supporting two providers, resulting in increased risk. And while there are obvious benefits of having two crew providers, it’s not clear if that “near-endless demand” projected by Isaacman will materialize in time to save Starliner’s business case. In fact, the report stated that limited hardware spares and plans to retire the Atlas 5 “raise concerns about the program’s long-term viability.”
Isaacman made clear, though, that he would not rush to bring Starliner back. Asked about the schedule presented in recent weeks about the uncrewed Starliner-1 flying as soon as April, with a crewed mission as soon as late this year, he did not explicitly rule out that schedule, but suggested it was not feasible.
“Our focus here at NASA is working alongside Boeing again to understand the technical challenges that have caused these service module and crew module thruster issues, to remediate them, make sure we have a full understanding of the risk associated with this vehicle, implement the report, the recommendations from the report, and get back to flight,” he said.
He added that “what we don't want to do is perpetuate past problems by establishing endless target launch dates that we were unable to meet.” One finding of the report was that, over the five-year period leading up to CFT, the program was within six months of a scheduled launch date for 41 of those months: schedules repeatedly slipped as work dragged on. The report noted that “repeatedly moving launch dates a little at a time will have a negative impact on team dynamics.”
“To be clear, NASA will not fly another crew on Starliner until technical causes are understood and corrected, the propulsion system is fully qualified, and appropriate investigation recommendations are implemented,” he said.
Boeing was not a part of the briefing, but did issue a brief statement confirming it would continue to work on Starliner.
“In the 18 months since our test flight, Boeing has made substantial progress on corrective actions for technical challenges we encountered and driven significant cultural changes across the team that directly align with the findings in the report,” the company stated. “We’re working closely with NASA to ensure readiness for future Starliner missions and remain committed to NASA’s vision for two commercial crew providers.”
Also at the briefing was Amit Kshatriya, NASA associate administrator. He filled in some of the technical details and also provided closing remarks.
“For the workforce of NASA—and I’ve been here in the agency almost 20 years now—it is hard to hear sometimes, when we were talking about our culture and we're talking about how we do things,” he said. “I think it's important to recognize that this form of leadership allows our culture to get better.”
Kshatriya, a former flight director in Mission Control, noted he worked with both Williams and Wilmore throughout his career. “Butch and Suni are honestly like family to me,” he said.
“They have so much grace, and they’re so competent, the two of them, and we failed them. The agency failed them,” he said. “We have to say that. We have to recognize that our responsibility is to them and to all the crews that are coming and to the crews that are about to go fly. And our responsibilities to each other, too. We're a family.”
Jeff Foust (jeff@thespacereview.com) is the editor and publisher of The Space Review, and a senior staff writer with SpaceNews. He also operates the Spacetoday.net web site. Views and opinions expressed in this article are those of the author alone.
The Jupiter Icy Moons Orbiter
The Jupiter Icy Moons Orbiter (JIMO) program was started in 2002 and canceled in 2005. It would have sent a large spacecraft to orbit Jupiter. It would have been insanely expensive. (credit: NASA)
Prometheus bound: The legacy of the Jupiter Icy Moons Orbiter
by Dwayne A. Day
Monday, February 23, 2026
In 2002, NASA began one of the most ambitious robotic projects the agency ever considered, a large, nuclear-powered spacecraft to explore the icy moons of Jupiter. Known as the Jupiter Icy Moons Orbiter, or JIMO, it would have used a nuclear reactor and a powerful electric propulsion system to reach and orbit Jupiter, visiting and studying Europa, Ganymede, and Callisto. But two and a half years later, JIMO was canceled after spending $463 million, with no hardware built.
The Jupiter Icy Moons Orbiter is a cautionary tale about undertaking a science mission without clear scientific support, or a full understanding of the technical and programmatic risks it entails. The brief program’s legacy is mixed and has not been well-studied, but undoubtedly still has lessons to teach.
The basic concept of a nuclear-powered, electric propulsion spacecraft has been around a long time. In 1963, JPL studied the “Electric Space Cruiser” that used the same concepts and overall design that NASA considered again forty years later. (credit: JPL)
JIMO emerges
JIMO was approved by NASA administrator Sean O’Keefe and was widely considered to be his personal initiative rather than a priority for the planetary science community. In 2002, NASA canceled plans to pursue a Europa orbiter mission, which was a high priority for the planetary science community. Later that year, in August 2002, at O’Keefe’s urging, an “Eight Day Study” was held at NASA Headquarters outlining a possible new mission to the outer planets. The project was initiated outside the normal process for developing large planetary missions and was conducted with limited science community input. By September, the Jupiter Icy Moons Tour Studies were started, and two months later the Jupiter Icy Moons Orbiter pre-project commenced at O’Keefe’s directive. This resulted in a presentation to O’Keefe in January 2003. Soon the Tour Studies were completed and a JIMO new start was authorized by Congress.
JIMO was technically ambitious. The United States had not flown a space nuclear reactor since 1965, when SNAP-10A operated at very low power in low Earth orbit for 43 days.
The concept of a nuclear-powered, electric propulsion spacecraft was not new. In 1963, the Jet Propulsion Laboratory had studied the “Electric Space Cruiser” for missions to Jupiter and beyond. But the discovery of gravity-assist trajectories negated much of the need for the technology. As Voyager 2 demonstrated in the 1970s and 1980s, planetary gravity, particularly Jupiter’s gravity well, could be used to send a spacecraft farther out into the solar system.
JIMO would have the same basic layout as JPL’s early sixties design, with a reactor at one end of a long boom, as far as possible from the rest of the spacecraft to protect the electronics from radiation. Electric propulsion thrusters would be mounted at the other end of the boom. In between would be the scientific instruments and radiators to dissipate the heat of the reactor.
Early in 2003, as the JIMO initiative progressed, NASA created Project Prometheus, the technology development program for JIMO. NASA formed a science definition team and released a request for proposal for industry studies. Industry contracts were let by April 2003. In June 2003, NASA held a science forum and a budget workshop. The first industry studies were received by December 2003.
JIMO was technically ambitious. The United States had not flown a space nuclear reactor since 1965, when SNAP-10A operated at very low power in low Earth orbit for 43 days. Numerous American space reactor development projects such as SNAP-8 and SP-100 had been canceled in the decades since without ever producing flight hardware. Even the Soviet Union, which had developed and flown many space nuclear reactors, had never operated one for more than a year. JIMO would have to operate for a decade in space, in a harsh radioactive environment around Jupiter. The program also had to determine how to convert the heat from the reactor into electricity using the most efficient and reliable technology. In addition, the electric propulsion system would have to be significantly higher power than any yet developed. This required not only new thruster technology, but new electronic control systems to handle the high power levels. These were all serious technological challenges to overcome.
The Jupiter Icy Moons Orbiter spacecraft would have had to unfold in space, with the reactor at one end and the propulsion system and instruments at the other end. (credit: JPL)
Changing policy environment
JIMO was started just before NASA entered a dark and tumultuous phase. In February 2003, the Space Shuttle Columbia broke up during reentry, killing its crew. The accident not only forced a reevaluation of the shuttle program, but led to a high-level policy review of the goals of the American space program. This would eventually affect Prometheus and JIMO.
President George W. Bush announced the Vision for Space Exploration in January 2004. The VSE, as it was often called, established the goal of returning humans to the lunar surface. The policy would require ending both the shuttle program and the International Space Station. Without a significant increase in NASA’s budget, it would require cuts to other NASA programs, including space science programs. Democrats in Congress, although not inherently opposed to JIMO, sought information on how much it would cost. NASA refused to provide cost estimates, claiming that it was too early to do so. But there were plenty of indications that JIMO would likely be the most expensive robotic spacecraft ever built.
The United States had only flown a single, low-power reactor in space, in 1965. JIMO would have required a powerful reactor and a system for converting its heat to electricity. It would have had to operate in space for up to ten years. (credit: JPL)
Expanding scope
NASA created the Exploration Systems Mission Directorate (ESMD) to enact the Vision for Space Exploration. Prometheus and JIMO were transferred to ESMD. In March 2004, NASA assigned responsibility for developing the nuclear reactor to Naval Nuclear Reactors. Administrator O’Keefe had briefly served as Secretary of the Navy in the first Bush administration and knew of Naval Nuclear Reactors’ ability to develop advanced systems. NNR was the most experienced organization in the nuclear reactor field in the world.
What had started out as JIMO in early 2003 was expanding throughout the year to include other possible missions. Whereas Prometheus was a logical programmatic move to develop the technology for the actual JIMO mission, it soon became clear that a separate flight test of the reactor and electric propulsion system would be required before the JIMO mission could be launched. The test flight eventually became known as Prometheus-1. By 2004, “Task 2A,” covering follow-on applications of the JIMO spacecraft, had been added to the study contracts. Other possible missions after JIMO included the exploration of Uranus and Neptune.
“JIMO was in my opinion too ambitious to be attempted,” Griffin said. “It was not a mission in my opinion that was well formed.”
The reactor could be launched “cold,” meaning that it would not be active and pose limited radioactive danger during launch. It would be turned on once in orbit. When O’Keefe first announced JIMO, he referred to the high velocities of the nuclear electric spacecraft for opening the outer solar system to exploration. But electric propulsion systems achieve high velocity gradually, after months or even years of operation. Once the program was underway and scientists and engineers began studying the mission, it became clear that JIMO would spend a substantial amount of time in Earth orbit, spiraling outwards while increasing velocity. This raised concerns about operating a nuclear reactor in Earth orbit for a lengthy period, something that was against US space policy at the time. Chemical propellant transfer stages would be necessary to push the spacecraft out of Earth orbit before its reactor would be activated.
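Why the spiral-out would have taken so long can be shown with a back-of-the-envelope estimate. The sketch below uses the Edelbaum approximation (for a slow spiral between circular orbits, the delta-v is roughly the difference of the circular orbital speeds); the thrust and mass figures are purely illustrative assumptions, not numbers from the JIMO program:

```python
import math

MU_EARTH = 3.986e14  # Earth's gravitational parameter, m^3/s^2


def spiral_time_days(r0_m, r1_m, thrust_n, mass_kg):
    """Edelbaum-style estimate for a slow, continuous-thrust spiral
    between two circular orbits: delta-v ~ |v_circ(r0) - v_circ(r1)|,
    and transfer time = delta-v / (thrust / mass)."""
    v0 = math.sqrt(MU_EARTH / r0_m)  # circular speed at starting radius
    v1 = math.sqrt(MU_EARTH / r1_m)  # circular speed at final radius
    dv = abs(v0 - v1)                # approximate total delta-v, m/s
    accel = thrust_n / mass_kg       # continuous acceleration, m/s^2
    return dv / accel / 86400.0      # seconds -> days

# Illustrative assumptions only: a ~20,000 kg spacecraft with ~2 N of
# total ion thrust, spiraling from a 1,000 km orbit out to roughly
# lunar distance.
days = spiral_time_days(7378e3, 384400e3, 2.0, 20000.0)
print(f"{days:.0f} days")  # roughly two years of continuous thrusting
```

Even with generous assumptions, the estimate implies years of continuous thrusting in Earth orbit, which is why chemical kick stages were needed to push the spacecraft away from Earth before the reactor could be activated.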
By May 2004, NASA determined that the mass estimates for JIMO exceeded available launch vehicles. One possibility was that NASA would need to develop a new heavy-lift launch vehicle exclusively to launch JIMO. Three launches would be required to assemble the vehicle in low Earth orbit, which added technological complexity because robotic assembly in orbit had not yet been demonstrated. The launches were planned to take place in 2015, with the spacecraft reaching Jupiter in 2021. JIMO would have spent two or three months each exploring Ganymede and Callisto, and a month exploring Europa. It might have also been possible to explore Io, completing the mission in 2025. One proposal was for a Europa lander to be included in the mission.
In September 2004, NASA awarded a letter contract to Northrop Grumman Space Technology. By October, NASA signed a formal Memorandum of Agreement with Naval Reactors.
JIMO proponents at NASA believed that the technology could be used for other deep space missions. But JIMO alone would have wiped out the NASA space science program. (credit: National Academies of Sciences, Engineering, and Medicine)
Setbacks and cancellation
In November 2004, the fiscal year 2005 JIMO budget was cut 26% and the JIMO launch was delayed to 2017. The overall program objectives started to conflict. Whereas the original plan had been to build JIMO, now NASA was considering building Prometheus-1 and launching and testing it before launching JIMO. This increased the overall program cost, but NASA officials also realized that if both Prometheus-1 and JIMO were developed at the same time, any design flaws discovered during the Prometheus-1 flight test would already be incorporated into the JIMO hardware and it would be too late to modify the design. The program schedule would inevitably require restructuring, with JIMO’s launch being further delayed.
In February 2005, Administrator O’Keefe departed NASA and the following month Michael Griffin was confirmed by the Senate as his replacement. Griffin had extensive technical and managerial experience with space systems. Soon thereafter, Griffin canceled Prometheus and the JIMO program. “JIMO was in my opinion too ambitious to be attempted,” Griffin said. “It was not a mission in my opinion that was well formed.” By May, he indicated that he believed that NASA should pursue a mission to Europa, a welcome indication to scientists that JIMO had not permanently damaged the Europa goal. The Prometheus project was redirected toward the development of nuclear power for a lunar surface base, but it atrophied and died.
Scientific dilemma: gift horse or white elephant?
JIMO had posed an uncomfortable dilemma for the American planetary science community. The 2001 planetary science decadal survey had established a Europa mission as its top science priority, but that mission had been rejected by NASA as too expensive. Now NASA was pursuing a far more expensive and ambitious outer planets mission. But JIMO was not a Europa mission, and would spend only a short time exploring Europa—ironically, it would spend far less time exploring Europa than the other Galilean moons of lesser interest to planetary scientists. For the science community, JIMO was not the mission they wanted, and it was far too expensive to afford.
During the slightly more than two years it existed, NASA spent $463 million on JIMO, which in the 2000s would have been sufficient to fund at least a Discovery-class planetary mission.
Nobody wanted to publicly repudiate the gift. On the one hand, NASA was spending significant amounts of new money on an outer planets mission, and members of that community were benefitting from the spending. But on the other hand, many members of that community expected that JIMO would be cancelled, collapsing due to its own immense cost. Within the outer planets science community, some believed that they should not publicly question the program while the money was flowing, whereas others grimly believed it was a program so big that it was inevitably delaying more realistic plans for a Europa mission.
The Prometheus Project Final Report was released long after the program was canceled and revealed the overall cost estimate for JIMO: $22.5 billion. (credit: JPL)
JIMO’s legacy?
Throughout the program, NASA officials refused to reveal cost estimates for JIMO, claiming that it was too early to produce such estimates. But the agency was holding cost workshops during the program; it simply was not releasing the results.
According to one space scientist, the inside joke at the Jet Propulsion Laboratory was that JIMO was a “20-20-20 mission”—it would weigh 20,000 kilograms, take 20 years to build, and cost $20 billion. The spacecraft dry mass grew to over 22,000 kilograms.
But that was not the real shocker.
Only months after the program was canceled, when the Prometheus Project Final Report was quietly released, did it become clear that the JIMO program had in fact developed a cost estimate. JIMO would cost nearly $21.5 billion to build and launch, not including the cost of the precursor mission. It was a staggering amount of money that would have consumed the entire space science budget for several years, more than twice the cost of the controversial James Webb Space Telescope (JWST).
JIMO
JIMO final cost estimate produced in late 2005
During the slightly more than two years it existed, NASA spent $463 million on JIMO, which in the 2000s would have been sufficient to fund at least a Discovery-class planetary mission. The primary recipient of JIMO funding at the time was the Jet Propulsion Laboratory in California, which received over $200 million, although $75 million also went to the Johnson Space Center, which had no experience with robotic science missions. Some of the JPL money was used to study power conversion technologies, work that was no longer needed once the program was canceled.
JIMO
JIMO actual costs
JPL scientists later stated that significant amounts had gone into modeling the radiation environment of Jupiter and how to shield the JIMO spacecraft’s electronics from intense radiation. That knowledge was applied to the Juno mission to Jupiter, which launched in 2011 and continues operating today. The data also informed a later study program, the Europa Jupiter System Mission (EJSM). However, EJSM was deemed too expensive to fund, with an estimated cost of $5–7 billion, and was canceled in 2011. NASA later selected the Europa Clipper mission for a more focused study of Europa. Although intended to cost less, Europa Clipper is now expected to cost $5.2 billion. It launched in 2024 and is scheduled to arrive in the Jovian system in 2030.
It has been over two decades since JIMO’s cancellation, and the program remains under-examined and mostly forgotten.
Dwayne Day can be reached at zirconic1@cox.net.
We Can Build Cities on the Moon: Who Will Govern Them?
lunar base
Illustration of a SpaceX Lunar Starship taking off from a Moonbase. (credit: SpaceX)
We can build cities on the Moon—but who will govern them?
Amid a global lunar rush, will we land peaceful norms alongside our spacecraft?
by Rachel Williams and Jatan Mehta
Monday, February 23, 2026
Earlier this month, SpaceX and its founder Elon Musk flipped their stance on the Moon, from treating it as a distraction to positioning it as central to their vision of preserving our civilization, after more than two decades of emphasizing Mars as the primary destination. The stated rationale, and the catalyst for the change, involves building a Moonbase and a self-growing city within ten years that can power lunar factories and launch orbital AI data centers, the latter being the backdrop to SpaceX’s acquisition of xAI.
Amid such heated competition and accelerating timelines of humanity’s future, early precedents could shape global lunar activity for decades to come.
Even though elements of these visions remain speculative, such ambitious announcements carry real repercussions on lunar governance and global policy. SpaceX’s move is neither self-driven nor made in isolation. Last year, seeing China’s steady strides towards landing humans on the Moon by 2030, the US government sought to accelerate its delayed Artemis efforts to land astronauts before China. NASA reopened the Artemis 3 landing contract, and Jeff Bezos-owned Blue Origin bid for it and also decided to pause other internal projects to focus the company’s resources and efforts on the Moon. Industry momentum toward the Moon is part of a broader global trajectory.
This development is significant. The last decade has seen global interest in lunar exploration, with multiple countries sending diverse missions. Many more are in the pipeline, with the majority converging on the lunar south pole, with its potential water ice deposits, and on low lunar polar orbit. Continued mission successes by China and renewed focus from the US and its partners will likely accelerate activity further. The economic and scientific implications of any sustained lunar infrastructure could be immense. Regardless of near-term feasibility, the mere fact that players with even theoretical capacities to reach the Moon are making public commitments to large-scale lunar development is enough to alter the international policy and regulatory landscape on Earth.
Amid such heated competition and accelerating timelines of humanity’s future, early precedents—how actors share information, access resources, understand land usage and rights, and regulate infrastructure—could shape global lunar activity for decades to come. These practices could either enable broad participation or gate future access. It could also gravely affect fundamental lunar science in the process, which is tied to understanding the solar system itself.
To counter the many consequences of unilateral large-scale lunar activities by any party, peaceful governance norms and practical coordination mechanisms must develop alongside technological progress. The US has historically favored de facto practices over multilateral agreement in space. Norms set through the Artemis Accords by the US or its Artemis partners would not apply to a non-signatory like China, and vice versa. In such low-trust environments, it is critical that operating parties share minimum viable information and coordinate their activities through the UN and complementary neutral platforms to avoid operational overlaps and disputes over lunar areas and their resources.
Middle space powers, including India and Japan, can play crucial swing roles by intentionally shaping norms through their capabilities and partnerships. Two such upcoming missions have exactly such potential: India’s Chandrayaan 4 sample return and the joint ISRO-JAXA LUPEX rover, both heading to the south pole. In such ways, we can begin to place mutually beneficial governance frameworks early enough, gradually building trust through transparency for a peaceful future in our skies.
The Moon is an object of hope for cultures all around the world. Retaining that shared meaning requires that governance evolves alongside technological progress.
Rachel Williams is the Executive Director of the Open Lunar Foundation, a US-based non-profit forging and promoting technical and policy building blocks for cooperative and peaceful lunar exploration globally.
Jatan Mehta is a globally published and cited space writer and author of Moon Monday, the world’s only blog and newsletter dedicated to covering lunar exploration developments by countries worldwide.
When Iran Took The Internet Hostage, Elon Musk Held The Keys
Starlink
Starlink has enabled Iranian protestors to keep in contact with the outside world, but raises policy issues. (credit: SpaceX)
When Iran took the Internet hostage, Elon Musk held the keys
by Bharath Gopalaswamy
Monday, February 23, 2026
As protests spread across Iran in early 2026, the government reached for one of its most reliable tools of control. Internet access was sharply restricted, mobile data was slowed or cut, and international connections were disrupted. For years, such shutdowns allowed authoritarian states to fragment protest movements and choke off the flow of information beyond their borders. This time, the blackout did not fully hold.
Starlink did not cause Iran’s protests or determine their outcome. But its presence altered the information environment in ways that mattered.
As Iran’s terrestrial networks faltered, some citizens turned to satellite internet to stay online, relying in particular on Starlink terminals smuggled into the country in recent years. Images and videos continued to reach foreign media. Communication persisted unevenly but persistently, even as authorities intensified efforts to jam signals and seize equipment. What had once been a near-total shutdown became a contested space, shaped not only by state power but by private infrastructure operating beyond the reach of national controls.
Starlink did not cause Iran’s protests or determine their outcome. But its presence altered the information environment in ways that mattered. Even limited access was enough to show that the state no longer holds an absolute monopoly over connectivity. Control over information now depends not only on domestic law and infrastructure, but also on which private actors own and operate global networks.
For Iranian authorities, this was not merely a technical inconvenience. It was a direct challenge to state authority. The response was swift. Signal interference intensified. Starlink use was criminalized. Officials renewed efforts to accelerate a tightly controlled national network designed to function with minimal dependence on the global internet. These moves reflected a clear recognition that privately provided connectivity had become politically consequential.
What unfolded was not simply repression adapting to technology. It was a struggle over who gets to decide when and how people can communicate.
The policy vacuum
Starlink is a private, commercial service. Yet in moments of crisis, decisions about coverage, pricing, access, and resistance to state pressure shape political outcomes inside sovereign countries. These decisions are not made by governments or international bodies. They are made by firms whose primary mandate is commercial, not political.
Private actors that control critical digital infrastructure now possess leverage that increasingly resembles state power, but without equivalent accountability. They can expand access or constrain it. They can comply with pressure or resist it. In environments where information control is central to political survival, these choices matter deeply.
Authoritarian states understand this clearly. Iran’s response was not limited to suppressing protest. It was aimed at contesting private control over connectivity itself. Signal jamming, legal penalties, and digital isolation strategies were all efforts to reassert authority over systems that had slipped beyond state reach. In effect, a commercial service was treated as a strategic actor.
Private firms shape the information environment during political crises, while governments avoid defining the boundaries of acceptable risk or responsibility. The result is uncertainty for everyone involved.
Democratic governments occupy a more ambiguous position. They benefit when private platforms and infrastructure enable free expression and expose abuses. Yet when those same tools provoke retaliation, escalation, or broader geopolitical risk, officials often emphasize that the decisions lie with independent companies rather than states. Moral outcomes are welcomed, while political responsibility is quietly deferred.
This ambiguity amounts to a form of strategic outsourcing. Private firms shape the information environment during political crises, while governments avoid defining the boundaries of acceptable risk or responsibility. The result is uncertainty for everyone involved. Companies face pressure to make choices that resemble foreign policy decisions without clear guidance or legitimacy. Users rely on connectivity that may expose them to surveillance or punishment. States respond with increasingly aggressive measures to regain control.
Where governance lags
The Iran case highlights a deeper governance gap. Existing policy frameworks were built for a world where connectivity flowed through cables, towers, and state-regulated providers. They offer little guidance for global satellite networks that operate across borders and beyond traditional territorial control. Export controls, licensing regimes, and telecommunications law lag behind the political impact of privately owned constellations with near-global reach.
The institutions we do have for governing space and satellite activity are only beginning to grapple with these questions. At the United Nations Office for Outer Space Affairs (UNOOSA), the February 2026 Scientific and Technical Subcommittee meetings in Vienna featured side events on responsible AI in Earth observation, digital twins for disaster management, and the emerging Space4Ocean Alliance linking space and ocean governance. Yet even in these forums, debates about norms, debris, and data rarely extend to the concrete question of who should decide when a commercial constellation can be used to route around an authoritarian shutdown or what obligations such providers owe to the people they put at risk.
This gap is unlikely to close on its own. As satellite connectivity expands and costs fall, similar dynamics will emerge in other authoritarian contexts. States that cannot fully control information flows will increasingly seek to disrupt, degrade, or criminalize them. Private actors will be forced to decide how far they are willing to shape outcomes without a public mandate.
UNOOSA’s conferences on space law and policy, together with its Space4Ocean and Space2030 initiatives, are slowly building multilateral language around responsible behavior in orbit, public-private partnerships, and the use of satellite data for resilience and human security. But these conversations still treat commercial operators as stakeholders to be consulted, not as strategic actors whose decisions can tilt the balance inside a country like Iran. Until bodies like UNOOSA and its member states are willing to define clearer expectations for private constellations in moments of crisis, responsibility will remain as distributed and contested as the networks themselves.
The lesson from Iran is not that technology inevitably undermines repression. It is that power has shifted faster than governance. When Iran took the internet hostage, Elon Musk did not set out to be an arbiter of political struggle in Tehran. But by controlling an offshore escape route for information, he and his company held a set of keys the Islamic Republic could not easily confiscate. The remaining question is not whether private actors will shape these moments, but who will accept responsibility when they do.
Bharath Gopalaswamy is an aerospace and defense expert with extensive experience in AI, space technologies, and advanced systems. He is the author of Final Frontier: India and Space Security.
A.I. And Army Astronauts
Crew-11
When a member of Crew-11 suffered a medical issue in January, the crew could easily return to Earth, an option that won’t exist for future missions to the Moon and Mars. (credit: NASA/Bill Ingalls)
AI and Army astronauts: A judge advocate’s solution to protecting the soldier-astronaut
by Mitch Y. Topaloglu
Monday, February 23, 2026
The United States Army is the most formidable and versatile fighting force in human history. Some of the greatest pioneers of our republic have come from the Army. In the spirit of CPT Lewis and 2LT Clark, the Army continues a legacy of trailblazers, found in the ranks of its soldier-astronauts. These astronauts are among the best and brightest our republic has to offer. As they forge a path to the future, the Army has a responsibility to protect these pioneering soldiers and prioritize their well-being.
Due to the dual tyrannies of distance and data, a decentralized AI model in the form of Federated Learning (FL) can be, and has been used, in space for astronaut safety and privacy.
With renewed attention being placed on the space program under the new Artemis campaign, the Army must be aware that more soldiers will be called upon to serve, whether on the International Space Station (ISS), a permanent lunar outpost, or eventually, a mission to Mars. The well-being of these soldiers is crucial for the success of their mission to explore and advance humanity. Unfortunately, for the first time in 65 years of spaceflight, NASA was forced to evacuate an astronaut of Crew-11 in January of 2026 due to an undisclosed health emergency.
Imagine if, instead of only being 400 kilometers away from Earth on the ISS, an astronaut had been 400,000 kilometers away on the Moon. Or imagine if the crew of a mission to Mars experienced a medical emergency in January 2026 when Mars was 400 million kilometers away. Those astronauts would have been stuck on the other end of a solar conjunction, out of contact with the Earth.
Medical evacuation would become impracticable, if not impossible altogether. NASA has already begun employing AI healthcare tools for diagnosis and treatment. For the soldier-astronaut, there is a permanent concern of data privacy: sensitive health information must be protected from adversaries. Here, due to the dual tyrannies of distance and data, a decentralized AI model in the form of Federated Learning (FL) can be, and has been, used in space for astronaut safety and privacy, as seen in frame 1 of Figure 2 below.
The operational reality: Distance as a medical adversary
The physics of space operations create medical challenges that have no terrestrial equivalent. A future lunar base would require a three-day transit. Mars missions face far greater isolation: even under ideal conditions, the transit is an odyssey of approximately seven to ten months, depending on orbital positions. Round-trip communications alone can consume up to 40 minutes, making real-time medical intervention impossible.
Bandwidth constraints compound this tyranny of distance. NASA can currently return 50 megabits per second (Mbps) from a lunar base under ideal conditions, but Mars missions operate at far lower data rates (about 3.1 Mbps). In extreme cases, during a solar conjunction, there can be a total blackout of communications to Mars. A single human genome comprises three billion base pairs, requiring gigabytes of storage; continuous biometric monitoring generates gigabytes daily. Transmitting this information to Walter Reed becomes practically impossible.
Table 1: Bandwidth complications in the solar system
Feature | ISS | Potential Lunar Base | Future Mars Outpost
Standard bandwidth | 600 Mbps [1] | 50 Mbps [2] | 3.1 Mbps [3]
One-way latency | <1 second [6] | ~3–14 seconds [4] | 3–22 minutes [3]
Blackout risk | Near-zero | Low (lunar occultation) | Periodic (solar conjunction) [5]
The Crew-11 incident illustrates the vulnerability of reliance on Earth-side medical expertise. A medical condition could not be adequately resolved in orbit, forcing mission abort. The crew on the ISS reportedly used an ultrasound to aid in diagnosis. Given the latency and bandwidth realities of Mars, transmitting ultrasound data to ground control is a suboptimal solution. Ideally, the crew would employ AI for diagnostic or treatment support. While a centralized AI trained entirely on Earth could provide a framework, AI models require relevant data to provide relevant results. Put simply, the AI cannot fix what it does not know.
Federated Learning: Decentralized by design
NASA has employed AI for years. FL, by contrast, has seen recent attention owing to its versatility and to the bandwidth limitations of space-to-Earth communication. FL is a distributed machine-learning paradigm that inverts the data-to-model relationship: historically, raw data is sent to a model for training; in FL, the model travels to the data.
FL enables the AI to learn and adapt using real-time medical data without needing to transmit sensitive, high-volume information back to Earth. In a solar conjunction scenario on Mars, an FL-enabled system allows the spacecraft to remain a “learning node.” It can refine diagnostic algorithms locally, ensuring that the AI’s “medical knowledge” is tailored to the specific, evolving health profiles of those specific astronauts. By moving the model to the data, FL creates a self-improving medical suite that can function independently of Earth, effectively turning the spacecraft from a mere transport into a limited autonomous medical center.
Technical overview
FL radically updates the data-to-model framework:
1. A global model (e.g., a foundation model) is distributed to edge devices, such as spacecraft medical systems.
2. Each edge device trains the model locally (parameter-efficient fine-tuning [PEFT] is an ideal example), computing mathematical gradients without transmitting its data or the large foundation model.
3. Only gradients—numerical weight updates, not raw data—are transmitted to a central coordinator.
4. The central node aggregates these updates, produces an improved global model, and redistributes it (i.e., the weights/adapters), repeating iteratively as can be seen in Figure 1.
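The aggregation loop above can be sketched in a few lines. This is a minimal, hypothetical illustration of FedAvg-style federated averaging, not NASA's or the Army's implementation: the node count, learning rate, and toy least-squares "model" are invented for demonstration. The key property is that only computed updates, never raw data, leave each node.

```python
import numpy as np

# Hypothetical setup: the "model" is a 3-element weight vector and each
# edge node fits it to local data by least squares. Only the update
# (a gradient step), never the raw data, is sent to the coordinator.

def local_update(weights, data_x, data_y, lr=0.1):
    """One local training step on an edge node; returns the weight update."""
    residual = data_x @ weights - data_y
    grad = data_x.T @ residual / len(data_y)
    return -lr * grad

def aggregate(updates, counts):
    """Coordinator: average updates weighted by local sample counts (FedAvg-style)."""
    total = sum(counts)
    return sum(u * (n / total) for u, n in zip(updates, counts))

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])   # the pattern all nodes' local data share
global_w = np.zeros(3)

for _ in range(50):                    # communication rounds
    updates, counts = [], []
    for _node in range(3):             # e.g., habitat, spacecraft, wearable hub
        x = rng.normal(size=(50, 3))
        y = x @ true_w + rng.normal(scale=0.1, size=50)
        updates.append(local_update(global_w, x, y))
        counts.append(len(y))
    global_w = global_w + aggregate(updates, counts)

# global_w now approximates true_w without any node ever sharing its data
```

In a real deployment the local step would be PEFT on a medical foundation model rather than least squares, but the transmit-only-updates structure is the same.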
figure
Figure 1. A diagram of federated learning architecture proposal in space operations
HeteroFL: Addressing space system heterogeneity
Space medical systems face unique challenges that standard FL hasn’t addressed. A lunar base might have robust computing resources, while a small spacecraft has minimal capability. Traditional FL assumes all nodes share the same computing capacity, but this assumption fails in space with vastly different hardware capabilities, as shown in frame 2 of Figure 2.
Federated Learning enables the AI to learn and adapt using real-time medical data without needing to transmit sensitive, high-volume information back to Earth.
HeteroFL is a federated learning framework that accommodates nodes with very different computation and communication capabilities. In essence, not all space systems are created equal, but all of them can still contribute to parameter-efficient fine-tuning of the shared model.
In practice, HeteroFL permits a lunar base to train on a full-complexity model, a spacecraft to train on a medium-complexity variant, and astronaut wearables to train on lightweight versions. All models can contribute to the same global medical AI as seen in frame 4 of Figure 2. Each system trains the portion of the model it can support, transmits gradients based on its bandwidth, and receives model updates scaled to its capabilities.
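The capacity-scaled idea can be illustrated with a toy aggregation rule. This is a simplified sketch of the HeteroFL concept, with invented capability tiers and matrix sizes: each node trains only the sub-block of the global weight matrix it can afford, and the coordinator averages each entry over the nodes that trained it.

```python
import numpy as np

# Toy HeteroFL-style aggregation: each node trains only the top-left
# sub-block of the global weight matrix, sized to its compute budget.
# The coordinator averages each entry over the nodes that trained it.

def aggregate_hetero(global_w, local_ws):
    summed = np.zeros_like(global_w)
    counts = np.zeros_like(global_w)
    for w in local_ws:
        r, c = w.shape
        summed[:r, :c] += w
        counts[:r, :c] += 1
    trained = counts > 0
    out = global_w.copy()
    out[trained] = summed[trained] / counts[trained]  # average where trained
    return out                 # entries no node trained stay unchanged

global_w = np.zeros((4, 4))
# Invented capability tiers: habitat (full model), spacecraft (half-width),
# wearable hub (quarter); the values stand in for locally trained weights.
habitat    = 1.0 * np.ones((4, 4))
spacecraft = 2.0 * np.ones((2, 2))
wearable   = 4.0 * np.ones((1, 1))

updated = aggregate_hetero(global_w, [habitat, spacecraft, wearable])
# updated[0, 0] averages all three nodes; updated[3, 3] only the habitat
```

The design point is that the smallest device still moves the shared parameters it touches, so even a wearable contributes to the global medical model.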
figure
Figure 2: A proposed solution for soldier-astronaut privacy. Frame 1: Describes the bandwidth bottleneck. Frame 2: An overview of the levels of FL in space. Frame 3: The iterative nature of FL. Frame 4. HeteroFL, built around different scales.
The legal framework: Privacy as a command responsibility
The Privacy Act of 1974
The Privacy Act establishes that federal agencies may not disclose records contained in a “System of Records” without written consent, subject to specific exceptions. A System of Records is any group of records under agency control from which information is retrieved by personal identifier. Any centralized database aggregating health records from multiple soldier-astronauts constitutes a System of Records, triggering comprehensive notice requirements and severely restricting disclosure. Each willful unauthorized access constitutes a separate violation, exposing the United States to penalties of actual damages plus fees.
DoD Instruction 6025.18: The minimum necessary standard
DoD Instruction 6025.18 implements HIPAA privacy standards within the Military Health System. Central to this instruction is the “Minimum Necessary” standard: covered entities must limit uses and disclosures of Protected Health Information (PHI) to the minimum necessary to accomplish the intended purpose. When multiple technical approaches exist to accomplish a mission, the legal mandate is to select the approach that minimizes PHI collection, use, and disclosure.
Traditional centralized training requires collecting, transmitting, and storing complete medical records. This maximizes data collection but creates tension with the Minimum Necessary standard. FL accomplishes the medical AI role while transmitting only mathematical gradients, which are not individually identifiable health information. When two architectural approaches accomplish the same mission but one requires transmitting PHI while the other transmits only gradients, the Minimum Necessary standard prioritizes selecting the latter.
HITECH Act: Breach notification and enhanced penalties
The Health Information Technology for Economic and Clinical Health (HITECH) Act strengthened HIPAA’s enforcement mechanisms and imposed additional requirements on HIPAA’s covered entities. HITECH’s most consequential provision for AI deployment is its breach notification requirement. Unauthorized acquisition, access, use, or disclosure of PHI that compromises its security or privacy constitutes a breach requiring notification to affected individuals, the Department of Health and Human Services, and in cases affecting more than 500 individuals, public media notification.
At first glance, this might seem a premature concern. There have only been about 400 astronauts so far in the history of NASA. While the number of Army astronauts is a far cry from the 500 required to trigger the public media notification now, rapid growth of the corps of soldier-astronauts may be necessary within the next decade. If the United States seeks to compete with China in a new space race, it will need many more astronauts to do so.
As we expand into the solar system, maintaining Earth-dependent oversight becomes untenable. Artificial intelligence offers the only viable path forward that can coexist within the legal framework of today.
For centralized medical systems, a single security compromise could then expose the complete medical histories of hundreds of astronauts, triggering mass breach notification. The consequences extend beyond administrative inconvenience. Adversaries obtaining comprehensive health profiles of soldier-astronauts gain intelligence for targeting, exploitation, and psychological operations. HITECH also mandated security risk assessments and imposed enhanced penalties, with civil monetary penalties reaching $1.5 million per violation category per year for willful neglect.
The FL legal architecture for the judge advocate
Satisfying the minimum necessary standard
Federated Learning may be the ideal AI architecture that satisfies DoDI 6025.18’s Minimum Necessary standard for space-based medical applications. The mission is clear: deploy decentralized medical AI capable of autonomous diagnosis. Centralized models require collecting, transmitting, and storing complete medical records. This maximizes data collection to the detriment of Minimum Necessary. FL accomplishes the mission while transmitting only gradients, which are not individually identifiable health information.
Avoiding Systems of Records and breach risk
FL avoids creating centralized targets. Medical records remain distributed across their points of collection. No central vault contains all records. The central coordinator possesses only the global model and aggregated gradients, which cannot be reverse engineered to reconstruct individual patient data when secure aggregation techniques are employed.
This architectural feature has legal significance across multiple frameworks. First, if no central database retrieves information by personal identifier, there may be no System of Records triggering Privacy Act obligations. Second, from a HITECH Act perspective, FL dramatically reduces breach risk exposure. A compromise of a single edge node affects only that node’s crew, potentially as small as a handful of individuals rather than the entire astronaut corps. This proportionate risk profile means that even in worst-case security failures, the breach notification burden remains manageable and the intelligence value to adversaries remains limited.
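Secure aggregation itself can be shown with a toy pairwise-masking scheme. This sketch is hypothetical and omits the key agreement and dropout handling of real protocols; it demonstrates only the core idea that pairwise masks cancel in the sum, so the coordinator sees the aggregate but never any single node's gradient.

```python
import numpy as np

# Toy secure-aggregation sketch: every pair of nodes shares a random
# mask; the lower-indexed node adds it, the higher-indexed one subtracts
# it. The masks cancel in the sum, so the coordinator learns only the
# aggregate update, never any individual node's contribution.

rng = np.random.default_rng(42)
gradients = [rng.normal(size=4) for _ in range(3)]  # true per-node updates

n = len(gradients)
masks = {(i, j): rng.normal(size=4) for i in range(n) for j in range(i + 1, n)}

masked = []
for i in range(n):
    g = gradients[i].copy()
    for (a, b), m in masks.items():
        if a == i:
            g += m          # lower-indexed party adds the shared mask
        elif b == i:
            g -= m          # higher-indexed party subtracts it
    masked.append(g)        # this, not gradients[i], goes to the coordinator

aggregate = sum(masked)     # pairwise masks cancel exactly
```

Each transmitted vector looks like noise on its own, yet the sum equals the sum of the true gradients, which is all the coordinator needs to update the global model.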
Preserving role-based access control
AR 40-66’s role-based access requirements align naturally with FL’s distributed architecture. The Flight Surgeon or medical officer responsible for a crew retains complete control over their patients’ records. The local medical AI trains on this data under their authority as part of providing care. Central command, data scientists, and system administrators never gain access to individual patient records. They interact only with the global model.
Practical implementation
The future of medical care demands decentralized edge computing at each spacecraft and habitat module. Edge nodes continuously train on local data to improve predictive accuracy for the specific crew. Periodically, based on bandwidth availability, nodes compute gradients using secure aggregation protocols that prevent individual gradient inspection. A ground-based coordinator receives encrypted gradients, aggregates them without inspecting individual contributions, and produces an updated global model. The improved model is distributed back to edge nodes, providing each crew with the benefit of collective learning without compromising individual privacy.
HeteroFL enhances this architecture by allowing each edge node to train models sized appropriately for its hardware—full neural networks on robust habitat systems, medium-sized models on spacecraft, lightweight versions on wearables. All contribute to the same global model, maximizing both privacy protection and operational capability across the solar system.
Risks and mitigations
The explainability challenge is real: when a federated model makes a diagnostic error, investigating the failure is more complex than in centralized architectures. However, local training data remains available at edge nodes for audit purposes. Importantly, technical audit challenges do not create statutory liability in the way that Privacy Act violations do. Data poisoning risks exist if an adversary compromises an edge node or sensor system, but these are technical security challenges, not privacy or legal compliance issues, and they can be mitigated through modern advances in computer science. Privacy violations, by contrast, expose the Army to civil liability, Congressional oversight, and potentially to adversary manipulation.
Recommendations for judge advocates
First, when privacy-preserving alternatives are technically feasible, centralized AI approaches to medical applications may constitute violations of DoDI 6025.18’s Minimum Necessary standard. Judge advocates (JAs) should proactively ask vendors whether privacy-preserving alternatives were considered.
Second, Privacy Impact Assessments should explicitly evaluate federated architectures in their alternatives analysis and address HITECH-inspired considerations, including breach risk exposure and the intelligence value centralized databases present to adversaries.
Third, the Pentagon should develop template Model Sharing Agreements and legal guidance formally classifying gradient updates as non-PHI to facilitate rapid capability sharing across commands.
Fourth, JAs should work with medical authorities to establish clear documentation distinguishing QA activities from human subjects research when deploying FL systems, emphasizing the continuous improvement nature of FL and the primacy of patient care over knowledge generation.
Fifth, establish annual reviews of operational FL deployments comparing actual privacy performance against projections. As privacy-preserving technologies evolve (e.g., homomorphic encryption or improved differential privacy), the Pentagon must periodically reassess whether current architectures remain optimal or whether newer techniques should replace existing systems. The goal is continuous improvement in privacy protection while maintaining operational capability.
Conclusion
The Crew-11 incident demonstrates that privacy-protective data silos in space operations carry operational costs. As the Department of War expands into the solar system, maintaining Earth-dependent oversight becomes untenable. Artificial intelligence offers the only viable path forward that can coexist within the legal framework of today.
Federated Learning provides a legal strategy that aligns AI capability with privacy protection. For JAs advising on medical AI for space operations, privacy-preserving architectures are not optional enhancements; they may be legally mandated by existing regulations, operationally necessary given bandwidth constraints, and ethically required to protect the soldiers who forge the republic’s future in the Army’s most extreme environment yet.
The views expressed are those of the author and do not reflect the official position of the United States Army, Department of the Army, or Department of War.
First Lieutenant Mitch Y. Topaloglu graduated with honors from Wake Forest University in 2022 with a B.A. in Computer Science before earning his J.D. with honors from the University of North Carolina School of Law. He is an officer in the United States Army completing the Judge Advocate Officer Basic Course. 1LT Topaloglu writes on the promises and perils of emerging technology, with a focus on AI.