
Tuesday, May 12, 2026

After Gateway: The Case For A Middle Power Lunar Consortium

NASA’s decision to end the lunar Gateway (above) offers an opportunity for international partners to work together on their own lunar exploration program. (credit: NASA)

by Phil McCrory
Monday, May 11, 2026

On March 24, NASA administrator Jared Isaacman announced the “pausing” of the lunar Gateway, an effective cancellation of the current project. ESA, JAXA, and CSA—whose combined hardware investments exceeded several billion dollars—learned of the decision alongside the general public. ESA’s Director General said the agency was “consulting closely with its Member States, international partners and European industry to assess the implications.” ESA has set a deadline of June, when the ESA Council holds its next meeting, to determine what that assessment produces.

What it should produce is the beginning of something that the evidence of recent agency behavior strongly suggests several of them have privately concluded is necessary: a collectively governed lunar installation built and operated by the capable mid-tier space agencies, on terms that no single external power can unilaterally redirect. Not as a provocation. Not as an anti-American gesture. As the only credible alternative to a governance position—junior partner in someone else’s program—that the Gateway cancellation has just demonstrated, in the most concrete possible way, is not secure.

The hardware problem nobody is talking about

The Gateway cancellation created what might politely be called a hardware disposition problem. ESA’s I-Hab module, a 36-cubic-meter pressurized habitat with JAXA life support, is currently in production at Thales Alenia Space in Turin, Italy, but now has no confirmed destination. Canadarm3, Canada’s next-generation robotic manipulator system in advanced development by MDA Space, was designed specifically for Gateway.
ESA’s Lunar View logistics module, Lunar Link communications system, and JAXA’s life support and battery systems represent a collective investment of some $3–4 billion in hardware that is now, in varying degrees, stranded.

The June 2026 ESA Council meeting is not primarily a governance discussion. It is a hardware discussion. What do we do with what we built? The most likely answer, absent any alternative proposal, is absorption: ESA’s Gateway hardware gets repurposed into NASA’s unilateral surface base program, on terms set by NASA, under governance ESA does not control. This is the path of least institutional resistance. It is also the path that recreates, one program later, the exact dependency that just produced the Gateway cancellation.

There is another path. But it requires someone to say it out loud before June.

What the bilateral network already tells us

The argument for a middle power lunar consortium does not begin with this article. It is already being assembled, bilaterally, by the agencies themselves—apparently without anyone having yet named what they are doing. JAXA and ISRO are jointly developing the LUPEX lunar south pole rover, with ESA instruments aboard, approved in March 2025. ESA launched a formal internal study of a European-led post-ISS station in February 2026, naming JAXA and CSA as proposed partners. South Korea’s new KASA agency signed a cooperation memorandum of understanding with CSA in April 2026. Australia is in active negotiations on a Cooperative Agreement with ESA. The Moon Village Association’s governance working group has been developing multilateral lunar frameworks for five years.
Every bilateral relationship required for a founding coalition already exists. JAXA–ISRO. ESA–JAXA. ESA–CSA. ESA–ISRO. KASA–CSA. ESA–Australia. What does not exist is the multilateral frame: the single table at which these agencies sit together, with a shared installation as the object of discussion rather than a series of bilateral arrangements that individually leave each agency dependent on the US-led program for their crewed lunar future. The agencies are behaving like people who have each privately concluded that a lifeboat is needed. No one has yet said it out loud or started building one together.

The hardware credit argument

The financial objection to an independent installation—that it requires new money that ESA member states and partner agencies simply do not have—misreads the situation. ESA is not being asked to spend more money. It is being offered a mechanism to redeem the money it has already spent. The $3–4 billion in stranded Gateway hardware represents a founding contribution credit. Under any serious consortium founding agreement, existing hardware investments would be valued and credited against contribution obligations before Phase 1 funding commitments are made. ESA’s I-Hab is directly applicable as a surface installation habitat core with minimal redesign: the difference between adapting I-Hab and designing a new habitat from scratch is measured in years and hundreds of millions of dollars.

The ESA Council’s most important financial decision in June is not whether to spend new money on an alternative program. It is whether to allow existing investments to be absorbed into a program ESA does not govern, or to convert them into founding equity in a program it does.
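The crediting mechanism described above is simple arithmetic: value the hardware already built, subtract it from each founder's agreed share of the program budget, and only the remainder is new cash. The sketch below illustrates that logic with entirely invented placeholder figures (an assumed $8 billion Phase 1 budget, hypothetical credit valuations and cost shares), none of which come from the working paper:

```python
# Illustrative sketch of a "hardware credit" mechanism.
# ALL figures are invented placeholders, not actual agency valuations.

PHASE1_BUDGET = 8.0  # hypothetical Phase 1 total, in $B

# Hypothetical in-kind credits for already-built Gateway hardware ($B)
credits = {"ESA": 2.0, "JAXA": 0.8, "CSA": 0.9, "ISRO": 0.0, "KASA": 0.0, "ASA": 0.0}

# Hypothetical agreed cost shares (an equal-vote council need not mean equal shares)
shares = {"ESA": 0.35, "JAXA": 0.20, "CSA": 0.10, "ISRO": 0.15, "KASA": 0.12, "ASA": 0.08}

for agency, share in shares.items():
    obligation = share * PHASE1_BUDGET           # gross Phase 1 obligation
    net_cash = max(obligation - credits[agency], 0.0)  # credit offsets cash
    print(f"{agency}: obligation ${obligation:.2f}B, "
          f"credit ${credits[agency]:.2f}B, net new cash ${net_cash:.2f}B")
```

Under these made-up numbers, ESA's gross obligation of $2.8 billion shrinks to $0.8 billion of new money once its stranded hardware is credited, which is the domestic political argument the article describes: recovery of sunk value rather than a request for fresh budget.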
The distinction matters because it changes the domestic political argument in every ESA member state. This is not a request for additional budget. It is a proposal to recover value from an investment that has just been rendered uncertain by a unilateral decision made in Washington.

What the consortium would actually look like

The proposed founding coalition—ESA, JAXA, ISRO, CSA, KASA, and the Australian Space Agency—collectively covers every dimension of capability a permanent south pole installation requires. ESA brings Ariane 6 launch, the Argonaut lander program, I-Hab hardware, and the Moonlight communications constellation. JAXA brings H3 launch, the most mature non-US life support system in production, and the LUPEX south pole reconnaissance mission. ISRO brings the only successful south pole landing to date, LVM3 launch, and world-leading mission cost efficiency: Chandrayaan-3 delivered south pole surface operations for approximately $75 million. CSA brings Canadarm3 robotic assembly capability. KASA brings a funded south pole lander and a $72 billion ten-year program budget. Australia brings New Norcia-3 deep space ground coverage, remote operations expertise from the mining sector, and the Roo-ver oxygen extraction rover.

The governance model should draw on proven precedents: CERN’s equal-vote structure that has survived 70 years of geopolitical turbulence, ITER’s legal personality and in-kind contribution framework that accommodates strategic adversaries in the same program, and the explicit lessons of the ISS hub-and-spoke model that produced the very vulnerability the Gateway cancellation just demonstrated. A binding intergovernmental agreement, not a series of MOUs. Full legal personality. A council where each member has one vote regardless of financial contribution size. An open science mandate that is structural rather than aspirational, built into the data architecture, not vulnerable to ministerial override.
The program proceeds in three phases, each complete and scientifically productive in its own right. Phase 1 (2027–2032) establishes orbital infrastructure and affiliates with ESA’s Moonlight navigation constellation. Phase 2 (2032–2038) establishes a permanent robotic surface presence with ISRU demonstration. Phase 3 (2038 onwards) delivers the first crewed occupation. Total cost to first crewed operations: approximately $20–30 billion, one-fifth to one-third of comparable superpower program costs, achieved through existing launch vehicles, adapted Gateway hardware, and ISRO’s demonstrated cost efficiency.

The Artemis Accords are not an obstacle

The most likely objection from officials in alliance-constrained agencies is that Artemis Accords membership creates an incompatibility with an independently governed program. It does not. The Accords are non-binding bilateral political commitments between the US government and individual signatory states. They commit signatories to norms—peaceful use, transparency, interoperability, data sharing, emergency assistance—not to any specific program or governance structure.

The consortium endorses every Accords norm. Its open science mandate is more demanding than the Accords’ data-sharing provision. Its humanitarian access clause extends the Accords’ emergency assistance principle further than any current signatory program. Its interface standard is designed for reference compatibility with the LunaNet framework the Accords endorse. A Japan, Canada, or Australia that is an Artemis Accords signatory and a consortium founding member is in full compliance with its Accords commitments. The tension is political, not legal, and the political tension is managed by a doctrine this paper calls equidistance: cooperative at the operational and technical level with both US-led and China-led programs, independent at the governance level. That is not a new posture for middle powers.
It describes how most of the proposed founding members already navigate great-power competition in every other domain.

What needs to happen before June

The realistic goal for the next several weeks is not the founding of a consortium. It is ensuring that the consortium proposal is a named, credible option in ESA’s internal deliberations before the June Council meeting, not an idea that arrives as commentary on decisions already made.

The ideal June 2026 outcome is specific and modest: ESA’s Council endorses an independent feasibility study into post-Gateway program options, including a multi-agency independently governed installation. That is a different and more achievable threshold than “ESA joins the consortium.” A feasibility study mandate provides political cover for the Track 1.5 engagement—agency-to-agency, below ministerial level—that would need to happen in parallel. It buys the time required to commission the hardware valuation process, draft the governance framework, and bring ISRO and KASA formally into the conversation.

The window is real. I-Hab is in production now. The bilateral relationships exist now. The hardware credit opportunity exists now, in the specific form of stranded assets whose disposition is genuinely undecided. The agencies that have individually concluded that the current trajectory is not sustainable have weeks, not months, to say so collectively and propose an alternative. After June, the trajectory will have been set for another program cycle.

The middle-power space agencies do not need permission to build this. They need each other. And they need to say so before June.
About this analysis

This article draws on a longer working paper, “Toward a Middle Power Lunar Consortium” (April 2026), which develops the governance architecture, technical phasing, financial framework, and risk assessment of the proposal in full. The paper is available on request from the author. All factual claims reflect publicly available information as of April 2026.

Phil McCrory is a Melbourne-based independent analyst with a background in community development and a long-standing interest in space exploration and governance. He believes the current convergence of geopolitical disruption and growing agency capability has created an opening for a genuinely international future in space that would have seemed implausible until recently. He comes to the subject with an understanding that questions about who governs the Moon, and for whose benefit, are fundamentally questions about democratic accountability.

Exquisitely Unnecessary Very High-Resolution Satellite Reconnaissance

A declassified image of an Iranian launch site taken in 2019 by an American reconnaissance satellite. Even though it is degraded, the image still provides an indication of the kind of detail available from the most powerful reconnaissance satellites. (credit: US government)

by Dwayne A. Day
Monday, May 11, 2026

How good is good enough when it comes to satellite imagery? Today it is common for commercial satellites to produce images with ground resolution of 0.2 to 0.3 meters (with 0.5 to 1 meter being more common), but the US intelligence community has long been rumored to have systems considerably better, reportedly able to discern objects on the ground down to 0.1 meters—a capability often referred to as “exquisite.” Although the details are classified, some historical information has been released indicating that at several periods during the early years of satellite reconnaissance, from approximately 1963 to 1969 and then from 1969 to 1973, the US National Reconnaissance Office (NRO) grappled with the question of the requirement for a “very high resolution” imagery system, and determined that it was not necessary. Despite this, the NRO eventually accomplished such high resolution by upgrading existing systems.

The Manned Orbiting Laboratory and very high resolution

Although the terminology is still somewhat obscured by classification, the US intelligence community in the 1960s and 1970s appears to have considered “high resolution” to be approximately 8 to 13 inches (0.2 to 0.3 meters), and “very high resolution” (VHR) to be approximately eight inches (0.2 meters) or better. The terms were also defined by the systems being developed to produce that quality imagery. During the 1960s, the Air Force and the NRO were developing the Manned Orbiting Laboratory (MOL), which was equipped with the powerful KH-10 DORIAN optical system.
DORIAN would have been capable of photographing objects on the ground as small as four inches (about 10 centimeters), essentially establishing the definition of VHR as what MOL was designed to achieve.

As MOL dragged on in schedule and its costs increased, it came under scrutiny. People within the intelligence community began asking if MOL had real intelligence value. Other than carrying astronauts, MOL’s primary attribute was very high resolution, so intelligence officials asked what could VHR do, what did that mean for national security, and was it worth the immense cost? Despite the discussion now being almost 60 years old, it provides an excellent insight into Cold War deliberations about the value of very high resolution satellite reconnaissance.

Prior to the cancellation of MOL, very high resolution was discussed solely in terms of MOL, which had the primary justification—although not openly admitted—of putting military astronauts in space first, and finding something useful to do second. Thus, VHR was not the primary requirement for MOL. After MOL’s cancellation, very high resolution reconnaissance had to be evaluated against other factors, such as its cost and its value for intelligence collection. What the discussion makes clear is that very high resolution could not be judged merely in relation to other photo-reconnaissance systems like the then high-resolution GAMBIT-3, but had to be considered in relation to other types of intelligence collection, including signals intelligence. The opportunity costs of developing it also had to be evaluated. Finally, a major question was whether very high resolution reconnaissance would substantively affect US military policies and forces.
How would VHR imagery, as opposed to high-resolution imagery, improve understanding of Soviet weapons development, and would this matter? A VHR satellite required large optics, a large spacecraft, and a larger rocket—all of which added up to higher costs. If building a new VHR satellite meant that the United States did not build another type of satellite, how would this affect intelligence collection? If a new VHR satellite was expensive to build, but did not save money by improving or reducing the need for strategic forces, was it worth it?

Declassified cutaway of the Manned Orbiting Laboratory and its DORIAN optical system. MOL was a very high resolution system that was canceled in 1969. (credit: NRO)

MOL development change paper and ODDR&E study on VHR

In late 1968, the MOL program produced a development change paper, or DCP, intended to justify the continued need for MOL despite its increasing costs. The DCP was supported by a study produced by the Office of the Director of Defense Research and Engineering (ODDR&E) titled “The Need for Very High Resolution Imagery and Its Contribution to DoD Operations and Decisions.” Although neither the DCP nor the VHR study has been released, there is a detailed response to them written in early 1969 by a DoD official. He sought to address what he believed to be the core issues: “the value of very high resolution imagery, the urgency with which we need it, and alternative ways of obtaining such imagery.”

Ivan Selin, who at that time was the Deputy Assistant Secretary for Strategic Programs at the Department of Defense, wrote: “The MOL DCP concludes that the need for VHR imagery is great enough and urgent enough to spend more than $1.5 billion on MOL in FY69 through FY71.” Selin wrote that the DCP and the ODDR&E study “argue that VHR imagery will be valuable in two general ways.
First, such imagery might improve our estimates of the capabilities of Soviet and Chinese forces, permitting us to plan less conservative, and therefore less expensive, forces. Second, VHR imagery might provide enough detail about the military characteristics of Soviet and Chinese weapons to permit better design of our weapons, either to reduce their vulnerabilities or to enhance other aspects of their effectiveness.”

The value of VHR

The CORONA search satellite had, at best, six-foot (1.8-meter) resolution and was scheduled to be replaced by 1970 or 1971 by the HEXAGON, with resolution of one to three feet (30 to 91 centimeters). The GAMBIT-3 (also referred to as the KH-8) entered service in 1966 and its resolution was apparently initially around two feet (61 centimeters), improving to twelve inches (30 centimeters) relatively quickly. The goal for the Manned Orbiting Laboratory and its big DORIAN optical system was around six inches (15 centimeters) resolution on the ground, possibly up to four inches (10 centimeters) if viewing conditions were ideal. The Very High Resolution satellite then being discussed in 1968 was intended to have resolution better than or equal to DORIAN.

Selin stated that the DCP and ODDR&E study justified very high resolution according to several factors: its value for evaluating anti-ballistic missile (ABM) capabilities, assessing Soviet air defense systems, and determining Soviet capabilities to attack American armored vehicles. He also sought to place it in context with other possible new reconnaissance systems. Selin thought that the analysis of very high resolution satellite photography in support of future strategic force decisions was weak. In short, very high resolution satellite photographs would not have any notable impact on the US warfighting strategy.
“VHR imagery is not required to determine such things of immediate importance as numbers of Soviet strategic offensive and defensive weapons and numbers of Soviet, Bloc, and Chinese general purpose forces units, where these are deployed, and the equipment they possess,” Selin wrote. He believed that “VHR imagery can contribute to more refined estimates of some of the performance parameters of weapons, both before and after their deployment. The resulting estimates even with VHR imagery will be of modest confidence because of a large number of factors. We have not found examples of such estimates to which VHR can contribute, which have a strong influence on major resource allocation decisions.”

Selin stated that there were some “relatively urgent intelligence needs” that could be provided by real-time systems able to return imagery within hours of taking photographs, but very high resolution would not be able to contribute much. “On balance, I believe that VHR imagery may provide some useful information we cannot now obtain and that it will be a worthwhile if marginal addition to our collection program. However, I do not believe large savings will result from VHR imagery,” nor that it would make major changes in the confidence with which the United States estimates Soviet and Chinese threats.

Selin believed there were two realistic courses of action: exploit an existing system such as GAMBIT-3 or HEXAGON to obtain photography of resolution between that of GAMBIT-3 and MOL (meaning between 4 and 13 inches ground resolution), or do advanced development of the optical and other systems for an unmanned VHR satellite to be operational at some time in the future.

The cost of MOL

By late 1968, both the HEXAGON and MOL programs were behind schedule and over budget. MOL had cost more and slipped more than HEXAGON. But they were not equivalent systems. MOL was a very high resolution system, whereas HEXAGON was designed to gather medium-resolution imagery of large areas.
MOL’s startup costs were estimated to be around $3 billion, with $100 million for each mission. Both were scheduled to become operational by 1970.

Much of Selin’s memo was devoted to discussing the current US strategic warfighting strategy, which he referred to as the Assured Destruction strategy. This strategy required the US to be able to accept a nuclear first strike from the Soviet Union and still “kill 20% to 25%” of the Soviet population. The US strategic policy towards the Soviet anti-ballistic missile system was “exhaustion” of the defending ABM system—send more warheads than the ABM system could shoot down. It was a question of numbers, not of capability. Higher resolution photographs of Soviet ABM interceptors were not going to change that policy.

A commercial satellite image from 2019 showing damage to an oil refinery. Commercial imagery with resolution of 0.3 to 1 meters is readily available and sufficient for most non-military and many military uses. (credit: DigitalGlobe)

Strategic forces decisions and VHR

The arguments in favor of VHR were divided into several other categories. Selin believed these arguments were also weak.

Soviet ballistic missiles

“We need to know the number of independent ballistic missile reentry vehicles that can be delivered, their reliability, delivery accuracy, and yield. Of these, by far the most important are numbers and accuracy,” Selin stated. Very high resolution photos would not change that. The American Sentinel ABM system then being proposed would have provided very little defense against a Soviet attack.
“If the Soviets take even simple steps to exhaust it, Soviet penetration capabilities beyond use of chaff are now of little importance.” “VHR imagery can be expected to make little or no additional contribution to determining either numbers or accuracy of Soviet ballistic missiles. It is conceivable that such imagery could help determine the payload (through better measurements) and hence the yield of a missile such as the SS-13, but because our ICBM vulnerability is not very sensitive to yield, the value of even refined yield information is low.”

“The primary damage limiting contributions suggested by the report for DORIAN are improving our estimates of Soviet ICBM silo hardness and determining more about Soviet capabilities to penetrate our anti-Soviet ABM (which we have not yet decided to buy).” VHR could improve estimates of Soviet silo lid thickness, but other factors dominate. “Even with complete drawings, exhaustive soil tests, and finally, full scale high explosive tests, we were and are unsure of the true hardness, especially the upper limit, of the [U.S. Air Force’s] Minuteman facilities.” The hardness of Soviet silos “is of interest, but does not drive either our force requirements or the way we might use these forces.”

“The study also argued that DORIAN might get VHR pictures of Soviet reentry systems. This seems highly unlikely. Advanced reentry systems of the type we are developing and testing just aren’t exposed to overhead photography; MIRVs, decoys, chaff, etc., are nearly always, as a minimum, under wind shields when the boosters are on the test pads. Even if such photographs were obtained, they would tell us very little about penetration capabilities. If we were to deploy a heavy ABM against the Soviets, we would still need collectors like Sentinel Foam to acquire necessary reentry data.
DORIAN would add very little to our knowledge in this case.” Very high resolution would not change the confidence in best estimates of ABM parameters, and even if it did, that would not change the strategic situation. Selin added that “relatively few lives can be saved by modifying our war plan if it is discovered that a Soviet ABM is in fact totally ineffective.”

Soviet area defense

“The effectiveness of Soviet air defenses, given known Soviet aircraft, are almost completely determined by the capabilities of Soviet airborne warning and control (AWACs) aircraft, interceptors, and air-to-air missiles to find and shoot at low altitude targets.” One of the key questions at the time was whether the Soviet Union’s high-speed MiG-25 Foxbat interceptor would have shoot-down missiles that could attack low-flying American aircraft. The MiG-25 first flew in 1964, and was shown off to the Soviet public in a 1967 airshow. Although VHR satellites might detect missiles under the wings of MiG-25s, they would not provide information on whether the missiles could be fired downward and detect an aircraft from ground clutter, so-called “look-down/shoot-down” capability. Indeed, in 1976, when a Soviet pilot flew his MiG-25 to Japan, American technicians were able to examine the plane’s radar in detail—and question the pilot—and were surprised that it still lacked that capability.

As Selin stated, the primary factors for successful Soviet air defense were “an electronic capability, SAM [surface-to-air missile] firepower and SAM reaction time, both electronic and data handling capabilities.
None of these are very susceptible to analysis by VHR imagery.”

Soviet anti-submarine warfare (ASW)

“The kinds of things we might see with VHR imagery such as deck-mounted ASW weapons, sonar domes, and antennas are not the critical elements in a system with capabilities against our SSBNs [ballistic missile submarines]. The fundamental problems of detecting and tracking these submarines are not likely to be solved with equipment subject to VHR imagery.” Similar to his argument about area defense, Selin claimed that very high resolution photos of Soviet anti-submarine warfare systems would not provide the most important information about them. The effectiveness of a sonar, for instance, had to be evaluated in the water by listening to it, not by looking at it from space. In summary, Selin explained that “the report has identified the wrong features of Soviet systems as the important ones.”

Tactical forces decisions and VHR

Tanks and armored personnel carriers are designed to last five to ten years. They are designed conservatively to include possible threats even at the end of their lifetimes. “If VHR imagery were to reveal lesser threats [to American tanks], we would not reduce the design requirements on the” tank. “It is very unlikely that we would see advances exceeding our conservative postulation since: (1) many of the weapons simply would not be available to overhead photography of any resolution, and (2) because our postulations are very conservative, it is by definition, unlikely that we would discover more serious threats.”

“The capabilities of Soviet general purpose forces change slowly because it simply takes a long time to modernize these forces, since such modernization may require literally thousands of new weapons. A large change in the balance of our general purpose forces and the Soviets’ is very unlikely to come about because of Soviet technical innovations.
We will gain much information on such changes from COMINT [communications intelligence], direct observation, and other sources in time to respond if a response is needed.”

Selin argued that the study “does not follow the arguments through that high priority efforts to get high resolution photography should result in similar efforts to respond to such photography—possibly because we have not in the recent past engaged in any major high priority programs to change the general purpose force weapons in response to surprises discovered by means other than VHR imagery.”

Unconvincing arguments

In May 1969, highly respected intelligence advisor Edwin “Din” Land wrote President Nixon recommending that he cancel MOL and continue development of a very high resolution camera that exploited DORIAN technology advances. Land also urged that most reconnaissance research and development be concentrated on near-real-time reconnaissance. He urged Nixon to start “highest priority” development of a “simple, long-life imaging satellite, using an array of photosensitive elements to convert the image to electrical signals for immediate transmission,” a system that the CIA was then developing known as ZAMAN (see “Intersections in Real Time: the decision to build the KH-11 KENNEN reconnaissance satellite (part 1),” The Space Review, September 9, 2019, and part 2). The MOL program was canceled in June 1969.

In September 1969, Major Richard L. Geer wrote a memo to the head of the NRO’s West Coast program office about the VHR issue. He noted that “Several high-powered studies have attempted to establish a case for VHR photography, mostly in support of MOL. A number of these (e.g. the Foster Study/Ad Hoc Evaluation Group) have made innumerable arguments, many of which were fairly impressive. Taken together, they ought to have made an unshakeable rationale for VHR. That they have not made a sufficient case to justify MOL is a matter of record.
Whether they have made a sufficient case to justify funding any other VHR development is a matter of doubt.”

Geer’s memo mentioned Selin with a hint of disdain, implying that the people working in the NRO were aware of his arguments and didn’t agree with him. Nevertheless, they were not making a sufficient case for very high resolution to overcome those arguments. Major Geer noted that it was hard to get complete community support for VHR funding when that might come at the expense of more pressing requirements such as search and surveillance. Now that the HEXAGON program seemed to be secure, VHR might have a better chance. But any VHR system was going to ultimately compete against other systems for funding.

Geer wrote that the second problem was that it was difficult to define very high resolution requirements. This was a result of there being little experience with VHR over denied areas. “Each intelligence target in the overhead reconnaissance inventory has a range of resolution requirements corresponding to what is desired to be known at any given time about that target. These requirements vary for a given target and a given time, but they range down to the equivalent of parade photography,” meaning the big military parades where the Soviets showed off some of their missiles while foreigners, including American intelligence officers, took photos.

“Partly as a consequence of this problem, there has never existed a consolidated list of actual targets requiring VHR,” Geer wrote. “There is some limit for any intelligence target beyond which increasing resolution of overhead photography is less rewarding than investment in other collection means.”

Geer noted that one problem that occurred with MOL was that the air and space weapons requirements for MOL imagery were much firmer than Army and Navy requirements, because the Air Force was given much more time to develop its requirements than the other two services.
Thus, it appeared that the Air Force needed the imagery more than the Army or Navy, but this may have been inaccurate; the other services simply needed more time to study the issue. Geer also wrote that some of the stated requirements for VHR were inflexible, citing the example of requiring VHR for an airfield for experimental aircraft, when VHR was really only needed when a new aircraft or missile was present. More flexibility might create more opportunities. An example was using a modified GAMBIT satellite to spot VHR targets and using only the last few orbits to take the photos before jettisoning the first film return vehicle. The high-level perspective From around 1969 to 1971 there was apparently discussion of a VHR system that some referred to as HEXADOR, a combination of the HEXAGON spacecraft and DORIAN optics. However, it is unclear whether this was actively studied or simply a basic proposal. There was also apparently some study of the technological advances required to achieve very high resolution in general. What is better documented is that, from 1970 through 1971, the majority of the discussion of new reconnaissance systems within the intelligence community focused not on VHR but on near-real-time imagery. Acquiring imagery faster was a higher priority than acquiring better imagery. The resolution goal for the KH-11 KENNEN, which was approved for development in 1971, was probably around 12–18 inches (30–45 centimeters) ground resolution. In April 1971, Director of the NRO John L. McLucas wrote a memo titled “Future of Drones and Aircraft in Overhead Reconnaissance” in which he discussed the limited utility of drones and aircraft such as the U-2, particularly when it came to overflying hostile territory.
McLucas explained that the approach the NRO was taking to improve the ability to return imagery faster was to have a satellite in orbit constantly, with the ultimate goal being the deployment of a near-real-time satellite using an electro-optical imaging system that beamed its images to the ground. The new electro-optical imaging system, soon named KENNEN, was also going to cost a lot of money. “In order to acquire such a capability, which is some three or more years away, constraints have caused us to terminate all activities leading to a Very High Resolution system capable of some 1-inch to 5-inch resolution” (2.5–12.7 centimeters), McLucas explained. But by the late 1960s it was known within the reconnaissance community that atmospheric turbulence imposed a physical limit on resolution from a satellite, and the lower-end number that McLucas cited as VHR’s goal was impossible to achieve. In 1966, David Fried published a paper in the open literature that determined the atmospheric resolution limits of a satellite in low Earth orbit. Fried calculated that a satellite was limited to a resolution of no better than five to ten centimeters no matter how powerful its optics, and his conclusion was independently confirmed two years later by John C. Evvard. By 1971 the NRO should have realized that 2.5-centimeter (i.e., 1-inch) ground resolution was not achievable; no available or foreseeable technology could bend the laws of physics. Yet McLucas’ memo indicates that VHR was killed by budget constraints, not physical limits. By 1971, Lew Allen was a brigadier general and head of the NRO’s headquarters staff in Washington, and would soon be named head of the Secretary of the Air Force Special Projects office (also known as SAFSP, the NRO’s Program A) in Los Angeles.
He would later go on to become a full general, run the National Security Agency, become Air Force Chief of Staff, and, after leaving the Air Force in 1982, become director of the Jet Propulsion Laboratory. He had developed an almost scholarly perspective on satellite reconnaissance and the bureaucracy that managed it. In 1974, Allen wrote an extended commentary about a top-secret draft NRO history and discussed what he referred to as the conflict between requirements for new reconnaissance systems and the “technological imperative”—meaning simply pushing the technology as far as it could go regardless of any specific requirement. Allen observed that there were essentially three aspects to satellite reconnaissance. The first was quality, which mainly meant the resolution capability of a system. The second was quantity, which primarily referred to how much area coverage a satellite could provide. The third was timeliness. “There can be developed a logical description of requirements, as it relates to each factor,” Allen wrote, “but in truth (as [reconnaissance pioneer Amrom] Katz would say) the developments have been driven by the ‘technological imperative’ and the requirements here caught up later.” Quality—the desire for higher and higher resolution photographs from space—drove the NRO to develop the GAMBIT system, the Manned Orbiting Laboratory (MOL) and its DORIAN optics system, and then to pursue improvements to GAMBIT. Quantity—the need for area coverage—led to the first reconnaissance satellite, CORONA, in 1959, followed by its replacement, the HEXAGON system, which first flew in 1971. Allen viewed HEXAGON as an “ultimate” system. Its success had left the need for quantity “unfruitful for further dreams.” Allen could often state—at least in top secret documents—the uncomfortable truths that others in his field might not acknowledge. In Allen’s view, the unstated primary requirement for MOL was to put military astronauts in space.
Taking very high resolution photos was really only a justification for orbiting the astronauts, not a requirement that led to MOL. “As the enormous value of overhead recce became more appreciated, it was always the strategic concept which dominated – technological advancement of Soviet weaponry – SALT – order of battle, etc.,” Allen wrote. General Allen’s views in 1974 did not contradict Selin’s arguments five years earlier. Although Allen indicated that he believed the technological imperative drove most space reconnaissance systems, that was not completely true. There were stated requirements that drove the development of CORONA, GAMBIT, and HEXAGON. Once each of those systems was in operation, the technological imperative took over as their designers strove to improve them to the maximum extent possible, eventually exceeding their requirements, sometimes substantially. He was, however, correct about the technological imperative regarding very high resolution. There does not appear to have been any clear requirement during the 1960s for VHR. MOL’s requirement was to fly military astronauts and find something useful for them to do, and VHR was subservient to that requirement. If the astronauts were not needed, neither was VHR. KENNEN, HEXADOR, and Advanced GAMBIT-3 This snapshot of arguments for and against very high resolution satellite photography represents only a brief moment in time. It is possible that the arguments changed, or that new types of strategic threats resulted in a change in the arguments for or against very high resolution. An example of the latter was the Soviet Union’s development of mobile ICBMs in the 1970s. Very good photos of mobile ICBMs were not important, but imagery that showed where they were right now was. Tracking the locations of mobile ICBMs was much more dependent upon timely imagery, possibly including radar imagery to penetrate clouds and darkness.
Thus, a non-photographic system capable of providing that kind of imagery would have become more important during the 1970s when the Soviet Union began fielding road-mobile ICBMs. In the early 1970s, the US Intelligence Community created the National Imagery Interpretability Rating Scale, or NIIRS, a subjective scale used for rating the quality of imagery acquired from various imagery systems. NIIRS consisted of ten levels, from 0 (worst quality) to 9 (best quality). The scales included the kinds of targets that could be identified at each level. (See Table 1) These changed over time as some targets, like obsolete weapons systems, were removed from use and no longer seen in imagery. NIIRS provides a good introduction to the kinds of things that could be seen in imagery, and NIIRS 9 represented the very high resolution category that Selin and Geer had discussed. But NIIRS only tells part of the story—it tells us the kinds of targets that could be seen. It does not answer the “so what?” question of their importance to the intelligence community. In 1973, the NRO evaluated a proposal for an updated GAMBIT satellite with a larger mirror, capable of achieving the VHR goal. This would have required new development money, including an improved Titan III rocket. (See “Advanced Gambit and VHR,” The Space Review, July 25, 2022.) But GAMBIT-3’s resolution was improving steadily throughout this time. Although the resolution capabilities of the GAMBIT-3 film-return system then in service are mostly classified, some information has been released. In 1969, GAMBIT-3’s best resolution was around 13 inches (33 centimeters). It improved steadily throughout the 1970s with several upgrades.
By March 1975, a GAMBIT-3 satellite had returned imagery with 4.5-inch (11.4-centimeter) ground resolution, and by the end of the program it had returned imagery reportedly of “better than four inches.” Thus, the National Reconnaissance Office achieved the upper end of very high resolution by the mid-1970s without developing an entirely new system. This helps explain why GAMBIT stayed in service until 1984 even though the KH-11 KENNEN entered service in late 1976—GAMBIT-3 provided higher resolution photos than KENNEN for many years. If KENNEN’s overall evolution followed the same general path as CORONA, GAMBIT, and HEXAGON, then its designers undoubtedly sought to improve its capabilities over time until they eventually exceeded the original requirements. KENNEN started as a high resolution system, but almost certainly achieved very high resolution capability at some time in the 1980s. How much that capability was of value to the intelligence community remains unknown.

Further reading: James Edward David, “How much detail do we need to see? High and very high resolution photography, GAMBIT, and the Manned Orbiting Laboratory,” Intelligence and National Security, Vol. 32, No. 6 (2017), pp. 768–781.

Table 1: Visible National Imagery Interpretability Rating Scale (NIIRS), March 1994

RATING LEVEL 0: Interpretability of the imagery is precluded by obscuration, degradation, or very poor resolution.

RATING LEVEL 1: Detect a medium-sized port facility and/or distinguish between taxiways and runways at a large airfield.

RATING LEVEL 2: Detect large hangars at airfields. Detect large static radars (e.g., AN/FPS-85, COBRA DANE, PECHORA, HENHOUSE). Detect military training areas. Identify an SA-5 site based on road pattern and overall site configuration. Detect large buildings at a naval facility (e.g., warehouses, construction hall). Detect large buildings (e.g., hospitals, factories).

RATING LEVEL 3: Identify the wing configuration (e.g., straight, swept, delta) of all large aircraft (e.g., 707, CONCORD, BEAR, BLACKJACK). Identify radar and guidance areas at a SAM site by the configuration, mounds, and presence of concrete aprons. Detect a helipad by the configuration and markings. Detect the presence/absence of support vehicles at a mobile missile base. Identify a large surface ship in port by type (e.g., cruiser, auxiliary ship, noncombatant/merchant). Detect trains or strings of standard rolling stock on railroad tracks (not individual cars).

RATING LEVEL 4: Identify all large fighters by type (e.g., FENCER, FOXBAT, F-15, F-14). Detect the presence of large individual radar antennas (e.g., TALL KING). Identify, by general type, tracked vehicles, field artillery, large river crossing equipment, wheeled vehicles when in groups. Detect an open missile silo door. Determine the shape of the bow (pointed or blunt/rounded) on a medium-sized submarine (e.g., ROMEO, HAN, Type 209, CHARLIE II, ECHO II, VICTOR II/III). Identify individual tracks, rail pairs, control towers, switching points in rail yards.

RATING LEVEL 5: Distinguish between a MIDAS and a CANDID by the presence of refueling equipment (e.g., pedestal and wing pod). Identify radar as vehicle-mounted or trailer-mounted. Identify, by type, deployed tactical SSM systems (e.g., FROG, SS-21, SCUD). Distinguish between SS-25 mobile missile TEL and Missile Support Vans (MSVs) in a known support base, when not covered by camouflage. Identify TOP STEER or TOP SAIL air surveillance radar on KIROV-, SOVREMENNY-, KIEV-, SLAVA-, MOSKVA-, KARA-, or KRESTA-II-class vessels. Identify individual rail cars by type (e.g., gondola, flat, box) and/or locomotives by type (e.g., steam, diesel).

RATING LEVEL 6: Distinguish between models of small/medium helicopters (e.g., HELIX A from HELIX B from HELIX C, HIND D from HIND E, HAZE A from HAZE B from HAZE C). Identify the shape of antennas on EW/GCI/ACQ radars as parabolic, parabolic with clipped corners, or rectangular. Identify the spare tire on a medium-sized truck. Distinguish between SA-6, SA-11, and SA-17 missile airframes. Identify individual launcher covers (8) of vertically launched SA-N-6 on SLAVA-class vessels. Identify automobiles as sedans or station wagons.

RATING LEVEL 7: Identify fitments and fairings on a fighter-sized aircraft (e.g., FULCRUM, FOXHOUND). Identify ports, ladders, vents on electronics vans. Detect the mount for antitank guided missiles (e.g., SAGGER on BMP-1). Detect details of the silo door hinging mechanism on Type III-F, III-G, and III-H launch silos and Type III-X launch control silos. Identify the individual tubes of the RBU on KIROV-, KARA-, KRIVAK-class vessels. Identify individual rail ties.

RATING LEVEL 8: Identify the rivet lines on bomber aircraft. Detect horn-shaped and W-shaped antennas mounted atop BACKTRAP and BACKNET radars. Identify a hand-held SAM (e.g., SA-7/14, REDEYE, STINGER). Identify joints and welds on a TEL or TELAR. Detect winch cables on deck-mounted cranes. Identify windshield wipers on a vehicle.

RATING LEVEL 9: Differentiate cross-slot from single slot heads on aircraft skin panel fasteners. Identify small light-toned ceramic insulators that connect wires of an antenna canopy. Identify vehicle registration numbers (VRN) on trucks. Identify screws and bolts on missile components. Identify braid of ropes (1 to 3 inches in diameter). Detect individual spikes in railroad ties.

Dwayne Day can be reached at zirconic1@cox.net.

The Nancy Grace Roman Space Telescope Was Cut By President Trump and Saved By Congress

Roman The Nancy Grace Roman Space Telescope, NASA’s next astrophysics flagship mission, in a high bay at the Goddard Space Flight Center in April. (credit: J. Foust) Flagships on a budget by Jeff Foust Monday, May 11, 2026 Last month, NASA held a media day for its latest astrophysics flagship mission, the Nancy Grace Roman Space Telescope. Reporters crowded into a viewing gallery overlooking a high bay at the Goddard Space Flight Center. In the high bay was Roman, lit to give it an almost purplish hue. The spacecraft recently completed environmental testing and was being prepared for shipment to the Kennedy Space Center for launch. Roman has been spared much of the development drama that enveloped the James Webb Space Telescope. At the media day, NASA administrator Jared Isaacman announced the agency is planning to launch Roman on a Falcon Heavy in early September. That is eight months ahead of the launch readiness date of May 2027 that NASA set for the mission when it passed its confirmation review several years ago, while also remaining within a total lifecycle cost of $4.3 billion. “We're on track, as the administrator said, to launch early in September, and that is, I can't stop saying it, ahead of schedule and under budget,” said Nicky Fox, NASA associate administrator for science (who has indeed consistently highlighted Roman’s cost and schedule performance). Given the delays and overruns experienced by JWST, no one can blame NASA for trumpeting Roman’s performance. But what is the secret to its success? One factor, project officials said, is that the mission weighed programmatic factors like cost and schedule alongside technical issues.
“Something that we’ve done on Roman is we added the programmatics to that balancing equation, so everybody from the project manager all the way down to the techs on the floor understand how the cost and the schedule and the technical all have to come together on Roman,” said Jackie Townsend, Roman deputy project manager, in an interview during the media day. Another factor, she said, was the cost cap placed on Roman early in its development, along with “forward-phased” funding that avoided cash flow issues. “That combination of a cost cap and forward-phased funding allows for smart decisions all the way through the life cycle.” NASA officials have long stressed that keeping Roman on budget and on schedule is critical to the future of large flagship missions in the agency’s science directorate. If NASA can convince stakeholders, particularly in Congress, that it can avoid the problems that beset Webb, it believes it will be easier to secure funding for future flagship missions, like the Habitable Worlds Observatory. That logic appears to be working. Congress, in a fiscal year 2026 spending bill passed in January, provided $150 million for the Habitable Worlds Observatory, far more than the $3.3 million NASA requested for the mission in its proposal that slashed funding across the agency’s science programs. The funding is helping the program accelerate early design and technology work on the mission, which will fly a mirror between six and eight meters across with a coronagraph that will enable it to directly image Earth-sized exoplanets. “We’re finally going to get to accelerate our mission progress with real funding,” Giada Arney, project scientist for the mission, said at an American Astronomical Society (AAS) conference in January. Habitable Worlds is projected to launch no sooner than the early 2040s, but at the conference NASA officials suggested they would like to move up the mission.
“The things you do to move faster on the larger-scale missions at the outset are things we’re asking this community, and the people you work with outside this room, to do over the next few years,” said Shawn Domagal-Goldman, director of NASA’s astrophysics division, during a session about the mission at the AAS meeting. Isaacman, in a town hall the day after he was sworn in as administrator in December, also endorsed faster and more frequent flagship missions. “It would be great if we were launching flagship missions with even a greater cadence,” he said. That, though, clashes with both the costs of flagship missions and budget challenges. NASA’s fiscal year 2027 budget proposal would again cut NASA’s science programs by nearly 50%. Habitable Worlds would get just $5 million, jeopardizing the progress made to date on the mission and any efforts to accelerate its development. Isaacman argued that there is still room for flagship-class missions within those budgets. “Nancy Grace Roman is not the last flagship mission for us,” he said, citing work on Dragonfly, a mission under development to send a nuclear-powered rotorcraft to Saturn’s moon Titan. (Dragonfly is not technically a flagship mission, having been selected as part of the New Frontiers line of planetary science missions, but its costs have swelled to levels closer to a flagship.) “Going out and trying to unlock the secrets of the universe is fundamental to NASA's mission. I expect there'll be plenty of flagship missions in the future,” he argued. The flaglet approach It is difficult, though, to square the desire to continue flying flagship space science missions, let alone fly more of them, with the current budgetary environment. That’s led to discussion of other ways to do the science associated with such missions.
When the latest astrophysics decadal survey, Astro2020, recommended what became known as the Habitable Worlds Observatory as its top flagship mission, it did so as part of a program of such missions that would later include X-ray and infrared space telescopes. It was modeled on NASA’s original “Great Observatories” program that included the Hubble Space Telescope, Chandra X-ray Observatory, Compton Gamma-Ray Observatory, and Spitzer Space Telescope, which observed the universe from infrared to gamma rays. Habitable Worlds and those future observatories would address many key scientific questions, but do so on a timescale of decades. That is in part because of the schedule for their development: NASA doesn’t anticipate starting work on the mission to follow Habitable Worlds—either the X-ray or infrared telescope—until at least the early 2030s. There may be ways to accelerate that scientific quest. In a white paper completed last September and published last week on the arXiv preprint server, a team at NASA Goddard proposed a faster approach to addressing most of the scientific questions from Astro2020. “The study really came from a feeling that I had,” said Regina Caputo, first author of the study, during a seminar last Thursday, “that the current mission approach that we had, that we were implementing in order to realize Astro2020, was missing some key capabilities.” That sentiment, she said, was shared by colleagues at Goddard.
“We thought we’ll do the nerdiest thing that we could possibly think of and actually analyze the questions in the decadal survey to see if our feelings were correct.” The approach the study took was not to try to build more flagship missions but instead to see if they could be supplemented by smaller missions, dubbed “flaglets.” These would be missions costing $1 billion to $2 billion each, similar to the Astrophysics Probe line of missions also recommended by the decadal that NASA is starting to pursue. The study took the priority questions from Astro2020 and the capabilities needed to answer them, along with similar questions from the planetary science and heliophysics decadal surveys that could also be addressed by potential missions. Those 85 questions were cross-referenced with both flagship missions and 23 concepts for probe missions that were submitted as part of the development of Astro2020. The goal was to see how many probe-class missions could answer the majority of the science questions from the decadal, filling gaps left by the flagships. “What was really cool was that, with four probes, you could at least somewhat address over 80% of decadal survey science questions,” she said. “This was really exciting because four seems like a tractable number. It’s not 100.” Caputo and others involved in the study emphasized they were not recommending four specific probe concepts, but rather that some combination of four of them could address 80% of the science questions to some degree, with different sets of missions answering different groups of questions. The study presented what it called the “Flaglet Great Observatories” program to develop those missions. In one scenario, one flaglet would start development in fiscal year 2030 with the other three following every two years, each costing $1.5 billion.
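The cross-referencing exercise described above is, in effect, a set-cover problem: pick a small number of mission concepts whose combined coverage spans most of the science questions. A minimal sketch of a greedy approach is below; the question IDs and mission names are invented toy data for illustration, not the study's actual 85 questions or 23 probe concepts.

```python
# Toy illustration of cross-referencing science questions against
# mission concepts as a greedy set-cover problem. All data below is
# invented; the real study used Astro2020's questions and concepts.

def greedy_cover(questions, concepts, target=0.8, max_missions=4):
    """Pick up to max_missions concepts that together address at
    least `target` fraction of the questions."""
    covered, chosen = set(), []
    while len(chosen) < max_missions and len(covered) < target * len(questions):
        # Choose the concept that adds the most not-yet-covered questions.
        best = max(concepts, key=lambda c: len(concepts[c] - covered))
        if not concepts[best] - covered:
            break  # no remaining concept adds anything new
        chosen.append(best)
        covered |= concepts[best]
    return chosen, len(covered) / len(questions)

# Invented data: 10 stand-in science questions, 5 stand-in probe concepts.
questions = set(range(10))
concepts = {
    "far-IR probe": {0, 1, 2, 3},
    "X-ray probe": {3, 4, 5},
    "UV probe": {6, 7},
    "CMB probe": {7, 8},
    "time-domain probe": {2, 9},
}

chosen, frac = greedy_cover(questions, concepts)
print(chosen, frac)  # three toy probes suffice to reach 80% here
```

The greedy heuristic mirrors the intuition in the study: each added mission is judged by how many still-unanswered questions it picks up, which is why a handful of well-chosen probes can cover a large fraction of the list.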
A spending profile showed peak annual spending on the program reaching just under $525 million in 2041, but with the overall funding profile significantly smaller than that of JWST. An alternative approach would cap annual spending on the program at $400 million, or about a quarter of the current NASA astrophysics budget. That had the effect of stretching out the program, with missions starting every three to four years instead of every two. In a panel discussion that followed, members of the study team emphasized that the flaglets they proposed were not meant to replace flagship missions like the Habitable Worlds Observatory. “There are gaps in the recommendations from the decadal survey going forward that even the current suite of flagships don’t address and won’t address in the coming decades,” said Jennifer Wiseman, senior project scientist for Hubble. “This is a wonderful, we think, way of addressing a lot of those questions—something like 80% of the decadal survey priorities—in the next decade or a little bit beyond.” Flaglets would fit into NASA’s astrophysics budget, particularly using the cost-cap approach, along with a flagship-class mission and other, smaller missions and research. “There are some puts and takes, but it all still fits,” said Joshua Schlieder of Goddard’s Exoplanets and Stellar Astrophysics Laboratory. Flaglets have other advantages as well. When NASA selects a flagship mission for development, “the whole field is often warped into that one direction,” said Avi Mandell of Goddard’s Planetary Systems Laboratory. “People specializing in other wavelength ranges feel like maybe they don’t even have a career for the next 15 years.” A series of missions spread across the spectrum and the range of scientific questions highlighted in the decadal eases those concerns.
“In this perspective, where you have a built-in cross-wavelength, cross-science portfolio that is somewhat locked in, it gives a lot of impetus and motivation for people to feel free to pick the wavelengths and science that is really of need and interest,” he said. The flaglet concept is, for now, just a concept, without formal agency backing. However, it fits into a new effort within NASA called the Astrophysics Strategic Technology and Research Accelerator (ASTRA), which is designed to identify and study probe and flagship mission concepts in their earliest phases. That will include the far-infrared and X-ray flagship concepts recommended by Astro2020 but also other future concepts. ASTRA includes a workshop late this month on innovation in astrophysics mission concepts and another in September on science priorities. The agency plans to select mission concepts for study at the January 2027 AAS conference. That work will continue amid budget uncertainty: NASA’s Astrophysics Probe program, the model for the flaglet concept, is among the dozens of science missions that the agency’s fiscal year 2027 budget request would cancel. The same was true in 2026, but Congress rejected those cuts and may do so again. The flaglet approach does offer a way to answer astronomers’ pressing questions without relying primarily on flagships. “The fleet is bigger than the sum of its parts,” said Caputo. “You answer bigger questions when you have more tools at your disposal to address them.” Jeff Foust (jeff@thespacereview.com) is the editor and publisher of The Space Review, and a senior staff writer with SpaceNews. He also operates the Spacetoday.net web site. Views and opinions expressed in this article are those of the author alone.

Strategy Is Easy, Logistics Is Challenging: The Golden Dome Missile Defense System Proves This

Golden Dome One concept for the Golden Dome missile defense system with space-based elements. (credit: Redwire) Strategy is easy, but logistics is hard. Golden Dome proves it. by Bharath Gopalaswamy and Daniel Dant Monday, May 11, 2026 General Omar Bradley’s aphorism, “Amateurs talk strategy; professionals talk logistics,” has aged remarkably well. Today, it frames the central challenge facing the United States as it pursues the Golden Dome for America: a visionary, multilayered homeland defense architecture that depends less on conceptual brilliance than on industrial endurance. Golden Dome will not fail because the architecture is flawed; it will fail if logistics are treated as a secondary consideration. Golden Dome’s strategic ambition is unmistakable. The architecture envisions space-based sensors, including left-of-launch capabilities; proliferated satellite constellations; ground- and space-based interceptors; and integrated command and control. The technology exists or is within reach. The problem is not imagination or design. It is whether the United States possesses the industrial capacity to build, sustain, and replenish such a system under real-world conditions of disruption and conflict. That concern is not theoretical. Lt. Gen. Philip Garrant, commander of Space Systems Command, has been explicit that supply chain risk is among his command’s top priorities. He has publicly identified microelectronics, software, propulsion, and ground infrastructure as the most fragile points, before Golden Dome places demand on the system at a scale never previously experienced.[1] Demand at a scale the industrial base has never seen The United States is entering an era of unprecedented demand on the space industrial base.
The Space Development Agency is fielding Tranches 1 and 2 of its proliferated low Earth orbit architecture while the Department of the Air Force reassesses the future of data transport, with Tranche 3 on hold and requirements migrating toward the Space Data Network supporting Golden Dome. Space Systems Command is advancing missile-warning satellites in medium Earth orbit. Commercial providers are simultaneously deploying global communications and sensing constellations at historic rates. Meanwhile, China’s announced plans for massive sovereign constellations, regardless of their ultimate scope, signal intent that cannot be ignored. The problem is not the number of satellites. Instead, it is the concentration of demand on a surprisingly small set of suppliers. Reaction wheels, star trackers, radiation-hardened processors, propulsion systems, optical crosslinks, and traveling wave tube amplifiers all come from a narrow, overburdened industrial base. As Garrant has bluntly noted, radiation-hardened microelectronics remain, and will remain, the single greatest supply chain risk facing US space systems.[2] Golden Dome multiplies this problem. It assumes not only initial fielding at scale, but also sustained operations, resilience under attack, and the ability to replenish losses at speed. That assumption is not matched by present-day industrial realities. The efficiency trap Over the last decade, commercial space has delivered extraordinary cost reductions and accelerated innovation. Those gains made proliferated architectures viable and attractive to the Space Force. But they came with a tradeoff: consolidation. Production has been concentrated in fewer facilities, with fewer suppliers, optimized for efficiency rather than surge capacity or wartime resilience. Workforce skills, capital investment, and specialty manufacturing have followed the same path. Efficiency has produced vulnerability.
Industry analyses have warned that many space components now rely on three or fewer qualified domestic suppliers, often with long lead times and little or no surge capacity. In a contested environment, disruption is not hypothetical. The relevant question is not whether something will break, but whether the system can absorb shocks without losing mission effectiveness. Independent analyses by RAND and the Aerospace Corporation on proliferated LEO resilience underscore the same conclusion: redundancy on orbit means little if replenishment on the ground cannot keep pace.[3,4] Capital structure meets strategic reality Golden Dome relies heavily on commercial firms whose business models are optimized for growth markets, predictable demand, and peacetime economics, not for prolonged conflict or rapid replenishment under fire. This mismatch cannot be solved through contract language alone. The Center for a New American Security has warned that the US defense industrial base is at an inflection point, unable to meet the pace and scale of modern conflict without substantial investment.[5] Recent history reinforces the stakes. In the opening hours of Russia’s invasion of Ukraine, a cyberattack on Viasat’s KA-SAT network disrupted Ukrainian military communications across Europe, demonstrating that space infrastructure is now a first-strike target.[6] Both China and Russia have demonstrated kinetic antisatellite capabilities. Cyber and kinetic threats to space systems are no longer theoretical. They are proven. Innovation without industrial resilience is a liability. Golden Dome must be built not only to perform, but to endure. Europe’s signal and America’s hesitation Some US allies have already internalized this lesson.
European governments increasingly treat space systems as essential infrastructure, backing that view with long-term funding, sovereign manufacturing initiatives, and explicit demand signals. The EU’s €10.6 billion IRIS² constellation was designed from inception around resilience and redundancy, with a 12-year funding horizon. France has made direct investments in domestic propulsion and optical terminal manufacturing. The United States, by contrast, remains constrained by short-term budget cycles and uncertain demand signals, conditions ill-suited to the decade-long investments required to expand production capacity. A supplier deciding whether to build a second production line for optical terminals or propulsion systems is not reading strategy documents. It is reading order books. The questions that actually matter Golden Dome’s success will ultimately hinge on questions that rarely dominate program reviews: What happens if a single-source supplier goes dark for 18 months? How many anomalies can the architecture absorb before service degrades? What does replenishment look like if losses occur during a demand spike rather than a peacetime trough? Gen. Michael Guetlein, the former Vice Chief of Space Operations and now direct reporting program manager for Golden Dome, has described the challenge as “a heavy lift across all the organizations that are going to be participating,” underscoring that integration across services, agencies, and industry, not technology alone, will determine whether Golden Dome succeeds.[7] From diagnosis to action The path forward is neither mysterious nor inexpensive. Three concrete steps would materially reduce Golden Dome’s risk. First, the Department of Defense and Congress should formalize long-term demand signaling as procurement policy.
A published ten-year demand forecast from the Space Development Agency and Space Systems Command, backed by multiyear block buys for critical components, would give industry the planning horizon required to justify capital investment. Strategy papers do not drive manufacturing decisions. Orders do. Second, the Space Force should establish a formal space industrial base readiness framework. Aircraft fleets are measured by readiness rates, but the satellite industrial base has no equivalent. The Department needs a classified, systematic methodology to track single-source risk, lead times, and surge capacity across critical components. Without it, program managers are operating blind, repeating lessons learned decades earlier in airpower. Third, where single-source risk is unavoidable, such as radiation-hardened processors, reaction wheels, and optical crosslinks, the government should fund the qualification of second domestic suppliers. The precedent exists in propulsion and avionics. Maintaining second sources is expensive, but far less so than fielding a constellation that cannot be replenished when it matters most. Bradley was right. Strategy is the easy part. Golden Dome will not endure unless it rests on an industrial base built for resilience, not efficiency alone. Creating that base is not a supporting task of strategy. It is the task.
Endnotes
1. Calvin Biesecker, “Space Force Assessing Supply Chain Ahead of Potential Tariff Concerns,” Via Satellite, April 10, 2025. Lt. Gen. Philip Garrant identifies microelectronics, ground entry points, software, and propulsion tanks as the biggest risk areas in the Space Force supply chain.
2. Sandra Erwin, “Pentagon Underestimated Supply Chain Fragility, Now Racing to Fix Gaps,” SpaceNews, November 25, 2024. Garrant on systemic industrial base concerns and the Space Development Agency as a revealing case study of demand ramping on a narrow supplier base.
3. Jonathan P. Wong, Yool Kim, et al., Leveraging Commercial Space Services: Opportunities and Risks for the Department of the Air Force, RAND Corporation, RR-A1724-1, 2023. RAND analysis of commercial space market maturity, industrial base stability, and resilience considerations for the Department of the Air Force.
4. Andrew Berglund, Strengthening the Industrial Base to Deliver Proliferated Defense Space Systems, Center for Space Policy and Strategy, The Aerospace Corporation, Space Agenda 2025, October 2024. On the necessity of rebalancing from efficiency to resiliency across raw materials, parts, components, and manufacturing capacity to support proliferated DoD space architectures.
5. Becca Wasser and Philip Sheers, From Production Lines to Front Lines: Revitalizing the U.S. Defense Industrial Base for Future Great Power Conflict, Center for a New American Security, April 2025. CNAS assessment that the defense industrial base is at a critical inflection point, with current capacity insufficient to meet the demands of modern warfare.
6. Patrick Howell O’Neill, “Russia Hacked an American Satellite Company One Hour Before the Ukraine Invasion,” MIT Technology Review, May 10, 2022. See also Viasat, KA-SAT Network Cyber Attack Overview, corporate incident report on the February 24, 2022 attack against the KA-SAT network coincident with the opening of Russia’s invasion of Ukraine.
7. Courtney Albon, “‘Golden Dome’ Success Will Require National Buy-In, Official Says,” Military Times, March 6, 2025. Guetlein remarks at the National Security Innovation Base conference, Washington, D.C., March 2025.
Bharath Gopalaswamy, PhD & Col. Daniel Dant (Ret.) are Senior Visiting Scholars at the National Space Power Center (NSpC) at the United States Space Force Association (USSFA).
Bharath Gopalaswamy, PhD, Founder, Mission Resilient Solutions, is an aerospace, defense, and emerging technology executive with extensive experience leading growth, strategy, and advanced programs across the US, Europe, and the Middle East. Col. Daniel Dant (Ret.) is a Vice President of Strategic Initiatives at KBR. He previously served with the US Air Force as a space weapons officer, acquirer, and operator. He currently serves on the Board of Mission Resilient Solutions.

Three Steps Forward and One Step Back: The Artemis Program

Starlab The Starlab commercial space station is among the concepts threatened by NASA’s proposed changes to its Commercial LEO Destinations program. (credit: Starlab Space) Three steps forward but one step back by Dale Skran Monday, May 11, 2026 Some Artemis components, such as the Space Launch System (SLS) and Orion, have long held a complex position in the space community. For some, the program was an exciting venture back to the Moon and on to Mars. For others, it was a Congressionally mandated boondoggle focused on a jobs program leading nowhere. A key part of the Artemis architecture was the lunar Gateway, which existed mainly because the underpowered Orion service module could not directly reach low lunar orbit. As a result, the space community sustained continuing cognitive dissonance. On one hand, Orion, the SLS, and Gateway employed thousands of dedicated engineers and scientists. On the other, it was increasingly obvious that, as the New Glenn and Starship programs evolved, and with the amazing success of SpaceX’s Falcon 9 and Falcon Heavy, reusable rockets were the future, and that the single-use SLS would never be competitive with the rising tide of new heavy-lift vehicles. Yet the SLS, Orion, and Gateway had strong defenders in Congress, and skeptics for the most part avoided direct attacks on these programs. Into this long-standing conundrum came the new NASA administrator, Jared Isaacman. Despite skepticism that he possessed the political chops for the job, Isaacman reformed the Artemis concept of operations and goals dramatically and extremely rapidly. In the blink of an eye, the Gateway was cancelled and its initial component, the Power and Propulsion Element (PPE), was repurposed as part of a nuclear-electric journey to Mars.
The Exploration Upper Stage (EUS) of the SLS and its associated taller mobile launcher, years late and billions over budget, were also cancelled. The reliable and fully operational Centaur V is set to replace the Interim Cryogenic Propulsion Stage (ICPS), the current upper stage of the SLS vehicle. With these simplifications, the SLS/Orion launch cadence will increase, and the Artemis 3 lunar landing mission has been reset as an Earth-orbital rendezvous between Orion and one or both of the Human Landing System (HLS) landers, currently being pursued by SpaceX and Blue Origin. This pushes the first return to the Moon out to Artemis 4. The lunar orbit for Artemis 4 will also be modified to reduce the change in velocity needed to accomplish the mission. Finally, the primary goal of Artemis became the construction of a lunar base, a milestone long sought by space advocates. To support the new lunar surface base, Isaacman called for a large increase in the number of Commercial Lunar Payload Services (CLPS) missions to deliver, in part, cargo and materials for the new lunar base. All in all, these changes are magnificent. Not just two steps forward–more like three! Artemis transformed from something often viewed as a pointless jobs program with a few good elements to an important effort to build a lunar surface base. The SLS moved from being the rocket NASA planned to fly for 50 years (a wholly unrealistic idea) to a transitional program element with capped development. Bravo Jared Isaacman! Unfortunately, NASA also proposed to replace the Commercial LEO Destinations (CLD) program with a “core” module owned and operated by NASA, to which “commercial” modules would be attached to form a mini-ISS. This proposed direction would devastate the CLD competitors.
These companies have raised billions of dollars from investors, and their plans would be kicked aside by the “core” module. As one example, the Starlab space station, currently being built by Voyager Technologies, would be crippled by the requirement that CLD stations start their operational lives attached to the ISS. The CLD companies are fighting back but are limited in what they can do or say since they must work with NASA one way or another. The core module is a big step backward for the broader goals of space development and settlement. Perhaps the greatest damage done by the core module proposal, beyond a decade or more setback in the development of a commercial LEO economy, is to NASA’s reputation as a reliable partner for commercial firms. If CLD firms that have raised and invested billions in pursuit of a goal set by NASA are then deliberately undermined by a radical and dubious change of direction, this suggests that NASA will not deliver on public-private partnership (PPP) commitments. Since the Artemis program contains several vital PPPs, including HLS, the core module proposal directly threatens the success of Artemis. NASA needs to immediately drop the core module proposal and issue the long-overdue request for proposals for the next stage in the CLD program. Additionally, NASA needs to step up and commit to being a long-term anchor tenant on the CLD stations, with funding between 25% and 50% of the current ISS operational budget. Finally, NASA needs to get out of the business of determining when the LEO market is “ready.” It is not reasonable to expect the full blooming of the LEO economy until NASA no longer functions as a gatekeeper limiting the scope of commercial operations in orbit.
As one example, some at NASA have complained that a vibrant orbital tourism industry has yet to emerge, but it is impossible to see how such a thing could happen with NASA limiting private astronaut missions to the ISS to two per year, restricting crew size to four, and requiring CLD companies to fill one of the four seats with a former NASA astronaut (although this last requirement has just been changed). Additionally, the ISS was not designed as a tourist destination. If NASA continues with a core module owned and operated by itself, we can look forward to Chinese dominance of LEO operations for the ten years or more following the deorbit of the ISS. Hopefully, Congress will step back from this unwise direction and stay the course toward the development of a vibrant LEO economy based on true commercial LEO stations. Dale Skran is the National Space Society’s chief operating officer and senior vice president. He was awarded the DMTS title at Bell Labs and later held executive positions at Ascend Communications, Sonus Networks, and CMware Inc. This article is not an official position of the National Space Society.

Saturday, May 9, 2026

A Brazilian Scientist Finds A Shortcut For Manned Missions To Mars

'I was not looking for this': Scientist accidentally finds shortcut to Mars that could slash travel time in half A new study suggests early asteroid trajectory data could help design faster Mars missions, potentially cutting round-trip travel time to under a year. By Sharmila Kuthunur An illustration of a spacecraft bound for Mars. New research unveils a possible shortcut to the Red Planet that could drastically cut down mission timelines. (Image credit: dottedhippo via Getty Images) Astronauts could complete a round trip to Mars in less than a year someday, potentially cutting current mission timelines in half, according to a new study that drew inspiration from asteroid trajectories. Under current mission profiles, reaching Mars, which is located about 50% farther from the sun than Earth is, takes roughly seven to 10 months. Because Earth and Mars align for fuel-efficient transfers only every 26 months, astronauts must wait for a return window, stretching a full round trip to nearly three years. However, the new findings, published online in the journal Acta Astronautica in April, suggest that early, imprecise orbital estimates of near-Earth asteroids — which were historically used to assess impact risks, before being discarded in favor of more precise data — may contain valuable geometric clues for designing faster interplanetary routes.
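The baseline numbers above follow directly from two-body orbital mechanics. A minimal sketch, assuming circular coplanar orbits (a simplification of mine, not the study's Lambert machinery), reproduces both the roughly eight-month Hohmann transfer and the 26-month launch-window cadence:

```python
import math

YEAR_DAYS = 365.25

def hohmann_transfer_days(r1_au: float, r2_au: float) -> float:
    """Half the period of the elliptical transfer orbit joining two circular
    heliocentric orbits; Kepler's third law in AU/year units (T^2 = a^3)."""
    a = (r1_au + r2_au) / 2.0            # semi-major axis of the transfer ellipse
    return 0.5 * math.sqrt(a ** 3) * YEAR_DAYS

def synodic_period_months(t1_years: float, t2_years: float) -> float:
    """How often the same relative launch geometry between two planets repeats."""
    return 12.0 / abs(1.0 / t1_years - 1.0 / t2_years)

earth_r, mars_r = 1.0, 1.524             # mean orbital radii, AU
mars_year = 1.881                        # Mars orbital period, years

print(f"Hohmann transfer time: {hohmann_transfer_days(earth_r, mars_r):.0f} days")
print(f"Launch geometry repeats every {synodic_period_months(1.0, mars_year):.1f} months")
```

Run as-is, this gives a one-way transfer of about 259 days and a window spacing of about 25.6 months, consistent with the "seven to 10 months" and "every 26 months" figures quoted above.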
"Maybe this can change the idea that we need more than two years to go to Mars and return," study author Marcelo de Oliveira Souza, a cosmologist at the State University of Northern Rio de Janeiro in Brazil, told Live Science. "I was not looking for this" Souza first stumbled on the idea in 2015, when he was studying near-Earth asteroids. One object in particular, 2001 CA21, caught his attention because early estimates suggested it followed a rare path crossing both Earth's and Mars' orbital zones. Although later measurements refined the asteroid's true trajectory, its initial geometry during the October 2020 opposition — when Earth and Mars were aligned on the same side of the sun, and closest together in their orbits — hinted at the possibility of "ultra-short" routes between the two planets, Souza noted in the paper. "This was a surprise for me — I was not looking for this," he told Live Science. As more observations allow astronomers to refine an asteroid's orbit, those early trajectories change, so someone analyzing it later wouldn't have seen the same path, Souza added. "Maybe I was in the right place at the right time," he said. Round trip to Mars?
For the October 2020 opposition, Souza's calculations showed that a very fast, roughly 34-day trip from Earth to Mars is geometrically possible if a spacecraft follows a path similar to the asteroid's early orbital plane. However, such a trajectory would require departure speeds of around 32.5 kilometers per second, well beyond current rocket capabilities, and a spacecraft would arrive at Mars traveling around 67,000 mph (108,000 km/h) — too fast for existing landing systems to handle safely, Souza noted in the paper. The geometry of a 33-day Mars trip (left) compared to a 90-day voyage (right). (Image credit: Acta Astronautica / Marcelo de Oliveira Souza) Instead, Souza used the asteroid-inspired geometry to explore possible trips during future Mars oppositions in 2027, 2029 and 2031. By using a standard method for calculating paths between two points in space (called the Lambert analysis) and constraining those paths to remain within about 5 degrees of the asteroid's orbital tilt, Souza found that only the 2031 alignment offered a viable opportunity for rapid travel using near-term technology.
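A full Lambert solver is beyond a short sketch, but the basic tradeoff between departure speed and flight time can be illustrated with a toy two-body model. The assumptions here are mine, not the paper's: circular coplanar planetary orbits and a tangential heliocentric departure from Earth's orbit, with flight time recovered from vis-viva and Kepler's equation:

```python
import math

MU = 4 * math.pi ** 2        # Sun's gravitational parameter in AU^3/yr^2
AU_PER_YR_IN_KMS = 4.7406    # 1 AU/year expressed in km/s
YEAR_DAYS = 365.25

def flight_time_days(v_dep_kms: float, r2_au: float = 1.524) -> float:
    """Days to coast from Earth's orbit (1 AU) out to Mars' orbit, given a
    tangential heliocentric departure speed v_dep_kms. math.acos raises
    ValueError if the resulting ellipse never reaches Mars' orbit."""
    v = v_dep_kms / AU_PER_YR_IN_KMS        # departure speed, AU/yr
    a = 1.0 / (2.0 - v * v / MU)            # vis-viva evaluated at r = 1 AU
    e = 1.0 - 1.0 / a                       # perihelion pinned at Earth's orbit
    E = math.acos((1.0 - r2_au / a) / e)    # eccentric anomaly at Mars' radius
    M = E - e * math.sin(E)                 # Kepler's equation: mean anomaly
    n = math.sqrt(MU / a ** 3)              # mean motion, rad/yr
    return M / n * YEAR_DAYS

# Just above the Hohmann minimum (~32.7 km/s heliocentric) vs. a faster departure
for v in (32.8, 38.0):
    print(f"{v} km/s heliocentric -> {flight_time_days(v):.0f} days to Mars' orbit")
```

In this toy model the near-Hohmann case takes roughly 230 days while the 38 km/s departure cuts the coast to about 90 days, showing how steeply flight time falls as departure energy rises. None of this reproduces the study's specific 33-day or 153-day figures, which come from three-dimensional Lambert solutions constrained to the asteroid's orbital plane.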
In that window, a round-trip mission from Earth to Mars could be completed in just 153 days, or roughly five months, according to the study. In that scenario, a spacecraft would depart Earth on April 20, 2031, at about 27 kilometers per second, arrive at Mars by May 23 after a 33-day journey, spend about 30 days on the surface, depart June 22 and return to Earth by Sept. 20, with the return leg taking roughly 90 days. Souza also identified a lower-energy alternative within the same window, requiring a launch at about 16.5 kilometers per second for a mission lasting about 226 days, or about 7.5 months — still significantly shorter than current mission timelines. Still, the concept remains largely theoretical and would depend heavily on mission specifics — including spacecraft design, payload mass and propulsion capabilities — all of which would shape whether such fast transfers are feasible in practice. The method, however, could still prove useful as a way to narrow the search for viable trajectories. The required velocities are comparable to those achieved by missions such as New Horizons, the NASA probe launched in 2006 on a flyby mission to Pluto; its departure speed of 16.26 kilometers per second made it the fastest human-made object ever launched from Earth. Such high-speed trajectories could be within the reach of next-generation rockets such as SpaceX's Starship or Blue Origin's New Glenn, Souza told Live Science. Article Sources De Oliveira Souza, M. (2026). Using asteroid early orbital data for rapid Mars missions. Acta Astronautica, 246, 354–366. https://doi.org/10.1016/j.actaastro.2026.04.018

Tuesday, May 5, 2026

Satellite Vulnerability In The 1970s

ASAT The Soviet co-orbital anti-satellite weapon was first developed in the 1960s and became operational by the 1970s. This one is in a Russian museum. By the 1970s, the Soviet Union was developing additional anti-satellite weapons. (credit: TV Zvezda) Battle for the heavens: intelligence satellite vulnerability in the 1970s by Dwayne A. Day Monday, May 4, 2026 Starting in the late 1960s, the Soviet Union began testing a new anti-satellite weapon system that maneuvered a satellite close to its target and then fired a shaped explosive charge at it, showering it with metal fragments. The United States detected these tests and, over the next few years, US satellite operators became concerned that in the event of a war, the Soviet Union could negate America’s military and intelligence space assets. American satellite vulnerability became a major concern by the mid-1970s and, in July 1976, President Ford signed National Security Decision Memorandum 333 (NSDM 333), titled “Enhanced Survivability of Critical U.S. Military and Intelligence Space Systems.” It shifted the United States from a policy of assuming space was a “sanctuary” and satellites would not be attacked, eventually to development of an American capability to attack Soviet satellites. By the late 1970s, space warfare had become a real possibility (see “To attack or deter? The role of anti-satellite weapons,” The Space Review, April 20, 2020.) But there was another and more obvious response to the increasing vulnerability of US satellites to attack: make them less vulnerable. In response to NSDM 333, in October 1976 the National Reconnaissance Office, which managed the nation’s fleet of intelligence satellites, produced a report on the vulnerability of its satellites to attack and plans to reduce their vulnerability.
The executive summary of the report was recently declassified and provides a fascinating insight into the emerging threats to American satellites in the mid-1970s. Titled “Survivability Enhancement Action Plan,” with a clever 1970 cover illustration produced by Soviet military officers depicting “The Space Networks of Espionage,” the summary stated that, up to this point the NRO had assumed “that reconnaissance satellites are stabilizing in times of crisis, and that reconnaissance spacecraft are therefore sanctioned,” in other words safe from attack. But this situation was changing and they could no longer assume this. ASAT The cover of a 1976 study on how to make American intelligence satellites more survivable. The cover art was produced by two Soviet officers for a Soviet military journal and probably not used with their permission. (credit: NRO) The document added: “In the belief that the programmatic goal at this point is to define a general survivability objective and level of effort, the NRO has developed several alternative programs of graduated cost and effectiveness against the foreign threat. To assure confidence in the resulting cost estimates, these alternatives have been constructed from specific projects identified for each system.” Because of the speed of the study, the cost estimates were rough and a detailed follow-on study would be required. The study focused on the next five to ten years, developing remedies for systems already in acquisition. It noted, “Such remedies are typically only partially effective, since they basically require retrofitting systems not initially designed for survivability. The far-term offers greater opportunity. New reconnaissance system concepts can be developed from the beginning with survivability as a major system performance criteria. 
Systems specifically emphasizing survivability can be conceived, taking advantage of such recent introductions as the Space Shuttle.” The NRO indicated that it might also investigate fundamentally different systems, such as quick-reaction imagery and signals intelligence systems. The study divided existing NRO satellite systems into three categories: most critical, critical, and least critical. In the most critical category was the KH-11 KENNEN near-real-time imaging satellite, which would have its first launch in December 1976. Two other systems with their names deleted were also included in the most critical category and these were most likely high-altitude signals intelligence satellites, such as the recently declassified JUMPSEAT satellite. Several systems with their names deleted were included in the “critical” category, probably including the newly-fielded PARCAE ocean-surveillance system and at least one other high-altitude signals intelligence system. In the least critical category were the HEXAGON and GAMBIT imagery satellites, and the Program 989 low-orbiting signals intelligence satellites. A primary characteristic of these three systems was that they did not deliver their data to the ground very quickly and therefore were not as vital during a crisis situation.
The DoD/Intelligence Community NSDM 333 Response Working Group developed a list of operational options:
- Maintain space operations in peacetime (but during interference)
- Manage and control escalation or deescalation of US/Soviet crisis/confrontation
- Manage and control escalation of a conventional conflict involving the US but not involving the Soviet Union or vital Soviet interests (e.g., Vietnam)
- Maximize military support of a conventional conflict involving the US but not involving the Soviet Union or vital Soviet interests (e.g., Vietnam)
- Manage and control escalation of a US/Soviet conventional conflict
- Maximize military support during a US/Soviet conventional conflict
- Manage and control escalation of a NATO/Pact conflict
- Maximize military support during a NATO/Pact conflict
- Support the conduct of limited strategic nuclear options
- Support the conduct of strategic nuclear conflict
ASAT In the mid-1970s, the US military determined that there were multiple potential threats to American satellites from Soviet weapons, not only operational anti-satellite weapons, but also newly emerging threats like high-powered lasers. (credit: NRO) The Soviet threat The vulnerability study summarized the Soviet threat, noting that: The Soviet ASAT threat to U.S. satellites consists of a variety of systems and capabilities. The Soviets have a coorbital intercept system that uses a fragmentation warhead. It uses a modified SS-9 ICBM booster and can intercept targets at up to 2,500 NM altitude. Using a larger space booster, it could intercept targets in semi-synchronous and synchronous (19,300 NM) orbits. A probable high power ground-based laser, possibly already in operation, may be an ASAT system under development. It is likely the Soviets will undertake development of a very high power ground-based laser ASAT system. The Soviets intend to conduct electronic warfare against satellites during wartime and are believed to have such a capability.
The Soviets are reportedly developing a space-based laser weapon for use against satellites which could be demonstrated in the early 1980s. In addition, the nuclear-armed Galosh ABM interceptors would undoubtedly be used in an ASAT role against satellites thought to threaten Moscow. The Soviets could develop nuclear intercept systems for attack of very high altitude satellites. There is no evidence of such development. The Soviets also have the capability for covert attacks on space systems ground facilities in the U.S. and overseas. It is highly likely that the Soviets will develop radio-frequency damage weapons, in spite of the uncertainty in achieving kill inherent in such weapons. The NRO also noted that its “systems are also vulnerable to inadvertent destruction from non-targeted nuclear weapons and sabotage of ground facilities.” The report stated that “the development of very high value National Reconnaissance Program collection systems has placed a premium on survival techniques to allow mission completion by existing or replacement systems. The application of sophisticated U.S. space technology in the survival enhancement area is expected to provide a high payoff in mission completion and in increasing the difficulties encountered by the Soviet ASAT forces.” ASAT A table showing potential threats to American intelligence satellites and potential countermeasures to defeat them. (credit: NRO) Surviving the Soviet threat The summary included a table of “Major Survival Enhancement Options” that identified both the threat and the countermeasure. For instance, countermeasures to orbital interceptors included evasive maneuvers, homing sensor deception and jamming, as well as the proliferation of systems (i.e., not putting all the NRO’s eggs in one basket). For a high-power laser threat, the countermeasures included avoiding the laser site and hardening the satellite.
Physical attacks on ground stations and launch vehicles could be countered with increased physical security. Countermeasures came with costs, however. These were not only monetary—hardening a satellite could be expensive—but could affect satellite operations. As the report noted, maneuvering would make an attack more difficult, but would also limit the effectiveness, and maybe the orbital lifetime, of the satellite. In the future, if decreasing vulnerability was a design goal, satellites could be designed with more fuel, increasing their lifetime and making it possible to change orbits to dodge threats. The report stated that encryption was by then fully implemented on several systems and would be included in all new systems and retrofitted onto older ones. “Minimal on-board verification sensors for laser attack” were included on HEXAGON and KENNEN satellites, and a “more complete verification package, including an expanded sensor complement to respond to other threats and reduce ambiguities” would be developed in the near future. Eighteen months earlier, the NRO had started an effort to harden its ground facilities against attack. However, some of these facilities, like the ground tracking and communications station at Buckley Air Force Base north of Denver, and the Space Tracking Center in Sunnyvale, California (famously referred to as “The Blue Cube”), were located very close to busy civilian areas. It would not be difficult for an enemy special operations team to get close enough to attack them, even if that only meant firing rocket-propelled grenades from the back of a pickup truck on the 101 Freeway in California.
Vandenberg Air Force Base, where many NRO satellites were launched, was a sprawling facility with vast, open, lightly-patrolled spaces. Throughout its history, there is evidence that both amateur rocket enthusiasts and possibly even Soviet special operatives penetrated the base. One of the major recommendations was that survivability should be included as a system performance evaluation criterion. It had to be considered from the start, not added after all the other performance decisions were made. Although the summary is not highly detailed, the fact that the NRO even declassified it is remarkable. Certainly, the issue of satellite vulnerability has become more urgent today, as many more governments now have access to anti-satellite capabilities, and space has become, in the euphemism of the warfighter, “a contested realm.” But as the report makes clear, this has been true for a very long time. Dwayne Day can be reached at zirconic1@cox.net.