The CDC, Palantir and the AI-Healthcare Revolution
II. When Data Becomes Policy
The CFA plans to utilize this vast array of data to inform real-time policy decisions related to future pandemic planning and response. Multiple divisions within the CFA will contribute to this strategy of embedding data directly into policy decision-making.
The Office of the Director will oversee the general direction of this effort, as it defines “goals and objectives for policy formation, scientific oversight, and guidance in program planning and development…” The Office of Policy and Communications will then presumably work to translate these objectives into concrete policies and regulations, as it is responsible for “review[ing], coordinat[ing], and prepar[ing] legislation, briefing documents, Congressional testimony, and other legislative matters” as well as coordinating the “development, review, and approval of federal regulations,” presumably surrounding pandemic policy, surveillance, data and response efforts.
The Predict Division will play a crucial role in informing the specifics of these policies, as it generates “forecasts and analyses to support outbreak preparedness and response efforts” and collaborates with partners at the local, federal and international levels “on performing analytics to support decision-making.” It will also perform tabletop simulations to “match policies and resources with [its AI-generated] forecasts,” leaving decisions that shape communities’ freedom and access to medical care in the hands of algorithms and datasets.
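To make concrete what “matching policies and resources with forecasts” could mean in practice, consider the minimal Python sketch below. Everything in it, the region names, case counts, supply figures and allocation rule, is a hypothetical illustration of the general technique; it does not describe any actual CFA model or dataset.

```python
# Illustrative sketch only: a toy version of forecast-driven resource matching.
# All names and numbers are invented; this does not represent any CFA system.

def allocate_supplies(forecasts: dict[str, int], total_supply: int) -> dict[str, int]:
    """Split a fixed supply across regions in proportion to forecasted cases."""
    total_cases = sum(forecasts.values())
    if total_cases == 0:
        return {region: 0 for region in forecasts}
    return {
        region: round(total_supply * cases / total_cases)
        for region, cases in forecasts.items()
    }

def flag_surge_regions(forecasts: dict[str, int], capacity: dict[str, int]) -> list[str]:
    """Flag regions whose forecasted cases exceed their local capacity."""
    return [region for region, cases in forecasts.items() if cases > capacity.get(region, 0)]

if __name__ == "__main__":
    forecasts = {"Region A": 1200, "Region B": 300, "Region C": 4500}  # hypothetical forecasts
    capacity = {"Region A": 2000, "Region B": 500, "Region C": 2500}   # hypothetical bed capacity
    print(allocate_supplies(forecasts, total_supply=10_000))
    # {'Region A': 2000, 'Region B': 500, 'Region C': 7500}
    print(flag_surge_regions(forecasts, capacity))
    # ['Region C']
```

The point of the toy example is that once a forecast exists, the downstream decisions about who gets supplies and which regions get flagged for intervention follow mechanically from it, which is precisely the concern about leaving such decisions to algorithms and datasets.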
Importantly, the CFA will not only utilize this data for long-term preparation and research, but also in critical, high-pressure moments. Specifically, the Predict Division’s data sets and models will be used “to address questions that arise with short latency.”
During outbreaks, the questions arising with “short latency” would likely relate to containment efforts, and thus to lockdown policy. The Analytics Response Branch of the CFA, which uses its “analytical tools” to aid “decision making for key partners” during a potential or ongoing outbreak, is also responsible for analyzing “disease spread through existing data sources to identify key populations/settings at highest risk” and correspondingly providing “essential information to key partners in decisions surrounding community migration” (emphasis added).
Though somewhat vague, this language suggests that AI-informed policy will subject certain communities and individuals to an extraordinary level of intrusion. Specifically, beyond more general, overarching pandemic policy, it appears that AI-generated forecasts and “risk levels” will dictate policy at the local, or perhaps even individual, level, directly controlling the movement, or “migration,” of communities.
Indeed, the CFA’s cooperative agreement states that the ability to apply data-driven, “mathematical” methods to tackle health equity problems in the face of disease outbreaks is “of great interest to the CFA.” Key to this objective is the collection of data “on the social determinants of health” to utilize in disease forecasting. These “social determinants” include “geography (rural/urban), household crowding, employment status, occupation, income, and mobility/access to transportation,” as well as race, so long as race is not recognized as an “independent exposure variable” but instead is seen as a “proxy” for other social determinants.
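As a rough illustration of how such “social determinants” could be folded into a community-level risk score, the Python sketch below combines the determinants listed in the agreement into a single number. The field names, weights and example values are invented purely for illustration and do not reflect the CFA’s actual models or data; consistent with the agreement’s framing, race is not included as an independent input.

```python
# Illustrative sketch only: a toy community risk score built from the social
# determinants the cooperative agreement lists. Weights and values are invented.

from dataclasses import dataclass

@dataclass
class CommunityProfile:
    rural: bool                 # geography (rural/urban)
    household_crowding: float   # e.g., average persons per room
    unemployment_rate: float    # share of working-age adults unemployed (0-1)
    median_income: float        # household median income, US dollars
    transit_access: float       # mobility/access to transportation, 0 (none) to 1 (full)
    # Per the agreement's framing, race is not an independent exposure variable
    # and therefore does not appear as a direct input here.

def risk_score(p: CommunityProfile) -> float:
    """Combine determinants into one score; higher values mean higher assumed risk."""
    score = 0.0
    score += 1.0 if p.rural else 0.0
    score += 2.0 * p.household_crowding
    score += 3.0 * p.unemployment_rate
    score += 2.0 * max(0.0, 1.0 - p.median_income / 75_000)  # income shortfall vs. a reference level
    score += 1.5 * (1.0 - p.transit_access)
    return score

if __name__ == "__main__":
    community = CommunityProfile(rural=True, household_crowding=1.4,
                                 unemployment_rate=0.11, median_income=42_000,
                                 transit_access=0.3)
    print(f"risk score: {risk_score(community):.2f}")  # prints a single number, e.g. 6.06
```

Whether such a score comes from a hand-written formula like this or from a trained model, the policy-relevant output is the same kind of object: a number attached to a community that then drives decisions about that community’s movement and access.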
While this “targeted” approach may, on the surface, seem to offer a solution to the universal pandemic policies implemented previously, the digitization of lockdowns still has the potential to seriously threaten individual and communal autonomy, only this time under the auspices of “objective” data accumulated and interpreted by AI technology.
Who’s Behind the AI-Healthcare Push?
The tentacles of the biosecurity apparatus spread across multiple sectors of government and business, transcending the heavily blurred, essentially illusory lines between the public and private sectors and between Big Tech and Big Pharma. Military officials, tech operatives and global public health institutions all play a significant role in lobbying for and implementing this emerging healthcare industry.
I. The Military
While the idea of developing preemptive vaccines against novel infectious pathogens dates back to the Reagan era, these early efforts focused on diseases that would emerge in a human population via a bioweapon, rooting the strategy in national security rather than traditional disease response. Yet in the modern era, this militarized approach to public health has become the dominant ideology in establishment public health sectors, as demonstrated by the core principles on which the CFA is built.
The CFA’s Office of the Director ensures that “the CFA strategy is executed by the Predict Division and aligned with overall CDC goals” (emphasis added). While the vagueness of this passage leaves the exact nature of the referenced “CDC goals” unstated, the CDC’s national biosurveillance strategy for human health sheds light on the hidden agenda here.
The strategy is cemented in “U.S. laws and Presidential Directives, including Homeland Security Presidential Directive-21 (HSPD-21), ‘Public Health and Medical Preparedness.’” HSPD-21 is a Bush-era Department of Homeland Security directive made to “guide…efforts to defend [against] a bioterrorist attack” that are also “applicable to a broad array of natural and manmade public health and medical challenges.” The directive aimed to predict disease outbreaks, natural or bioweapon-induced, via “early warning” and “early detection” of “health events.” Strikingly similar to the TIA “Bio-Surveillance” objectives, these values appear to have been placed in good hands at the CFA, as the Center’s director, Dylan George, previously served as vice president of In-Q-Tel, the venture capital arm of the CIA.
A recent trip that US Army officials made to Silicon Valley illustrates how the ideology behind this strategy has manifested through the relationship between Silicon Valley, academia and the Pentagon. In this “pivotal visit” to the San Francisco Bay Area in Aug. 2024, the US Army’s surgeon general, Mary K. Izaguirre, met with scientists at Stanford University and Google to further “the Army’s efforts to integrate cutting-edge technology and build stronger ties with civilian sectors.” Izaguirre rendezvoused with Civilian Aides to the Secretary of the Army (CASAs) and Army Reserve Ambassadors to discuss “their efforts to bridge the gap between the Army and the civilian community.”
When she met with the Stanford scientists, who have “a long history of collaboration with the military, particularly through research initiatives that contribute to national defense and public health,” they briefed her on advancements in AI allegedly capable of “[revolutionizing] emergency medicine.” This tech was part of Stanford’s, and presumably the military’s and Big Tech’s, “broader mission to integrate AI into various aspects of health care…”
From there, Izaguirre traveled to Google’s headquarters, where she and the company’s tech experts discussed how Google’s “AI, machine learning, and cloud computing capabilities” could assist the Army’s healthcare ambitions. She also thanked Google for helping veterans “find their footing” after their time in the military, acknowledging the role that the company’s “SkillBridge” program plays in aiding soldiers in their transitions “into civilian careers,” which provides a convenient funnel from the military into Silicon Valley for lucky servicemen. The article concluded by remarking that through its collaboration with “leaders in academia and technology, the Army aims to equip its soldiers with the best tools and support for the challenges ahead.” Notably, Google also shares a $9 billion cloud computing contract with the Pentagon, alongside Amazon Web Services (AWS), Microsoft and Oracle, for the military’s Joint Warfighting Cloud Capability (JWCC) system.
This meeting, along with the ever-growing partnerships between Big Tech and the Pentagon, obviously does not occur in a vacuum, but instead represents a natural culmination of years-long industry plans to merge Silicon Valley data with military data. In March 2019, for example, Dr. Ryan Kappedal, a former intelligence officer whose pedigree includes stints as lead product manager for the Pentagon’s Defense Innovation Unit (DIU) and data scientist at Johnson & Johnson, and who is currently a lead manager at Google, co-authored an article with Dr. Niels Olson, a US Navy Commander and the Laboratory Director at US Naval Hospital Guam, for the Pentagon-funded neoconservative think tank the Center for a New American Security (CNAS). Titled “Predictive Medicine: Where the Pentagon and Silicon Valley Could Build a Bridge in Artificial Intelligence,” the article fantasized about the merging of the very industries Kappedal hails from:
“[With] the Department of Veterans Affairs (VA) healthcare system, the federal government has the largest healthcare system in the world. In the era of machine learning, this translates to the most comprehensive healthcare dataset in the world. The vastness of the DoD’s dataset combined with the department’s commitment to basic biological surveillance yields a unique opportunity to create the best artificial intelligence–driven healthcare system in the world.” (emphasis added)
While the CNAS authors claim that the Pentagon and Silicon Valley merely aim to improve civilian and military healthcare through this AI healthcare system, this technocratic evolution of healthcare, importantly, presents a mutually beneficial opportunity for both institutions. For the private sector, as the CNAS article states, the DoD possesses a plethora of data with “intrinsic commercial value.” For the Pentagon, such a relationship with Silicon Valley would expand its data mining efforts into the body, allowing a wider array of valuable data to be used for national security purposes.
Further, implementing a predictive medicine infrastructure provides both sectors with a pretext to amass more health data, and to do so continuously, in order to train the predictive AI technology. The Pentagon has already invoked this justification to increase its data-collection efforts in the name of creating this AI healthcare system, potentially explaining the creation of predictive health programs such as ARPA-H and AI forecasting infrastructure like the CDC’s CFA. The biosurveillance field’s biggest advocates also have a long history of stressing the importance of mass interagency data sharing, including between the public and private sectors, highlighting once again the cross-sectoral commitment to utilizing this data for both profit and national security.