The Trahan Report:  A Strong Privacy Foundation, An Unfinished Building

(by Pegah K. Parsi)

Privacy is not a compliance checkbox. It is a foundational human value and right that protects individual dignity, sustains freedom of thought, enables democratic participation, allows for healthy markets, and shields communities from the unchecked accumulation of power. When it erodes, so does trust in government and in one another. That is why the state of federal privacy law is not a bureaucratic concern. It impacts every American.

For over fifty years, the Privacy Act of 1974 has been the primary shield purporting to protect Americans from federal data overreach. But let’s be honest: it was written for an era of manila folders and filing cabinets. Today, we live in a world of AI-driven surveillance, massive data fusion, and an administration that has shown a renewed, aggressive interest in digging through personal data under the broad banner of "detecting fraud," most visibly through the Department of Government Efficiency (DOGE). The status quo is a sieve.

Against that backdrop, Representative Lori Trahan's February 2026 report, Privacy, Trust, and Effective Government: A Bipartisan Blueprint for Modernizing the Privacy Act, arrives at a critical moment. The Privacy Act governs how federal agencies collect, maintain, use, and share personal information about U.S. citizens and lawful permanent residents (but notably, not other immigrants), and it is failing: not slowly and around the edges, but fundamentally and structurally. And the consequences are being felt right now.

This post examines what the report gets right, where it falls short, and what a genuinely modern federal privacy framework would need to include. Our conclusion:  the Trahan report is a serious, substantive, and genuinely progressive contribution to the long-overdue project of Privacy Act reform. It moves the ball meaningfully forward. With a few revisions, it can revamp the filing cabinet law for the modern age.

THE PRIVACY ACT TODAY:  A LAW BUILT FOR FILE CABINETS

The Privacy Act of 1974 was a landmark achievement, passed in the wake of Watergate and COINTELPRO to protect Americans from a government that had weaponized personal information against its own citizens. But it was written for a world of paper files and mainframe computers. Its central concept of the "system of records," under which protections attach only to data retrieved by a personal identifier, made sense then. It is obsolete now.

Modern systems probabilistically infer identity, aggregate data from disparate sources, and draw sensitive conclusions about individuals and groups from seemingly innocuous inputs. Meanwhile, agencies have stretched the "routine use" exception, which allows data sharing without consent for purposes loosely related to the original collection, to the point of meaninglessness. Enforcement mechanisms are toothless: plaintiffs must prove "actual damages" for harms that are inherently difficult to quantify. The result is a law that looks protective on paper but provides increasingly thin coverage for Americans whose data is held by an increasingly powerful federal government.

WHY THIS MATTERS NOW:  DOGE, DATA FUSION, AND THE NEW SURVEILLANCE REALITY

DOGE's activities in 2025 demonstrated in real time what happens when there is no adequate legal framework. Unvetted political operatives accessed sensitive personal data at Treasury, SSA, NLRB, and Interior through the invocation of existing exceptions the law was never designed to accommodate at this scale. The Trump administration's stated interest in using federal data to detect "waste, fraud, and abuse" through cross-agency data integration, social media monitoring, and AI-driven analytics represents precisely the kind of governmental overreach the Privacy Act was meant to prevent but can no longer restrain.

The federal government is not merely storing files about Americans. It is building models of Americans, models that are predictive, probabilistic, and persistent. The current Privacy Act has no meaningful answer to that.

Critically, agencies are also buying from commercial data brokers what they could not constitutionally collect themselves (e.g., location data, social media profiles, facial recognition results), bypassing Fourth Amendment protections entirely. And older legal doctrines like "no reasonable expectation of privacy in public," developed when being in public meant being seen by a few people in one place at one time, are wholly inadequate to a world in which AI systems can reconstruct a detailed picture of your life from continuous, city-scale data collection. The technology has outrun the doctrine. Any serious reform must reckon honestly with that.

THE TRAHAN REPORT:  WHAT IT GETS RIGHT

The report earns real credit on multiple fronts. Its shift from a system-centric to a purpose-centric, harm-and-risk-based regulatory model is the right structural move, focusing regulatory attention on what agencies do with data rather than how they file it. The report correctly insists that Congress make clear that cognizable privacy harms extend well beyond financial loss. The current Act's emphasis on "actual damages" has made civil enforcement nearly useless precisely because the most serious privacy harms — the chilling of free expression, the loss of autonomy and dignity, the anxiety and attendant self-censorship of knowing one is under surveillance, the stigma of having sensitive personal information exposed, the concrete risks to safety that flow from data misuse — are not readily reducible to a dollar figure.

The proposal to organize coverage around "agency activity affecting privacy" (A3P), meaning any activity, regardless of the systems or data involved, with the potential to impact individual privacy, is intellectually sound, though the definitions need to sharpen the relationship among the A3P concept, purpose, and the implication of personally identifiable information to avoid interpretive confusion.

The proposal to eliminate or substantially narrow the routine use exception is one of the report's most important practical contributions. The algorithmic disgorgement concept — borrowed from FTC enforcement practice, requiring agencies to destroy not just unlawfully obtained data but any AI model trained on it — is workable and addresses the "privacy-washing" problem directly. The commercially available information (CAI) authorization framework, modeled on FedRAMP, is practical; standardized authorization, quality controls, and audit trails for government data purchases are long overdue. And the call for empowered Chief Privacy Officers reporting directly to agency heads is good governance design.

We note one definitional gap: the report's CAI definition appears limited to data already available for public sale or sharing. It should be broadened to encompass data that government agencies request directly and privately from organizations: proprietary datasets (like location data or patient admission records) that never hit the open market but feed the same surveillance apparatus.

WHERE THE REPORT MUST GO FURTHER

1. The Information Fiduciary Principle: Its Time Has Come

The single most important conceptual shift is a reconception of the relationship between the federal government and the people whose data it holds — from regulatory subject to trustee. The information fiduciary framework, developed by legal and ethical scholars, holds that certain entities, because of the nature of the trust relationship with those whose data they hold, take on special obligations of loyalty, care, and confidentiality analogous to those of doctors, lawyers, and financial advisors.

This does not require treating the federal government as a legal fiduciary in the technical sense. It requires Congress to enshrine a foundational principle:  personal data entrusted to the government is held in trust for the individual, not for the agency's convenience. It cannot be repurposed to harm the people it describes, at least not without an overriding, authorized, and articulated public necessity. The government has an affirmative duty to act in the interests of those whose information it holds. This principle should appear at the front of a modernized Privacy Act and should guide the interpretation of every subsequent provision.

2. The Law Enforcement, Intelligence, and Immigration Gap Cannot Stand

The report explicitly excludes law enforcement, intelligence, and immigration enforcement from its scope. We understand the legislative strategy, and we understand that a comprehensive solution is politically difficult. But we must be direct: the most acute privacy violations occurring right now (e.g., DHS integrating IRS and SSA data for immigration targeting, facial recognition deployed against people engaged in constitutionally protected activity, AI-driven profiling of individuals based on inferred characteristics) are happening precisely in these excluded domains. A Privacy Act that carves out the contexts where surveillance is currently most aggressive provides thin protection to the people who need it most.

The revised Act should establish minimum baseline protections across all federal activity, including civil immigration data, and should explicitly call for a dedicated legislative process to address law enforcement and national security data practices. At minimum, it should prohibit agencies from purchasing commercial data that would require a warrant if collected directly, closing the constitutional bypass that has become a central feature of the modern surveillance state.

3. AI Accountability Must Be Structural

The algorithmic disgorgement idea, important as it is, addresses only past violations. The Act also needs prospective obligations that attach before any high-risk AI system that interacts with personal data is deployed. Individuals should have an affirmative right to know when AI materially influenced a decision about them and a right to human review of adverse AI-assisted determinations. Public registries of high-risk government AI systems should be required.

4. Independent Oversight, Not Congressional Oversight

The report's proposal for real-time legislative branch telemetry access to agency systems captures the right instinct but places it in the wrong institutional home. A congressional body with real-time access to agency data systems is as susceptible to political weaponization as any other congressional power. The better model, demonstrated by data protection authorities in other countries, is a genuinely independent Federal Privacy Authority:  technical, legal, and ethical expert commissioners; layperson commissioners to represent the public; rulemaking authority; audit powers; enforcement capacity; and operational independence from both branches, with robust reporting obligations to each.

5. The Harm-Risk Taxonomy Needs Independent Stewardship

The proposal to segment Privacy Act requirements by harm and risk level is conceptually correct. But delegating the classification of statutory purposes to Congress, which would have to make granular, technically sophisticated determinations about data processing risk and then update them as technology evolves, asks too much of an institution poorly suited to the task. A standing Federal Privacy Authority, with civil society and technical expert representation, as discussed above, should maintain the harm-and-risk taxonomy. Congress should set the framework; an independent body should do the technical work of implementation and ongoing update. Without this, risk classification may become what routine use became:  a mechanism that sophisticated actors shape to their advantage over time.

Two things must be explicit, however Congress constructs this framework. First, the measure of harm and risk must be centered on short- and long-term impacts on individuals, communities, and society, not on agencies’ compliance posture. A data practice that creates low administrative burden for an agency but exposes a vulnerable community to surveillance, targeting, or discrimination is a high-harm practice. The framework should be designed to detect and constrain that kind of harm, not to protect agencies from accountability for it.

Second, Congress should design the framework to include a broad range of considerations when determining harm and risk. In other words, the Federal Privacy Authority should be able to weigh such factors as the sensitivity, volume, velocity, and variety of the data, as well as the social context and the novelty of the technology involved.

6. Privacy Principles and Standards: Expand the FedRAMP Analogy

The Act should also articulate updated federal privacy principles that reflect the modern era:  purpose specification, use limitation, affirmative individual rights to contest and delete information, and the stewardship obligations the information fiduciary concept captures. Just as FedRAMP establishes security baselines for cloud vendors serving the government, a modernized Act should establish privacy baselines and substantive data protection standards covering encryption, anonymization, data minimization, retention limits, and audit requirements across the entire federal data ecosystem, including contractors and AI vendors.
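
To make the analogy concrete, such baselines could be expressed in machine-checkable form and applied uniformly to agency and contractor systems, much as FedRAMP control baselines are applied to cloud vendors. What follows is a minimal sketch only; every field name and threshold is invented for illustration and appears nowhere in the report or in FedRAMP.

```python
from dataclasses import dataclass


@dataclass
class PrivacyPosture:
    """A hypothetical, machine-checkable privacy posture (illustrative only)."""
    encryption_at_rest: bool
    encryption_in_transit: bool
    retention_days: int            # how long personal data is kept
    minimization_reviewed: bool    # collection limited to the stated purpose
    access_audit_logging: bool     # every access to personal data is logged


def meets_floor(system: PrivacyPosture, max_retention_days: int = 365) -> bool:
    """True only if the system satisfies every requirement of the
    (invented) baseline floor."""
    return (
        system.encryption_at_rest
        and system.encryption_in_transit
        and system.retention_days <= max_retention_days
        and system.minimization_reviewed
        and system.access_audit_logging
    )


vendor_system = PrivacyPosture(
    encryption_at_rest=True,
    encryption_in_transit=True,
    retention_days=1095,           # three years: fails the illustrative floor
    minimization_reviewed=True,
    access_audit_logging=True,
)

print(meets_floor(vendor_system))  # False: retention exceeds the baseline
```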

7. De-identification Is Not Enough: Real Anonymization Standards Are Required

Current practice allows agencies to treat data as outside Privacy Act protections once obvious identifiers are removed. That standard is no longer adequate. Modern re-identification research has repeatedly shown that stripped datasets can be re-identified with high accuracy using combinations of attributes; in Latanya Sweeney's landmark study, ZIP code, birth date, and gender alone were sufficient to uniquely identify roughly 87 percent of the U.S. population. The availability of commercial datasets and AI inference tools makes this increasingly routine.
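
To see why, consider a minimal sketch of a linkage attack. All of the records below are invented; the point is only that removing names does nothing to prevent a join against a public dataset on the remaining quasi-identifiers.

```python
# Minimal linkage-attack sketch. All records are invented; removing
# names does not remove linkability via the remaining attributes.

deidentified_health = [  # names stripped, quasi-identifiers intact
    {"zip": "92101", "dob": "1984-03-12", "sex": "F", "diagnosis": "asthma"},
    {"zip": "92037", "dob": "1971-11-02", "sex": "M", "diagnosis": "diabetes"},
]

public_records = [  # a voter-roll-style public dataset with names
    {"name": "A. Rivera", "zip": "92101", "dob": "1984-03-12", "sex": "F"},
    {"name": "B. Chen", "zip": "92037", "dob": "1971-11-02", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")


def reidentify(deidentified, public):
    """Join two datasets on quasi-identifiers alone and recover names."""
    index = {tuple(r[q] for q in QUASI_IDENTIFIERS): r["name"] for r in public}
    matches = []
    for record in deidentified:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        if key in index:
            matches.append({**record, "reidentified_as": index[key]})
    return matches


for match in reidentify(deidentified_health, public_records):
    print(match)  # each "anonymous" record links back to a named person
```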

A modernized Act must distinguish clearly between de-identification (removing direct identifiers) and genuine anonymization (eliminating realistic re-identification risk), establish NIST-developed standards for each, and make clear that an agency cannot simply declare data "de-identified" and treat it as exempt. The operative standard should be realistic re-identification risk against sophisticated adversaries, not the absence of a name field.
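
One way to operationalize "realistic re-identification risk" is to require measurable properties such as k-anonymity: the size of the smallest group of records that share a given combination of quasi-identifiers. The sketch below, again with invented data, is illustrative only; the report does not prescribe k-anonymity or any particular metric, and a real standard would also have to account for auxiliary datasets and inference attacks.

```python
from collections import Counter

QUASI_IDENTIFIERS = ("zip", "dob", "sex")


def k_anonymity(records, quasi_identifiers=QUASI_IDENTIFIERS):
    """Size of the smallest group of records sharing the same
    quasi-identifier values. k == 1 means at least one record is
    unique on those attributes and therefore trivially linkable."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())


records = [  # invented "de-identified" records: names removed
    {"zip": "92101", "dob": "1984-03-12", "sex": "F"},
    {"zip": "92101", "dob": "1984-03-12", "sex": "F"},
    {"zip": "92037", "dob": "1971-11-02", "sex": "M"},  # unique -> k = 1
]

print(k_anonymity(records))  # 1: stripping names did not anonymize this data
```

A k of 1 is an unambiguous red flag, but a high k is no guarantee; that is precisely why the standards work belongs with NIST rather than with each agency's self-certification.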

CONCLUSION: A STRONG FOUNDATION, AN UNFINISHED BUILDING

The Trahan report is serious, substantive, and genuinely progressive. It does not nibble at the margins: it proposes structural changes to the Act's regulatory model, meaningful enforcement improvements, and innovative new tools for governing commercial data purchases and AI risks. It is a meaningful advance.

With a few revisions, the framework can become a working model for protecting privacy in 2026 and beyond — one grounded in the information fiduciary principle, equipped with real anonymization standards, extended to the law enforcement and immigration contexts where surveillance is most acute, and governed by genuinely independent oversight. None of these additions are inconsistent with the report's architecture. The framework is sound. The work of completing it begins now.

Pegah K. Parsi is a privacy attorney and advocate. In addition to consulting on law enforcement impacts on privacy, she works on education, research, and health privacy. Pegah previously served two terms as Vice-Chair of the City of San Diego’s Privacy Advisory Board.
