"Privacy budget" (also called a "privacy loss parameter" or denoted as epsilon (ε)) controls how much noise (or fake data) is added to the original dataset.
The goal of Google's "privacy budget" is to reduce organizations' ability to create a statistical identifier from the web-client technographics often used to detect fraud.
Under the proposal, each user session will be allotted a limit on access to this information. Once that Google-specific limit has been used up, Google will "stop sending correct information, substituting it with imprecise or noisy results or a generic result."
Particular technographics Google wants to prevent other organizations from accessing include:
- Detailed user agent strings including operating system and browser minor version;
- Screen resolution, installed system fonts, and similar data;
- Easily available client IP address information.
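The exhaustion behavior described above can be sketched as a per-session accounting object: each read of a fingerprinting surface (user agent, screen resolution, fonts, and so on) is charged against the session's budget, and once the budget is spent a generic value is returned instead of the real one. All class names, costs, and generic values below are illustrative assumptions, not Google's actual API.

```python
class PrivacyBudgetSession:
    """Hypothetical per-session privacy budget accounting.

    Each surface read has a cost; while budget remains, the real value
    is returned and the cost is deducted.  Once the budget is spent,
    a generic substitute is returned instead.
    """

    def __init__(self, budget: float):
        self.remaining = budget

    def read(self, surface: str, real_value: str,
             cost: float, generic: str) -> str:
        if self.remaining >= cost:
            self.remaining -= cost
            return real_value
        return generic  # budget exhausted: generic/noisy result

# Illustrative usage: the second read exceeds the remaining budget.
session = PrivacyBudgetSession(budget=1.0)
ua = session.read("user_agent", "Chrome 124.0 on macOS 14.4",
                  cost=0.6, generic="Chrome on macOS")
screen = session.read("screen", "2560x1440",
                      cost=0.6, generic="common resolution")
```

In this sketch `ua` comes back exact while `screen` falls back to the generic value, mirroring the "stop sending correct information" behavior quoted above.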
Exactly which information will count against this budget remains unknown.
Removing this ability risks preventing organizations from detecting non-human (bot) traffic.
Another potential impact of removing this information is a degraded end user experience.
Perhaps the largest impact of this proposal is the discrimination against smaller publishers who rely on supply-chain partners to operate and grow their businesses. Facing information asymmetries against larger, more established rivals, these publishers benefit from pooling information with other small publishers to provide comparable user experiences. By impairing scaled access to interoperable, pseudonymous information, the proposal reduces their ability to work with supply-chain partners; many smaller publishers will likely need to migrate their content to "host" it on larger publishers' sites instead, further centralizing access to publisher content.