Privacy Budget

"Privacy budget" (also called a "privacy loss parameter" or denoted as epsilon (ε)) controls how much noise (or fake data) is added to the original dataset.
The goal of Google's “privacy budget” is to reduce organizations' ability to create a statistical identifier from web client technographics, signals that are also often used to detect fraud.
Privacy budget suggests that each organization will be given a limited allowance of this information per user session. Once the Google-specific limit on access to this information has been used up, Google will "stop sending correct information, substituting it with imprecise or noisy results or a generic result."[1]
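The mechanism described above can be pictured as a per-session meter: each read of an identifying surface spends budget, and once the budget is exhausted the browser substitutes a generic result. The sketch below is a hypothetical model of that behavior; Google never published the actual costs or accounting rules.

```typescript
// Hypothetical sketch of per-session budget accounting; the actual
// Privacy Budget accounting rules were never specified by Google.

type Surface = "userAgent" | "screenResolution" | "fonts" | "ipAddress";

// Assumed per-surface costs and generic fallbacks, for illustration only.
const COST: Record<Surface, number> = {
  userAgent: 2, screenResolution: 1, fonts: 3, ipAddress: 4,
};
const GENERIC: Record<Surface, string> = {
  userAgent: "Mozilla/5.0 (generic)",
  screenResolution: "unknown",
  fonts: "default",
  ipAddress: "0.0.0.0",
};

class SessionBudget {
  private remaining: number;
  constructor(total: number) { this.remaining = total; }

  // Return the true value while budget remains; afterwards, substitute
  // "a generic result", as the CMA notice describes.
  read(surface: Surface, trueValue: string): string {
    if (this.remaining >= COST[surface]) {
      this.remaining -= COST[surface];
      return trueValue;
    }
    return GENERIC[surface];
  }
}

const budget = new SessionBudget(5);
budget.read("userAgent", navigator.userAgent);   // true value, spends 2
budget.read("fonts", "Arial, Helvetica, Verdana"); // true value, spends 3
budget.read("ipAddress", "203.0.113.7");         // exhausted -> generic result
```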
Particular technographics Google wants to prevent other organizations from accessing include the following (a sketch after the list shows how such signals combine into a statistical identifier):
- Detailed user agent strings, including operating system and browser minor version;
- Screen resolution, installed system fonts, and similar data;
- Easily available client IP address information.
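As noted above, here is a rough sketch of how a script could combine these surfaces into a statistical identifier. The signal set and the hashing step are illustrative assumptions; real fingerprinting libraries probe many more surfaces (for example, installed fonts via canvas measurement, and the IP address server-side).

```typescript
// Rough sketch of combining technographic surfaces into a statistical
// identifier (a "fingerprint"). Runs in a browser context; the exact
// signals and hashing here are illustrative, not any real library's method.

async function fingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,                                      // detailed UA string
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // screen resolution
    Intl.DateTimeFormat().resolvedOptions().timeZone,         // extra entropy
    String(navigator.hardwareConcurrency ?? ""),
    // Installed fonts and the client IP would typically be gathered via
    // font probing or server-side; omitted here for brevity.
  ].join("|");

  // Hash the concatenated signals with SHA-256 (Web Crypto API).
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

fingerprint().then((id) => console.log("statistical identifier:", id));
```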
The exact information that will count against this budget remains unknown.[2]
Impact
Restricting this information from other software providers risks preventing organizations from detecting non-human traffic.
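For example, fraud-detection systems often cross-check these signals for consistency: a client whose user agent claims one device class while its other signals disagree is a bot candidate. The check below is a deliberately simplified assumption of how such logic works, not any vendor's actual method.

```typescript
// Simplified consistency check of the kind fraud-detection systems use;
// real systems combine many more signals. Thresholds are illustrative.

interface ClientSignals {
  userAgent: string;
  screenWidth: number;
  screenHeight: number;
  ipDataCenter: boolean; // e.g. from an IP-reputation lookup
}

function looksNonHuman(c: ClientSignals): boolean {
  const claimsIphone = /iPhone/.test(c.userAgent);
  // No iPhone ships with a desktop-sized screen; a mismatch suggests
  // a spoofed user agent.
  const screenMismatch = claimsIphone && c.screenWidth > 1024;
  // Traffic from data-center IP ranges is rarely a human on a phone.
  return screenMismatch || (claimsIphone && c.ipDataCenter);
}

console.log(looksNonHuman({
  userAgent: "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X)",
  screenWidth: 1920, screenHeight: 1080, ipDataCenter: true,
})); // true: the signals disagree with the claimed device
```

If the browser substitutes generic values for these surfaces once the budget runs out, checks like this lose their discriminating power, which is the risk described above.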
Another potential impact of removing this information is a degraded end-user experience, since sites read these same signals to adapt content and layout to a device's capabilities.
Perhaps the largest impact of this proposal is its discrimination against smaller publishers, who rely on supply-chain partners to operate and grow their businesses. Facing information asymmetries against larger, more established rivals, they benefit from pooling information with other small publishers to provide comparable user experiences. By impairing scaled access to interoperable, pseudonymous information, and with it their ability to work with supply-chain partners, the proposal will likely push many of these smaller publishers to "host" their content on larger publishers instead of on their own websites, further centralizing access to publisher content.

References

1. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/992975/Notice_of_intention_to_accept_binding_commitments_offered_by_Google_publication.pdf, paragraph 5.23
2. https://iabtechlab.com/blog/explaining-the-privacy-sandbox-explainers