Difference between revisions of "Sparrow"

Latest revision as of 16:12, 12 January 2021

The goal of Google's Privacy Sandbox is to prevent marketers from engaging a particular audience in a particular context.

Criteo's SPARROW proposal provides the same functionality as Google's TURTLEDOVE, but allows multiple organizations, rather than just Google, to offer the trusted "gatekeeper" service.[1]

Google acknowledges that Criteo's SPARROW proposal improves on TURTLEDOVE by recognizing the need for server-side processing. The trusted gatekeeper server hosts bidding models and executes auction logic. Relative to the in-browser solution proposed by TURTLEDOVE, SPARROW provides more flexibility in bidding and consumes fewer browser resources.[2]
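As a rough illustration of this split of responsibilities, the sketch below models a gatekeeper that stores buyer bidding models and runs the auction server-side, returning only the winning ad to the caller. All names here (Gatekeeper, BidModel, upload_model, run_auction) are hypothetical and are not part of the SPARROW specification or Criteo's implementation; this is a minimal sketch of the idea only.

 # Minimal sketch of a SPARROW-style gatekeeper; names are hypothetical.
 from dataclasses import dataclass
 from typing import Callable, Dict, Optional
 
 @dataclass
 class AdCandidate:
     buyer_id: str
     creative_url: str
     bid_cpm: float  # price the buyer is willing to pay, in CPM
 
 # A "bidding model" is represented here as a function from
 # (interest group, page context) to an ad candidate; real models
 # would be far larger and more opaque.
 BidModel = Callable[[str, str], Optional[AdCandidate]]
 
 class Gatekeeper:
     """Trusted server that hosts buyer models and runs the auction,
     so the publisher never sees the audience (interest-group) data."""
 
     def __init__(self) -> None:
         self._models: Dict[str, BidModel] = {}
 
     def upload_model(self, buyer_id: str, model: BidModel) -> None:
         # Buyers push their bidding logic to the gatekeeper.
         self._models[buyer_id] = model
 
     def run_auction(self, interest_group: str, page_context: str) -> Optional[AdCandidate]:
         # Ask every hosted model for a bid, then keep the highest one.
         bids = []
         for model in self._models.values():
             candidate = model(interest_group, page_context)
             if candidate is not None:
                 bids.append(candidate)
         return max(bids, key=lambda c: c.bid_cpm, default=None)
 
 # Example: one buyer registers a trivial model; the ad request triggers
 # a server-side auction and only the winning creative comes back.
 gk = Gatekeeper()
 gk.upload_model(
     "buyer-a",
     lambda group, ctx: AdCandidate("buyer-a", "https://ads.example/ad1", 2.5)
     if group == "high-value-customer" else None,
 )
 print(gk.run_auction("high-value-customer", "news-site/sports"))

Because the auction runs on the gatekeeper rather than in the browser, buyers can use arbitrarily large models without shipping them to the client, which is the flexibility and resource argument made above.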

Impact

By limiting the number of organizations that can offer gatekeeper services, the proposal does not give publishers an open choice of providers in the market.

By disclosing audience information, publishers and buyers leak the intellectual property associated with that web client (e.g., that it represents a high-value customer) to the gatekeeper organization. Given the potential conflict of interest, Criteo suggests that gatekeepers should not be allowed to compete with any of their clients.

Open Questions

  • If different buyers send their models to different gatekeepers, how many gatekeepers can one publisher query to retrieve auction information? (See the fan-out sketch after this list.)
  • What is the process for determining which organizations' servers are trusted?
  • What is the time-delay in providing the data necessary for publishers to optimize their revenue?
  • What is the time-delay in providing the data necessary for marketers to optimize their media spend?
  • How scalable and costly is centralizing the processing of the ecosystem into a few organizations, when buyers often update large models (tens of gigabytes) multiple times per day? (See the rough estimate after this list.)
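To make the first question above concrete, the following sketch shows a publisher ad request fanning out to several gatekeepers in parallel and keeping the best response. The gatekeeper stand-ins and the response shape are assumptions for illustration only; SPARROW does not define this interface.

 # Illustrative fan-out from a publisher to several gatekeepers.
 # The gatekeeper functions and response shape are assumptions,
 # not part of the SPARROW proposal.
 from concurrent.futures import ThreadPoolExecutor
 from typing import Dict, List, Optional
 
 # Stand-ins for remote gatekeepers: each returns its local auction
 # winner as a small dict, or None if it has no eligible ad.
 def gatekeeper_a(context: str) -> Optional[Dict]:
     return {"creative_url": "https://a.example/ad", "bid_cpm": 2.1}
 
 def gatekeeper_b(context: str) -> Optional[Dict]:
     return {"creative_url": "https://b.example/ad", "bid_cpm": 3.4}
 
 def gatekeeper_c(context: str) -> Optional[Dict]:
     return None  # no bid for this request
 
 GATEKEEPERS = [gatekeeper_a, gatekeeper_b, gatekeeper_c]
 
 def request_ad(page_context: str) -> Optional[Dict]:
     # Query every gatekeeper in parallel and keep the highest bid.
     # Each additional gatekeeper is another network round trip that
     # the publisher must wait for before rendering the ad slot.
     with ThreadPoolExecutor(max_workers=len(GATEKEEPERS)) as pool:
         responses: List[Optional[Dict]] = list(
             pool.map(lambda gk: gk(page_context), GATEKEEPERS)
         )
     bids = [r for r in responses if r is not None]
     return max(bids, key=lambda r: r["bid_cpm"], default=None)
 
 print(request_ad("news-site/sports"))
 # {'creative_url': 'https://b.example/ad', 'bid_cpm': 3.4}

The same fan-out applies in the other direction when buyers distribute models. Under purely illustrative numbers, a 20 GB model updated three times a day and pushed to five gatekeepers amounts to roughly 300 GB of transfer per day for a single buyer, which is the scale behind the last question above.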


References