Revision as of 17:20, 24 February 2021

Google's First Locally-Executed Decision over Groups Experiment (FLEDGE) is a proposal to measure whether Google's Turtledove auction mechanism is a viable replacement for the interoperable identifiers that support the decentralized, open web.[1]

FLEDGE aims to quantify the economic impact of Turtledove on publishers.

Experiment Design

FLEDGE will rely upon the following steps:

  • Pre-auction Audience Segmentation
    • Marketers will periodically (maximum of once per day in this experiment) send two sets of information to a Google-specified endpoint
      • The mechanics of sending this logic information are described by their forthcoming documentation on worklets.
      • The first information set contains logic rules that determine marketer-defined audience segmentation
        • Each Chrome browser will fetch the marketer-desired segmentation logic
        • Each Chrome browser will process each marketer's audience segmentation logic and send its unique identifier, along with whether or not it qualifies for the audience segment, to the Google-controlled server
        • The Google-controlled server will count the distinct number of identifiers belonging to each audience segment
          • If the number of identifiers exceeds a Google-defined threshold, then this server will notify these browsers that they may use such audience segments for auction logic
        • Each audience segment will have a maximum lifespan of 30 days
  • Pre-auction Auction Desirability Logic
    • In addition to audience segmentation logic, marketers will send logic that determines marketer auction desirability
      • Marketer-specific desirability logic can include ad size, publisher domain, prior frequency of exposure to a given set of ads, and audience segmentation
      • Marketers will also send budget information per campaign to a Google-controlled trusted server
    • Each Chrome browser will separately request information from a Google-controlled trusted server to fetch marketer-specific desirability logic
      • The Google-controlled trusted server will apply the marketer desirability logic to generate a bid for each combination of ad size, audience information and context information per campaign independent of current context
    • Publishers may conduct an out-of-band creative review process to pre-approve particular creatives
      • The Chrome browser will only allow publishers to render ads that have previously been eligible to win auctions for a minimum number of distinct browser identifiers; this is the second set of browser information that must be sent to a Google-controlled server to compute distinct counts
  • Pre-auction Publisher ad slot implementation
    • Publishers will implement a Fenced Frame to query the browser APIs for ads and render the resulting ad
      • After the experiment phase, the Fenced Frame will not communicate any information about the winning ad to the publisher
    • Publishers will load into the Chrome browser logic called "worklets" to select which bid response will win the on-device auction
      • Publisher logic MUST whitelist each buyer's access to audience segments
      • Publisher logic can adjust desirability of each bid response, based on price and other factors
        • Publisher desirability logic can filter which marketer buying platforms can compete in the auction
        • Publisher can also apply an out-of-band creative review process to be used as an input into this desirability logic
  • Auction Mechanics
    • When triggered by the Fenced Frame, the Chrome browser will send a bid request to a limited number of marketers' buying platforms (e.g., DSPs) containing only the context of the given ad slot
    • DSPs receiving the Chrome browser request for a bid will determine whether they want to return a bid response
      • DSPs calculate in real-time the bid for a context-only ad, while the audience-out-of-context bid is calculated by the desirability logic they posted to the Google designated end point
    • Publishers may also send to the Fenced Frame their most desirable ad from their own direct sales process, if they have one
    • The Chrome browser conducts an on-device auction to determine a local winning ad
      • The Chrome browser will filter the returned bids based on the presence of audience attributes
      • The Chrome browser will apply the publisher desirability logic to choose the on-device auction winning ad
      • The Chrome browser will then compare this ad to the publisher's direct-sold ad, if any, and apply the publisher desirability logic to select the final winning ad
  • Post-auction
    • During the experiment phase, the Chrome browser will send reporting data to pre-specified publisher and buyer end points with event level data as to the outcome of the auction
      • After the experiment phase, the Chrome browser will not send event-level data to the publisher or marketer
        • It will "not [be] possible for any code on the publisher page to inspect the winning ad or otherwise learn about its contents"
      • In neither phase will the Fenced Frame allow publisher or buyer browser identifiers to be associated with the event-level data
    • Marketer buying platforms that lose auctions will get access to aggregate metrics on a yet-to-be-determined, time-delayed basis
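
The distinct-count gating in the audience-segmentation step above can be sketched in a few lines. This is a minimal toy model, not FLEDGE's actual server: the threshold value, class name, and segment names are all illustrative assumptions, since the proposal only says the threshold is Google-defined.

```python
from collections import defaultdict

# Hypothetical k-anonymity threshold; FLEDGE leaves the real value Google-defined.
SEGMENT_THRESHOLD = 100

class SegmentCountingServer:
    """Toy model of the Google-controlled server that counts distinct
    browser identifiers per marketer-defined audience segment."""

    def __init__(self, threshold=SEGMENT_THRESHOLD):
        self.threshold = threshold
        self.members = defaultdict(set)  # segment name -> set of browser ids

    def report(self, browser_id, segment, qualifies):
        """A browser reports whether it qualifies for a segment."""
        if qualifies:
            self.members[segment].add(browser_id)

    def usable_segments(self):
        """Segments whose distinct-identifier count meets the threshold;
        browsers may use these segments in auction logic."""
        return {s for s, ids in self.members.items() if len(ids) >= self.threshold}

# 150 browsers qualify for "sports_fans", only 10 for "rare_niche".
server = SegmentCountingServer()
for i in range(150):
    server.report(f"browser-{i}", "sports_fans", qualifies=True)
for i in range(10):
    server.report(f"browser-{i}", "rare_niche", qualifies=True)

print(server.usable_segments())  # → {'sports_fans'}
```

Counting distinct identifiers (a set, not a tally) matters here: the same browser reporting twice must not push a small segment over the threshold.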
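
The auction-mechanics steps above (filter bids lacking audience attributes, apply publisher desirability logic, then compare against a direct-sold ad) can be sketched as follows. The bid fields, buyer names, prices, and the price-as-desirability scoring rule are all illustrative assumptions, not part of the proposal.

```python
def publisher_desirability(bid, allowed_buyers):
    """Hypothetical publisher worklet logic: whitelist buying platforms,
    then score the remaining bids (here, desirability is simply the price)."""
    if bid["buyer"] not in allowed_buyers:
        return None  # filtered out of the auction
    return bid["price"]

def run_on_device_auction(bids, allowed_buyers, direct_sold=None):
    # Filter returned bids based on the presence of audience attributes.
    candidates = [b for b in bids if b.get("audience_segments")]
    best_score, best_bid = None, None
    for bid in candidates:
        score = publisher_desirability(bid, allowed_buyers)
        if score is not None and (best_score is None or score > best_score):
            best_score, best_bid = score, bid
    # Compare the local winner to the publisher's direct-sold ad, if any.
    if direct_sold is not None:
        score = publisher_desirability(direct_sold, allowed_buyers)
        if score is not None and (best_score is None or score > best_score):
            best_bid = direct_sold
    return best_bid

bids = [
    {"buyer": "dsp-a", "price": 2.0, "audience_segments": ["sports_fans"]},
    {"buyer": "dsp-b", "price": 3.5, "audience_segments": []},         # no audience: filtered
    {"buyer": "dsp-c", "price": 1.5, "audience_segments": ["autos"]},  # not whitelisted
]
winner = run_on_device_auction(
    bids,
    allowed_buyers={"dsp-a", "direct"},
    direct_sold={"buyer": "direct", "price": 1.0, "audience_segments": ["house"]},
)
print(winner["buyer"])  # → dsp-a (its 2.0 bid beats the 1.0 direct-sold ad)
```

Note that the highest raw bid (dsp-b at 3.5) loses because it carries no audience attributes, mirroring the browser-side filtering step above.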

Impact

Given FLEDGE's design, the following impacts on marketing effectiveness will not be measured:

  • Cohort vs attribute level audience inputs to buyer algorithms
  • Aggregate vs event-level feedback to buyer algorithms
  • Time-delayed vs real-time event-level feedback to buyer algorithms

Open Questions

  • Marketer Effectiveness
    • What is the acceptable level of impairment to marketing effectiveness, such that marketers will not reduce payments to publishers?
    • What limits on marketer audience segmentation logic will be placed?
    • How can marketers and their agents protect their intellectual property (audience data, campaign budgets) from being disclosed to Google?
    • What time delay, if any (e.g., on lost bids), will be used to quantify the impact of Turtledove on marketers' value of publisher inventory?
    • How frequently can marketers adjust budgets per campaign (e.g., time delay after they realize that a campaign is not performing well)?
  • Publisher Revenues
    • Given publishers will be monetizing the same ad slot with existing monetization and Chrome-based monetization, how will the metrics associated with each be reported for comparison?
    • Google states publishers must pre-approve each buyers' audiences (or allow all), but how can this be accomplished without buyers leaking intellectual property to sellers?
    • Google states publishers must pre-approve each buyers' audiences (or allow all), but how feasible is it to send large lists of approvals to the browser on each ad request?
    • How does the publisher learn the clearing price of the ads on a per advertiser basis, so that it can negotiate direct deals with them?
    • Why is Google mandating the order of operations which has the Chrome browser controlling the final auction, rather than the publisher's monetization platform (e.g., Publisher Ad Server or SSP) which otherwise could compare the locally winning ad and its bid price to other demand for this same ad slot and select the winning ad to render?
  • Creative Review
    • How does the publisher perform the ad creative review process across all live ads in the ecosystem given they do not know which ones will win auctions on their websites?
    • Given publishers will no longer be able to see which creative renders on its property in real-time, how do they prevent bad actors from swapping a pre-approved creative with a file of the same name but containing a new image?
    • Because FLEDGE proposes that ad creatives be cached ahead of time, so that the browser's request for the creative cannot signal any information to buyers, how can the browser cache video creatives?
  • Ecosystem Impact
    • Who will pay for operating the Google-controlled servers that process marketer audience segmentation logic?
    • How do Google-controlled servers become trusted and will this same process be available to all rivals?
      • Related Fledge issue: https://github.com/WICG/turtledove/issues/120
    • What is the maximum number of DSPs that can operate in the ecosystem under a model where the browser must directly contact each for bids?
  • Experiment Goals
    • Why is Google not evaluating the metrics proposed by Teetar, such as measuring the experience and perception of users exposed to cohort-based advertising?
      • Related Fledge issue: https://github.com/WICG/turtledove/issues/95

See Also

Teetar

References