Fledge


Google's First Locally-Executed Decision over Groups Experiment (FLEDGE) is a proposal to measure the effectiveness of Google's Turtledove auction mechanism as a viable replacement for the interoperable identifiers that support the decentralized, open web.[1]

FLEDGE's goal is to quantify the economic impact of Turtledove on publishers.

Experiment Design

FLEDGE will rely upon the following steps:

  • Pre-auction Audience Segmentation
    • Marketers will periodically (maximum of once per day in this experiment) send two sets of information to a Google-specified endpoint
      • The mechanics of sending this information are described in Google's forthcoming documentation on worklets.
      • The first information set contains logic rules that determine marketer-defined audience segmentation
        • Each Chrome browser will fetch each marketer's desired segmentation logic
        • Each Chrome browser will process each marketer's audience segmentation logic and report back to the Google-controlled server, along with its unique identifier, whether or not it qualifies for the audience segment
        • The Google-controlled server will count the number of distinct identifiers belonging to each audience segment
          • If the number of identifiers exceeds a Google-defined threshold, the server will notify those browsers that they may use the audience segment in auction logic
  • Pre-auction Auction Desirability Logic
    • In addition to audience segmentation logic, marketers will send logic that determines marketer auction desirability
      • Marketer-specific desirability logic can include ad size, publisher domain, prior frequency of exposure to a given set of ads, and audience segmentation
      • Marketers will also send budget information per campaign to a Google-controlled trusted server
    • Each Chrome browser will separately request information from a Google-controlled trusted server to fetch marketer-specific desirability logic
      • The Google-controlled trusted server will apply the marketer desirability logic to generate a bid for each combination of ad size, audience information, and context information per campaign, independent of the current context
  • Pre-auction Publisher ad slot implementation
    • Publishers will implement a Fenced Frame to query the browser APIs for ads and render the resulting ad
      • After the experiment phase, the Fenced Frame will not communicate any information about the winning ad to the publisher
    • Publishers will load into the Chrome browser logic called "worklets" to select which bid response will win the on-device auction
      • Publisher logic can adjust desirability of each bid response, based on price and other factors
        • Publisher desirability logic can filter which marketer buying platforms can compete in the auction
        • Publisher can also apply an out-of-band creative review process to be used as an input into this desirability logic
  • Auction Mechanics
    • When triggered by the Fenced Frame, the Chrome browser will send a bid request to a limited number of marketers' buying platforms (e.g., DSPs) containing only the context of the given ad slot
    • DSPs receiving the Chrome browser request for a bid will determine whether they want to return a bid response
      • DSPs will return separate bids: one based on context alone, and others for each combination of attributes the browser may or may not hold
    • Publishers may also send to the Fenced Frame their most desirable ad from their own direct sales process, if they have one
    • The Chrome browser conducts an on-device auction to determine a local winning ad
      • The Chrome browser will filter the returned bids based on the presence of audience attributes
      • The Chrome browser will apply the publisher desirability logic to choose the on-device auction's winning ad
      • The Chrome browser will compare this ad to the publisher's direct-sold ad, if any, and apply the publisher desirability logic to choose the final winning ad
  • Post-auction
    • During this experiment, the browser will send reporting data to pre-specified publisher and buyer endpoints with event-level data on the outcome of the auction
      • After this experiment, the browser will not send event-level data to the publisher or marketer
    • Marketer buying platforms that lose auctions will get access to aggregate metrics on some time-delayed basis
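
The segment-counting step described under Pre-auction Audience Segmentation can be sketched as follows. This is a minimal simulation, not the proposal's actual implementation; the threshold value and the names `SEGMENT_THRESHOLD`, `eligible_segments`, and the report format are all illustrative assumptions.

```python
from collections import defaultdict

SEGMENT_THRESHOLD = 100  # assumed Google-defined minimum audience size

def eligible_segments(reports):
    """reports: iterable of (browser_id, segment) pairs sent by browsers.

    Returns the set of segments whose distinct-identifier count meets the
    threshold; per the design above, only these segments may then be used
    in auction logic."""
    members = defaultdict(set)
    for browser_id, segment in reports:
        members[segment].add(browser_id)  # count distinct identifiers, not reports
    return {seg for seg, ids in members.items() if len(ids) >= SEGMENT_THRESHOLD}
```

Counting distinct identifiers (rather than raw reports) matters: a single browser reporting many times must not make a segment appear large enough to clear the threshold.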
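
The trusted server's bid pre-computation described under Pre-auction Auction Desirability Logic can be sketched as below: bids are generated for every combination of ad size, audience segment, and context bucket per campaign, independent of any live request. The desirability function, the campaign fields, and the budget guardrail are illustrative assumptions.

```python
from itertools import product

def precompute_bids(campaign, ad_sizes, segments, contexts, desirability):
    """Apply marketer desirability logic to every attribute combination,
    capping each bid at the campaign's assumed per-bid budget limit."""
    bids = {}
    for size, seg, ctx in product(ad_sizes, segments, contexts):
        bid = desirability(size, seg, ctx)
        bids[(size, seg, ctx)] = min(bid, campaign["max_bid"])  # budget guardrail
    return bids
```

For example, a marketer whose logic values the "shoes" segment at 3.0 under a 2.0 budget cap would see that bid clamped to 2.0 while other combinations pass through unchanged.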
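
The on-device auction described under Auction Mechanics can be sketched as follows: filter returned bids by the audience attributes the browser actually holds, apply publisher desirability logic (which can also filter bids outright), then compare the winner against any direct-sold ad. The bid field names and the scoring convention (a non-positive score filters a bid) are illustrative assumptions.

```python
def run_auction(bids, browser_segments, publisher_score, direct_sold=None):
    """Simulate the browser's local auction over DSP bid responses."""
    # Keep context-only bids (no segment required) and bids whose required
    # segment is actually present in this browser.
    eligible = [b for b in bids
                if b.get("segment") is None or b["segment"] in browser_segments]
    # Publisher desirability logic can re-weight or filter (score <= 0) bids.
    scored = [(publisher_score(b), b) for b in eligible]
    scored = [(s, b) for s, b in scored if s > 0]
    winner = max(scored, default=None, key=lambda sb: sb[0])
    # Compare against the publisher's direct-sold ad, if any, under the
    # same desirability logic.
    if direct_sold is not None:
        ds = (publisher_score(direct_sold), direct_sold)
        if winner is None or ds[0] > winner[0]:
            winner = ds
    return winner[1] if winner else None
```

Note that bids requiring a segment the browser does not hold never reach scoring, which mirrors the filtering step described above.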
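
The post-auction reporting for losing buying platforms can be sketched as below: only aggregate counts are released, and only after a time delay. The event shape, delay semantics, and function name are illustrative assumptions; the proposal does not specify them.

```python
from collections import Counter

def aggregate_loss_report(events, now, delay):
    """events: (timestamp, dsp, campaign) tuples for lost auctions.

    Only events at least `delay` old are released, and only as aggregate
    counts per (dsp, campaign), never as individual event records."""
    released = [(dsp, campaign) for ts, dsp, campaign in events
                if now - ts >= delay]
    return Counter(released)
```

The choice of `delay` is exactly the open question raised below about quantifying Turtledove's impact on marketers' valuation of inventory.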

Impact

Given FLEDGE's design, the following impacts on marketing effectiveness will not be measured:

  • Cohort-level vs attribute-level audience inputs to buyer algorithms
  • Aggregate vs event-level feedback to buyer algorithms
  • Time-delayed vs real-time event-level feedback to buyer algorithms

Open Questions

  • What time delay, if any (e.g., on lost bids), will be used to quantify the impact of Turtledove on marketers' value of inventory?
  • What limits will be placed on marketer audience segmentation logic?
  • Who will pay for operating the Google-controlled servers that process marketer audience segmentation logic?
  • How can marketers and their agents protect their intellectual property from being disclosed to Google?
  • What is the maximum number of DSPs that can operate in the ecosystem under a model where the browser must directly contact each for bids?
  • What is the acceptable level of impairment to marketing effectiveness, such that marketers will not reduce payments to publishers?
  • Given publishers will be monetizing the same ad slot with existing monetization and Chrome-based monetization, how will the metrics associated with each be reported for comparison?
  • Why are we not also evaluating the metrics proposed by Teetar, such as measuring the experience and perception of users exposed to cohort-based advertising?

See Also

Teetar

References