Fledge
Google's First Locally-Executed Decision over Groups Experiment (FLEDGE) is a proposal to measure the effectiveness of Google's Turtledove auction mechanism as a viable replacement for the interoperable identifiers that support the decentralized, open web.[1]
One goal of FLEDGE is to quantify the economic impact of Turtledove on publishers.
Experiment Design
FLEDGE will rely upon the following steps:
- Pre-auction Audience Segmentation
  - Marketers will periodically (maximum of once per day in this experiment) send two sets of information to a Google-specified endpoint
    - The mechanics of sending this logic are described in Google's forthcoming documentation on worklets
    - The first information set contains logic rules that determine marketer-defined audience segmentation
      - Each Chrome browser will fetch the marketer's desired segmentation logic (see the sketch below)
      - Each Chrome browser will process each marketer's audience segmentation logic and send its unique identifier to the Google-controlled server, along with whether or not it qualifies for the audience segment
      - The Google-controlled server will count the number of distinct identifiers belonging to each audience segment
        - If the number of identifiers exceeds a Google-defined threshold, this server will notify those browsers that they may use that audience segment in the marketer's desirability logic
      - Each audience segment will have a maximum lifespan of 30 days
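The segmentation step corresponds to the explainer's joinAdInterestGroup call, made on the marketer's site. The following is a minimal TypeScript-flavored sketch based on the FLEDGE explainer as written in 2021; field names such as biddingLogicUrl and dailyUpdateUrl followed the explainer at the time and have since evolved, and all domains are placeholders.

```typescript
// Sketch of the marketer-side "join" call, per the 2021 FLEDGE explainer.
// The API was experimental and not in TypeScript's DOM typings, so it is
// declared via a cast; all URLs and names are placeholders.
const nav = navigator as Navigator & {
  joinAdInterestGroup(group: unknown, durationSeconds: number): void;
};

const athleticShoesSegment = {
  owner: 'https://dsp.example',                       // buyer platform owning the segment
  name: 'athletic-shoes',                             // marketer-defined audience segment
  biddingLogicUrl: 'https://dsp.example/bid.js',      // marketer desirability logic (a worklet)
  dailyUpdateUrl: 'https://dsp.example/update',       // refreshed at most once per day
  trustedBiddingSignalsUrl: 'https://dsp.example/kv', // key-value server for bidding inputs
  trustedBiddingSignalsKeys: ['campaign-123-budget'],
  ads: [{ renderUrl: 'https://cdn.example/ad1.html', metadata: { size: '300x250' } }],
};

// The second argument caps membership duration; FLEDGE caps segments at 30 days.
nav.joinAdInterestGroup(athleticShoesSegment, 30 * 24 * 60 * 60);
```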
- Pre-auction Auction Desirability Logic
  - In addition to audience segmentation logic, marketers will send logic that determines marketer auction desirability (see the generateBid sketch below)
    - Marketer-specific desirability logic can include ad size, publisher domain, prior frequency of exposure to a given set of ads, and audience segmentation
    - Marketers will also send budget information per campaign to a Google-controlled trusted server
  - Each Chrome browser will separately request information from a Google-controlled trusted server to fetch marketer-specific desirability logic
    - The Google-controlled trusted server will apply the marketer desirability logic to generate a bid for each combination of ad size, audience information, and context information per campaign, independent of the current context
  - Publishers may conduct an out-of-band creative review process to pre-approve particular creatives
    - The Chrome browser will not allow publishers to render an ad unless it has previously been eligible to win auctions for a minimum number of distinct browser identifiers; this eligibility information is the second set of browser information that must be sent to a Google-controlled server to compute distinct counts
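The desirability logic itself runs as a buyer worklet. The sketch below follows the generateBid signature from the 2021 explainer; the budget key and the frequency-cap threshold are hypothetical illustrations of the inputs listed above, not part of the proposal.

```typescript
// Minimal buyer worklet (bid.js), per the generateBid signature in the 2021
// explainer. Signal shapes and keys (campaign-123-budget, the cap of 3
// exposures) are assumptions for illustration.
interface BidResult {
  ad: unknown;    // metadata passed to the publisher's scoring logic
  bid: number;    // CPM bid in the seller's currency
  render: string; // creative URL, which must be pre-declared in the segment
}

function generateBid(
  interestGroup: { name: string; ads: { renderUrl: string }[] },
  auctionSignals: unknown,
  perBuyerSignals: unknown,
  trustedBiddingSignals: Record<string, number>,
  browserSignals: { topWindowHostname: string; prevWins?: unknown[] },
): BidResult {
  // Marketer desirability logic may weigh ad size, publisher domain,
  // prior exposure frequency, and audience segment membership.
  const remainingBudget = trustedBiddingSignals['campaign-123-budget'] ?? 0;
  const priorExposures = browserSignals.prevWins?.length ?? 0;
  const bid = remainingBudget > 0 && priorExposures < 3 ? 1.5 : 0;
  return { ad: { segment: interestGroup.name }, bid, render: interestGroup.ads[0].renderUrl };
}
```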
- Pre-auction Publisher Ad Slot Implementation
  - Publishers will implement a Fenced Frame to query the browser APIs for ads and render the resulting ad
    - After the experiment phase, the Fenced Frame will not communicate any information about the winning ad to the publisher
  - Publishers will load logic called "worklets" into the Chrome browser to select which bid response will win the on-device auction (see the runAdAuction and scoreAd sketches below)
    - Publisher logic MUST whitelist each buyer's access to audience segments
    - Publisher logic can adjust the desirability of each bid response, based on price and other factors
      - Publisher desirability logic can filter which marketer buying platforms can compete in the auction
      - Publishers can also apply an out-of-band creative review process as an input into this desirability logic
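On the publisher page, the explainer wires these pieces together with runAdAuction, whose opaque result can only be rendered inside a Fenced Frame. A hedged sketch follows, again with placeholder domains; the creativeAllowList field is an assumption about how a pre-approval list might be passed through sellerSignals.

```typescript
// Sketch of the publisher-side auction call, per the 2021 explainer. The API
// is declared via a cast because it was experimental; domains are placeholders.
const nav = navigator as Navigator & {
  runAdAuction(config: unknown): Promise<string>;
};

const auctionConfig = {
  seller: 'https://publisher.example',
  decisionLogicUrl: 'https://publisher.example/decision.js', // publisher worklet
  // Whitelist of buyers whose audience segments may compete in this auction:
  interestGroupBuyers: ['https://dsp.example'],
  // Hypothetical: pre-approved creatives from an out-of-band review process.
  sellerSignals: { creativeAllowList: ['https://cdn.example/ad1.html'] },
};

const winningAd = await nav.runAdAuction(auctionConfig);
// The result is opaque to the page and can only be rendered inside a Fenced
// Frame, so no code on the publisher page can inspect the winning ad.
```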
- Auction Mechanics
  - DSPs post the desirability logic that calculates the audience-out-of-context bid to the Google-designated endpoint. They can post information to a Google-controlled key-value server to help store inputs into their logic. While the browser may fetch this information in real time from the trusted server, updates to it will not be real time, since the buyer does not receive real-time feedback with which to adjust it.
  - Publishers may also send to the Fenced Frame their most desirable ad from their own direct sales process, if they have one
  - The Chrome browser conducts an on-device auction to determine a local winning ad
    - The Chrome browser will filter the returned bids based on the presence of audience attributes
    - The Chrome browser will first apply the publisher desirability logic to choose the on-device auction winning ad
    - The Chrome browser will then compare this ad to any direct-sold publisher ad, again applying the publisher desirability logic, to choose the final winning ad (see the scoreAd sketch below)
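The publisher's on-device decision logic is the scoreAd function in its worklet. This sketch follows the explainer's signature; the approved-buyer list and the direct-sold price floor are assumptions about how a publisher might express the filtering and comparison steps above.

```typescript
// Minimal publisher worklet (decision.js), per the scoreAd signature in the
// 2021 explainer. Returning 0 filters the bid out of the auction entirely.
function scoreAd(
  adMetadata: unknown,
  bid: number,
  auctionConfig: { seller: string },
  trustedScoringSignals: unknown,
  browserSignals: { interestGroupOwner: string; renderUrl: string },
): number {
  // Filter marketer buying platforms the publisher has not approved (assumed list).
  const approvedBuyers = ['https://dsp.example'];
  if (!approvedBuyers.includes(browserSignals.interestGroupOwner)) return 0;

  // Compare against a direct-sold price floor (an assumed encoding of the
  // publisher's most desirable ad from its own direct sales process).
  const directSoldFloorCpm = 2.0;
  if (bid <= directSoldFloorCpm) return 0;

  return bid; // higher score = more desirable; here desirability is just price
}
```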
- Post-auction
  - During the experiment phase, the Chrome browser will send reporting data, with event-level data on the outcome of the auction, to pre-specified publisher and buyer endpoints (see the reporting sketch below)
    - After the experiment phase, the Chrome browser will not send event-level data to the publisher or marketer
      - Because it will "not [be] possible for any code on the publisher page to inspect the winning ad or otherwise learn about its contents," Google, rather than the publisher, will control the final auction on the page and thus the publisher's yield
    - In neither phase will the Fenced Frame allow publisher or buyer browser identifiers to be associated with the event-level data
  - Marketer buying platforms that lose auctions will get access to some aggregate metrics on a yet-to-be-determined, time-delayed basis
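During the experiment phase, the explainer routes this event-level reporting through two worklet functions, reportResult (seller side) and reportWin (buyer side), each of which can call sendReportTo. The endpoints below are placeholders, and the query-string format is an assumption.

```typescript
// Sketch of experiment-phase, event-level reporting, per the 2021 explainer.
// sendReportTo() is provided inside reporting worklets; it is declared here
// so the sketch is self-contained. Endpoint URLs and parameters are placeholders.
declare function sendReportTo(url: string): void;

// Runs in the seller's (publisher's) worklet after the auction concludes.
function reportResult(
  auctionConfig: { seller: string },
  browserSignals: { renderUrl: string; bid: number },
): void {
  sendReportTo(
    `https://publisher.example/event?bid=${browserSignals.bid}` +
      `&ad=${encodeURIComponent(browserSignals.renderUrl)}`,
  );
}

// Runs in the winning buyer's worklet.
function reportWin(
  auctionSignals: unknown,
  perBuyerSignals: unknown,
  sellerSignals: unknown,
  browserSignals: { renderUrl: string; bid: number },
): void {
  sendReportTo(`https://dsp.example/event?bid=${browserSignals.bid}`);
}
```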
Impact
Given FLEDGE's design, the following impacts on marketing effectiveness will not be measured during the trial:
- Cohort vs attribute-level audience inputs to buyer algorithms
- Aggregate vs event-level feedback to buyer algorithms
- Time-delayed vs real-time event-level feedback to buyer algorithms
The FLEDGE trial can measure the impact on marketer effectiveness and publisher revenues due to:
- Delayed ability to change audience-membership rules
- Delayed ability for publishers to change desirability logic for auction management rules
- Delayed ability for marketers to change desirability logic for auction management rules
Open Questions
- Marketer Effectiveness
  - What is the acceptable level of impairment to marketing effectiveness, such that marketers will not reduce payments to publishers?
  - What limits will be placed on marketer audience segmentation logic?
  - How can marketers and their agents protect their intellectual property (audience data, campaign budgets) from being disclosed to Google?
  - What time delay, if any (e.g., on lost bids), will be used to quantify the impact of Turtledove/FLEDGE on marketers' value of publisher inventory?
  - How frequently can marketers adjust budgets per campaign (e.g., what is the time delay after they realize that a campaign is not performing well)?
  - How frequently will marketers overspend, given that changes to budget decrementing will be delayed by Aggregate Reporting?
- Publisher Revenues
  - Given publishers will be monetizing the same ad slot with both existing monetization and Chrome-based monetization, how will the metrics associated with each be reported for comparison?
  - Google states publishers must pre-approve each buyer's audiences (or allow all), but how can publishers have granular control without buyers leaking intellectual property to sellers?
  - Google states publishers must pre-approve each buyer's audiences (or allow all), but how feasible is it to send large lists of approvals to the browser on each ad request?
  - How does the publisher learn the clearing price of ads on a per-advertiser basis (rather than on an average basis), so that it can negotiate direct deals with advertisers?
    - For example, a publisher may prefer Advertiser A, who pays $2 CPM on every ad, over Advertiser B, who pays $6 CPM on some ads and $1 CPM on others, even though both average $2 CPM (5×$2 vs 4×$1 + 1×$6; see the worked sketch after this list)
  - Why is Google mandating an order of operations in which the Chrome browser controls the final auction, rather than the publisher's monetization platform (e.g., publisher ad server or SSP), which could otherwise compare the locally winning ad and its bid price against other demand for the same ad slot and select the winning ad to render?
- Creative Review
  - How does the publisher perform the ad creative review process across all live ads in the ecosystem, given it does not know which ones will win auctions on its websites?
  - Given publishers will no longer be able to see in real time which creative renders on their properties, how do they prevent bad actors from swapping a pre-approved creative for a file with the same name but a new image?
  - Given FLEDGE proposes that ad creatives be cached ahead of time, so that the browser's request for the creative cannot signal any information to buyers, how can the browser cache video creatives?
- Ecosystem Impact
  - Who will pay for operating the Google-controlled servers that process marketer audience segmentation logic?
  - How do Google-controlled servers become trusted, and will this same process be available to all rivals?
    - Related FLEDGE issue: https://github.com/WICG/turtledove/issues/120
  - What is the maximum number of DSPs that can operate in the ecosystem under a model where the browser must directly contact each for bids?
  - What is the maximum level of latency Chrome is willing to accept in the browser from the JS functions running in worklets? (See the back-of-envelope sketch after this list.)
    - The browser must process bid calculations for every DSP for every advertiser client, i.e., thousands of calculations for each ad slot on each page
      - The browser must call out to the trusted key-value server for each ad slot on each page (since Fenced Frames do not communicate with other Fenced Frames), or rely on stale instructions and information passed by publishers and marketers
  - How does Chrome intend to handle potential billing disputes, given it is the only party with access to billing data via its reporting APIs?
- Experiment Goals
  - Why is Google not evaluating the metrics proposed by Teetar, such as measuring the experience and perception of users exposed to cohort-based advertising?
    - Related FLEDGE issue: https://github.com/WICG/turtledove/issues/95
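To make the per-advertiser clearing-price question above concrete, the following sketch reruns the arithmetic from the Advertiser A vs Advertiser B example with its own made-up numbers:

```typescript
// Worked version of the clearing-price example (hypothetical numbers):
// identical average CPMs, very different per-ad prices.
const advertiserA = [2, 2, 2, 2, 2]; // $2 CPM on every impression
const advertiserB = [1, 1, 1, 1, 6]; // $1 CPM on four impressions, $6 on one

const avg = (xs: number[]) => xs.reduce((sum, x) => sum + x, 0) / xs.length;

console.log(avg(advertiserA)); // 2
console.log(avg(advertiserB)); // 2; the average hides the $6 vs $1 split,
// which is exactly the information aggregate-only reporting withholds from
// a publisher negotiating direct deals.
```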
- Why is Google not evaluating the metrics proposed by Teetar, such as measuring experience and perception of users exposed to cohort-based advertising?
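A back-of-envelope sketch of the worklet-latency concern raised under Ecosystem Impact; every number here is an assumption chosen for illustration, not a measured figure:

```typescript
// Rough per-ad-slot cost of on-device bidding, with assumed ecosystem sizes.
const dspCount = 50;        // buyers contacted per auction (assumed)
const segmentsPerDsp = 40;  // interest groups a browser holds per buyer (assumed)
const msPerGenerateBid = 1; // JS worklet time per bid calculation (assumed)

const bidCalculations = dspCount * segmentsPerDsp;      // 2,000 generateBid calls
const worstCaseMs = bidCalculations * msPerGenerateBid; // ~2,000 ms of JS work

console.log(`${bidCalculations} bid calculations, about ${worstCaseMs} ms per ad slot`);
// Multiply by the number of ad slots per page, plus one trusted key-value
// fetch per slot, since Fenced Frames cannot share state with one another.
```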