Scaup

Revision as of 16:12, 12 January 2021
The goal of Google's Scaup is to provide marketers with look-alike modeling capabilities without giving them access to event data.[1]
Google's Scaup proposal relies on Multi-Party Computation (MPC): the web client sends its personal information to one trusted server that builds models and to a second trusted server that applies them. The web client periodically queries the trusted servers to learn whether it belongs to a look-alike audience.
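The proposal's repository describes the MPC approach at a high level but does not include reference code. A minimal sketch of additive secret sharing, the standard MPC building block that lets two non-colluding servers compute on data neither can read on its own, might look like this (the 32-bit ring size and integer features are illustrative assumptions, not part of the proposal):

```python
import secrets

MOD = 2**32  # illustrative ring size; all share arithmetic is modulo this


def share(value):
    """Split an integer feature into two additive shares.

    Each share alone is uniformly random, so neither server
    learns anything about the underlying value by itself.
    """
    r = secrets.randbelow(MOD)
    return r, (value - r) % MOD


def reconstruct(share_a, share_b):
    """Combine both shares to recover the original value."""
    return (share_a + share_b) % MOD


# The client would send shares_a to one server and shares_b to the other.
features = [3, 17, 42]
shares_a, shares_b = zip(*(share(f) for f in features))
recovered = [reconstruct(a, b) for a, b in zip(shares_a, shares_b)]
assert recovered == features
```

Only when the two servers' shares are combined does the original feature vector reappear, which is why the privacy guarantee depends on the servers not colluding.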
Under this proposal, the marketer may push its machine learning model to the trusted server, which in turn instructs the browser to store the events and features that serve as inputs to that model.
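Scaup does not specify a browser API for this step; the following is a hypothetical sketch of such a browser-side store, where the class name, methods, and feature names are all illustrative assumptions:

```python
from collections import defaultdict


class FeatureStore:
    """Hypothetical browser-side store that retains, per marketer model,
    only the events/features the trusted server asked it to keep."""

    def __init__(self):
        self.wanted = {}                  # model_id -> set of feature names
        self.events = defaultdict(dict)   # model_id -> {feature: value}

    def configure(self, model_id, feature_names):
        # The trusted server tells the browser which model inputs to retain.
        self.wanted[model_id] = set(feature_names)

    def record(self, feature, value):
        # Keep an observed event only for models that requested it.
        for model_id, names in self.wanted.items():
            if feature in names:
                self.events[model_id][feature] = value


store = FeatureStore()
store.configure("model_1", ["pages_viewed", "cart_adds"])
store.record("pages_viewed", 12)
store.record("unrelated_event", 99)  # dropped: no model requested it
```

The stored values would later be the inputs the client secret-shares to the trusted servers when asking whether it qualifies for a look-alike audience.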
Impact
Prospecting, the selection of audiences believed more likely to become new customers, is a critical activity for marketers. By centralizing control over this important function, the proposal leaves marketers with less choice among the vendors they can use.
Another potential impact is that, because the look-alike modeling process takes time to assign a given web client eligibility for a new audience attribute, the delay may diminish the user experience.
Open Questions
- Given that each marketer would like to generate different look-alike models for different products, how much data must be sent to and stored on the web client?
- What is the process for determining which organizations' servers are trusted?

References
1. https://github.com/google/ads-privacy/tree/master/proposals/scaup