Tuesday, June 13, 2023
Optimizing for the wrong thing
Take a simple example. Imagine an executive who gets a bonus and a promotion if they increase advertising revenue next quarter.
The easiest way for this exec to get their payday is to put a lot more ads in the product. That will increase revenue now but annoy customers over time, a short-term lift followed by a long-term decline for the company.
By the time those costs show up, that exec is out the door, on to the next job. Even if they stay at the company, it's hard to prove that the increased ads caused a broad decline in customer growth and satisfaction, so the exec gets away with it.
It's not hard for A/B-tested algorithms to go terribly wrong too. If the algorithms are optimized over time for clicks, engagement, or immediate revenue, they'll eventually favor scams, heavy ad loads, deceptive ads, and propaganda, because that content tends to maximize those metrics.
If your goal metrics aren't the actual goals of the company -- which should be long-term customer growth, satisfaction, and retention -- then you can easily make ML algorithms optimize for things that hurt your customers and the company.
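To make that concrete, here is a minimal sketch (mine, not from the post) comparing two scoring functions over some hypothetical content: one ranks by immediate click revenue, the way a naively A/B-tested system would, and one subtracts an assumed long-term cost of driving customers away. The item names, rates, and the LIFETIME_VALUE constant are all made-up numbers for illustration.

    # Toy sketch: how the choice of metric changes which content wins.
    from dataclasses import dataclass

    @dataclass
    class Item:
        name: str
        click_rate: float         # short-term: chance a user clicks now
        churn_lift: float         # long-term: added chance the user leaves later
        revenue_per_click: float  # immediate revenue if clicked

    # Hypothetical candidates; the numbers are invented for illustration.
    ITEMS = [
        Item("helpful article", click_rate=0.05, churn_lift=0.00, revenue_per_click=0.10),
        Item("deceptive ad",    click_rate=0.12, churn_lift=0.03, revenue_per_click=0.30),
        Item("outright scam",   click_rate=0.20, churn_lift=0.10, revenue_per_click=0.50),
    ]

    LIFETIME_VALUE = 50.0  # assumed value of a retained customer

    def short_term_score(item: Item) -> float:
        # What a click/revenue-optimized experiment "sees" this quarter.
        return item.click_rate * item.revenue_per_click

    def long_term_score(item: Item) -> float:
        # Same immediate revenue, minus the expected cost of driving the customer away.
        return short_term_score(item) - item.churn_lift * LIFETIME_VALUE

    print("Ranked by immediate revenue:",
          [i.name for i in sorted(ITEMS, key=short_term_score, reverse=True)])
    print("Ranked by long-term value:  ",
          [i.name for i in sorted(ITEMS, key=long_term_score, reverse=True)])

Under the click-revenue objective the scam ranks first; once the (assumed) retention cost is counted, the helpful content wins and the scam falls to the bottom. The gap between those two rankings is the gap between the metric and the actual goal.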
Data-driven organizations using A/B testing are great, but they have serious problems if the measurements aren't well-aligned with the long-term success of the company. Lazily choosing the metrics you use to measure teams is likely to cause high costs and decline down the road.