Unfair AI underwriting for NET's cofounder/CEO

Matthew Prince, cofounder and CEO of Cloudflare, shared this funny anecdote on Twitter yesterday morning:
https://twitter.com/eastdakota/status/1426552810896871435?s=…

He applied for an Apple Card and received a $4,500 credit limit and a 21.99% APR despite being a billionaire!

Seems to me the AI/ML from Goldman Sachs/Apple isn't all that special, despite their presumably large teams of data scientists and engineers and big budgets. To me it's an anecdote showing that big institutions with big data don't magically equal 'fair' or 'good' AI underwriting models.

He also linked an article (below) highlighting the discriminatory problems with AI/ML. I believe this indirectly shows how companies with a lead and a focus on compliance (such as UPST with its CFPB No-Action Letter) can build a future "regulatory moat", especially once regulatory bodies (such as NY DFS vs. Apple/Goldman in this example) stop throwing softball audits/inquiries and kick it up a notch.

https://techcrunch.com/2021/08/14/how-the-law-got-it-wrong-w…

"In late 2019, startup leader and social media celebrity David Heinemeier Hansson raised an important issue on Twitter, to much fanfare and applause. With almost 50,000 likes and retweets, he asked Apple and their underwriting partner, Goldman Sachs, to explain why he and his wife, who share the same financial ability, would be granted different credit limits. To many in the field of algorithmic fairness, it was a watershed moment to see the issues we advocate go mainstream, culminating in an inquiry from the NY Department of Financial Services (DFS).

At first glance, it may seem heartening to credit underwriters that the DFS concluded in March that Goldman’s underwriting algorithm did not violate the strict rules of financial access created in 1974 to protect women and minorities from lending discrimination.

In order to prove the Apple algorithm was “fair,” DFS considered first whether Goldman had used “prohibited characteristics” of potential applicants like gender or marital status. This one was easy for Goldman to pass — they don’t include race, gender or marital status as an input to the model. However, we’ve known for years now that some model features can act as “proxies” for protected classes.

If you’re Black, a woman and pregnant, for instance, your likelihood of obtaining credit may be lower than the average of the outcomes among each overarching protected category.

The DFS methodology, based on 50 years of legal precedent, failed to mention whether they considered this question, but we can guess that they did not. Because if they had, they’d have quickly found that credit score is so tightly correlated to race that some states are considering banning its use for casualty insurance.

The Federal Reserve, OCC, CFPB, FTC and Congress are all eager to address algorithmic discrimination, even if their pace is slow.

In the meantime, we have every reason to believe that algorithmic discrimination is rampant, largely because the industry has also been slow to adopt the language of academia that the last few years have brought. Little excuse remains for enterprises failing to take advantage of this new field of fairness, and to root out the predictive discrimination that is in some ways guaranteed. And the EU agrees, with draft laws that apply specifically to AI that are set to be adopted some time in the next two years.

The field of machine learning fairness has matured quickly, with new techniques discovered every year and myriad tools to help. The field is only now reaching a point where this can be prescribed with some degree of automation. Standards bodies have stepped in to provide guidance to lower the frequency and severity of these issues, even if American law is slow to adopt.

Because whether or not discrimination by algorithm is intentional, it is illegal. So, anyone using advanced analytics for applications relating to healthcare, housing, hiring, financial services, education or government is likely breaking these laws without knowing it.

Until clearer regulatory guidance becomes available for the myriad applications of AI in sensitive situations, the industry is on its own to figure out which definitions of fairness are best."
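The article's "proxy feature" point is easy to see numerically: a model can exclude protected attributes entirely and still produce very different approval rates by group if any input is correlated with group membership. Here is a minimal sketch with entirely synthetic data (the group means, threshold, and applicant counts are all made-up assumptions for illustration), measuring the gap with the disparate impact ratio used in the four-fifths rule:

```python
# Synthetic illustration of a "proxy feature": the model never sees group
# membership, yet a correlated score feature drives unequal approval rates.
import random

random.seed(0)

# Hypothetical applicants: a protected-group flag (0/1) and a score whose
# mean differs by group -- the score acts as a proxy for the group.
applicants = []
for _ in range(10_000):
    group = random.randint(0, 1)
    mean = 650 if group == 1 else 700  # assumed group-level score gap
    applicants.append((group, random.gauss(mean, 50)))

def approve(score: float) -> bool:
    """'Underwriting model': approve on score alone; group is not an input."""
    return score >= 680

# Demographic parity check: approval rate per group.
rates = {}
for g in (0, 1):
    scores = [s for grp, s in applicants if grp == g]
    rates[g] = sum(approve(s) for s in scores) / len(scores)

# Disparate impact ratio: min group rate over max group rate.
# Under the four-fifths rule, a ratio below 0.8 flags adverse impact.
disparate_impact = min(rates.values()) / max(rates.values())
print(f"approval rates by group: {rates}")
print(f"disparate impact ratio: {disparate_impact:.2f}")
```

With these assumed parameters the ratio lands well below 0.8 even though the decision rule is "blind" to group, which is exactly the failure mode a characteristics-only audit like the one the DFS ran would never surface.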

59 Likes

Matthew Prince, cofounder and CEO of Cloudflare, shared this funny anecdote on Twitter yesterday morning:
https://twitter.com/eastdakota/status/1426552810896871435?s=…

He applied for an Apple Card and received a $4,500 credit limit and a 21.99% APR despite being a billionaire!

Based on his comments this quarter, I would expect him to reinvest the proceeds into growth.

4 Likes

Based on his comments this quarter, I would expect him to reinvest the proceeds into growth.

This guy openly states that he is spending everything he can make, spends money even before he can make it, and just secured a $1.125B loan. And now he wants more credit???

As part of our long-term model, we have an operating margin target of 20%. When we say long-term, we really mean it. We remain confident in our ability to reach that long-term target, but we are not in a rush to get there. From the point at which we reach breakeven, we intend to aggressively reinvest excess gross profit back into growth. We are nowhere close to being out of ideas for new products to build for customers to buy them.

As a result, this deal along with some other strategic deals we won in the quarter, do not show up in RPO and we have not included the impact of them in our guidance through the end of the year. However, servicing a large customer like the United States government as well as other strategic deals we won in Q2 does require us to increase investments in our network. Anticipating these deals, we began making increased investments in Q1.

Cloudflare (NYSE:NET) has priced upsized $1.125B (from $1B) of 0% convertible senior unsecured notes due August 15, 2026 in a private offering.

5 Likes