UPST CFO Fireside chat

I just watched the JMP event Q&A with the CFO of Upstart, Sanjay Datta. (I haven’t yet watched the Deutsche Bank webcast from yesterday; I will likely post about that later.)

https://ir.upstart.com/events/event-details/upstart-network-…

My notes are below.

“What really is AI, what does it mean in a lending context? Investors are getting bombarded with references to AI, machine learning, alt data, behavioral science, and after a while the terms are diluting each other.”

Sanjay Datta:
AI is nothing more than a predictive algorithm. It ingests data, variables, and makes a prediction. A linear regression model is an early version of AI. Because the math for it was quite simplistic, it makes simplified assumptions about the world: it assumes variables are independent of one another and that their effects are all linear, which is not how the real world works. The way people use AI/ML in the modern sense relaxes those assumptions: variables are no longer independent, and the world is not linear. You can now detect much more nuanced patterns and make better predictions. Lending is a prediction problem. We have machine learning that ingests a lot of data, variables, and repayment history, and makes sense of lots of patterns to find out who will pay you back.
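To make his linear-vs-modern-ML point concrete, here’s a toy sketch of my own (synthetic data, made-up “income” and “DTI” columns, nothing to do with Upstart’s actual variables or model): when default risk depends on an interaction between variables, a plain linear model misses it while a tree-based learner picks it up from the same columns.

```python
# Toy contrast between a linear model and a non-linear learner.
# Hypothetical features and synthetic outcomes; not Upstart's actual data or model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20_000
income = rng.normal(60, 20, n)      # made-up "income" column
dti = rng.uniform(0, 0.6, n)        # made-up debt-to-income column

# Default risk depends on an interaction: high DTI only hurts at low income.
p_default = 1 / (1 + np.exp(-(-3 + 5 * dti * (income < 50))))
y = rng.binomial(1, p_default)

X = np.column_stack([income, dti])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LogisticRegression().fit(X_tr, y_tr)            # assumes additive, linear effects
boosted = GradientBoostingClassifier().fit(X_tr, y_tr)   # can learn the interaction

print("linear AUC :", roc_auc_score(y_te, linear.predict_proba(X_te)[:, 1]))
print("boosted AUC:", roc_auc_score(y_te, boosted.predict_proba(X_te)[:, 1]))
```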

“What makes Upstart’s AI, or really any other AI lender’s, proprietary and a barrier to entry? Computing power is available to anybody and there are very well financed competitors. How does your model become more and more proprietary, such that 5 years from now there won’t be a slew of Upstart imitators?”

Sanjay Datta:
I’d shy away from the term proprietary. It’s more of a large head start. I don’t think you can short-circuit the process. Visualize a large spreadsheet. Imagine the columns are the 1,600 variables we have, and the rows are repayment history data. We now have a very, very large spreadsheet. This is not proprietary in the sense that other institutions also have lots and lots of rows. But they don’t have the columns; they don’t have the variables. They could go tomorrow and collect variables like where old borrowers went to school, but there’s no shortcut here: without repayment outcomes tied to those variables, they can’t be used in pricing loans. You’d have to go and give money to that person again, today, and see if they’ll actually pay you back.
We expect other people to try to copy us as they see the power of AI, but we’re far ahead because we’ve done it for 8 years. It doesn’t matter if you throw 300,000 PhDs at the problem, it still takes the same amount of time to gather the data and train your models. That’s how I’d describe our head start.
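One way I picture the “spreadsheet” he describes (my own sketch; the column names are hypothetical stand-ins, not the real 1,600 variables): the value is in having the extra columns and the repaid/defaulted outcome on the same rows, and it’s the outcomes that can’t be backfilled quickly.

```python
import pandas as pd

# Each row is a funded loan with its eventual outcome; each column is a variable
# known at application time. These made-up columns stand in for Upstart's ~1,600.
training_data = pd.DataFrame({
    "fico":            [680, 710, 640, 720],
    "education":       ["state_u", "trade", "none", "ivy"],       # non-traditional column
    "employment_type": ["salaried", "gig", "hourly", "salaried"],
    "repaid":          [1, 1, 0, 1],   # the label: did the loan actually get paid back?
})

# A bank with only the "rows" (repayment history) but not the extra "columns" can't
# train on education or employment_type; collecting those today on new loans still
# means waiting years for the `repaid` outcomes to arrive.
print(training_data)
```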

"I want to understand more about Upstart’s development cycle. How do you get more columns, rows and change the algorithms. The street has been extremely impressed with improvements in conversion rate since Q4”

Sanjay Datta:
There’s unfortunately no one-size-fits-all answer. We have a very long backlog of projects stretching out for years. Imagine applying machine learning to fraud, or to a specific problem like loan stacking, to acquisition/targeting of borrowers, or to the servicing part of the business to predict delinquency and who needs outreach. There are all these different places we can apply AI. We can also add new datasets. We can upgrade our model algorithms by moving to more and more sophisticated types of machine learning that we aren’t even using yet. Then there’s the incremental accumulation of training data and how you use that to calibrate the models. Then there’s applying AI to new products like auto and so forth. Each project requires a different amount of effort; some can be done in a quarter and some can take a year. Each has a predicted impact estimated using experiments. Put it all together and each quarter we’re trying to prioritize them with a mix of short-term and long-term stuff. We draw a line where our bandwidth runs out, and the rest we push to the next quarter. All model upgrades are significantly backtested. We can take any version of a new model and ensure it is strictly superior to the old one. There’s no world where we’ll launch a model that is worse than before; there’s enough backtesting that we can ensure it is statistically superior.
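As a rough illustration of what “ensure it is strictly superior” could look like mechanically (my assumption about the mechanics, not Upstart’s actual release pipeline): train a champion and a challenger, score both on the same held-out historical loans, and only promote the challenger if it separates good loans from bad measurably better.

```python
# Sketch of a champion/challenger backtest on held-out historical loans.
# Hypothetical mechanics and threshold; not Upstart's actual release process.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, random_state=0)

champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)    # "old" model
challenger = GradientBoostingClassifier().fit(X_train, y_train)       # "new" model

auc_old = roc_auc_score(y_holdout, champion.predict_proba(X_holdout)[:, 1])
auc_new = roc_auc_score(y_holdout, challenger.predict_proba(X_holdout)[:, 1])

# Only ship the new version if it is measurably better on the same held-out loans.
min_lift = 0.002
promote = auc_new >= auc_old + min_lift
print(f"old AUC={auc_old:.4f}  new AUC={auc_new:.4f}  promote={promote}")
```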

"How do these models help your visibility as a CFO?”

Sanjay Datta:
It’s a problem [laughs]. Sometimes I wish I were the CFO of a predictable SaaS business. What we try to do is forecast innovation. Unfortunately that happens in lumps, not smoothly. The forecasting process fundamentally is: we have a backlog of projects we think we want to work on, each with an estimated impact and estimated timing. There’s obviously uncertainty around execution and impact, so we apply very conservative discounted probabilities of execution and add it all up. It results in a guidance framework that errs on the side of being conservative. The execution is lumpy. We’ll have quarters where we blow it out because a single model upgrade jumps revenue 15%, and quarters where we meet our numbers because the impact wasn’t as big as we thought it would be.
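The guidance framework he describes is essentially an expected-value roll-up over the project backlog. A toy version with invented numbers:

```python
# Toy probability-weighted roadmap forecast; all project names and figures are invented.
projects = [
    # (name, estimated revenue impact in $M, conservative probability it ships on time)
    ("new model version", 30.0, 0.6),
    ("new data source",   10.0, 0.5),
    ("auto refi funnel",  15.0, 0.4),
]

expected_impact = sum(impact * prob for _, impact, prob in projects)
print(f"probability-weighted incremental revenue: ${expected_impact:.1f}M")
# -> 30*0.6 + 10*0.5 + 15*0.4 = $29.0M, vs $55M if everything landed on time
```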

"71% of all loans are automated. Tell us about the unlock of its power and where can that go?”

Sanjay Datta:
What we’ve been able to do is use AI risk models to verify most customers without needing them to submit documents. When building the risk models, the models are effectively asking: ‘given everything I was able to verify automatically, what is the probability that the rest of it is a lie?’ So it amounts to a risk score. The power here is that fully automatic verification doubles the conversion rate versus a manual process where documents must be uploaded. As for where it can go, the theoretical limit is 1 minus the fraud rate: if you were omniscient, you could automate everyone who isn’t trying to defraud you. Fraud attempt rates are in the single digits, so in theory you could get into the mid-90s, but that’s probably not a practical limit. Still, there’s no reason to believe we can’t get to the mid-80s or approaching 90%.
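The ceiling he mentions is simple arithmetic: if some share of applications are fraud attempts that will always need manual review, full automation tops out at 1 minus that rate. With made-up numbers:

```python
# Theoretical automation ceiling = 1 - fraud attempt rate (illustrative inputs).
fraud_attempt_rate = 0.05        # "single digits" per the call; 5% here is my assumption
theoretical_ceiling = 1 - fraud_attempt_rate
print(f"theoretical ceiling: {theoretical_ceiling:.0%}")   # ~95%, not a practical target

automated_today = 0.71           # share of loans fully automated, per the call
practical_target = 0.88          # "mid-80s, approaching 90%" -- midpoint assumption
print(f"remaining headroom: {practical_target - automated_today:.0%}")
```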

"The single most informative comment on the last earnigns call was one bank partner eliminated all FICO requirements. I’m curious, what does this mean in terms of the acceptance of the technology? How did they come to that decision? Can you tell us the size of this bank?”

Sanjay Datta:
This bank is in “the middle” of large and small. They are not one of our original banks [it’s not CRB]. They’ve been up and running with us for a bit.
The way banks start to work with us is that they have P&L owners who immediately want to work with us; that’s always an easy conversation. Then they have to get the deal through the risk committee, which contains a credit committee. This committee often says, ‘hey, this looks interesting, but I need guardrails.’ A FICO score cutoff is typically seen as a guardrail: ‘I don’t want any loans below 700 FICO.’ What happens is they get started, watch for 6-9 months, want more, and say, ok, let’s drop it to 680. Oh wow, it keeps working. And this bank said, ‘hey, you know what, let’s just get rid of the guardrail.’ It surprised the hell out of us. It’s the first time we’re aware of in modern financial history where a bank gave out a loan without asking for a FICO score. They’re relaxing traditional models, and this is the affirmation we’re looking for: they’re embracing our AI. The institutional mindset is changing.

"We have 2 minutes left – can you comment on Spanish platform, update on auto, and what’s next beyond?”

Sanjay Datta:

Spanish-
Spanish speakers are roughly 15% of the population and underserved. It’s a demographic that fits well with our mission to expand lending to the underserved.

Auto-
We have two things happening in parallel.
We have our go-to-market with refi, where the journey is very similar to the personal loan business. Like with personal loans, we’re focused on just getting the funnel to perform better and better.
Separately, there’s a point-of-sale product at the dealership where we use the Prodigy software. We are integrating our loans with that platform, and that will likely show up at dealerships in Q4. It won’t be meaningful in 2021, but in 2022 it will be meaningful.

Beyond-
Beyond auto, we haven’t signaled anything specific, but there are a bunch of markets we love. Most credit markets have a prime segment that’s hyper-competitive and very well served, but then there’s a giant torso where options dramatically fall off a cliff. We can point to examples of that in almost every market. If you look at the mortgage industry, the qualifying-loan world where pricing is set by Fannie/Freddie has recovered completely and grown even beyond 2008, but the nonqualifying world has fallen off a cliff. There were a million more nonqualifying mortgages in 2001 than today! Fannie and Freddie have reduced their credit aperture and have not widened it. Those are examples of market segments where our strengths can play very well. There’s more to talk about on that in coming quarters.


I have a question here regarding the time taken by the AI models to become effective.

So far they have been testing (learning from) personal loans for the past eight years in order for their algorithms to learn and become more effective, legal, and lethal.

Now that mortgages, auto, and other new business segments are being discussed, does that mean it will take a similar duration?
Hopefully less than eight years! Or can they piggyback on the learning from personal loans, which would be ideal?

I know that we have been discussing Upstart extensively over the last few weeks, and if this was already discussed then I apologize for repeating the question.

But hopefully there are a few folks in my boat that will benefit from this question.

Thanks,
Praviner


AI models for loans are based on the BORROWER, NOT the LOAN TYPE. What remains to be done, with the SAME information, is to tune the model for differences in order to predict returns on the loan products offered.

Having said that, a new vertical for a bank likely has a separate team and a separate approval process within that bank. The uptake will be far more closely tied to customer (bank and financial institution) appetite for these new products (in auto and mortgage loans).

As UPST is always looking for more data types to add to their model, they may test for model predictive power as it relates to alternative data. When the correlation is high and improving, that data type is likely to be added to their AI model.

So, long story short: we should expect the pacing of new verticals to be closely tied to customer onboarding processes, and much less to testing and trials.

I’ll be watching the conference calls to see information about customer progress in these areas.
