Deep learning jobs have collapsed in the last 6 months

https://news.ycombinator.com/item?id=24330326

https://twitter.com/fchollet/status/1300137812872765440

I think it’s clear that for many smaller companies that invested in deep learning, it turned out not to be essential and got cut post-Covid as part of downsizings. There are somewhat fewer people doing deep learning now than half a year ago, for the first time since at least 2010.

I believe this discussion is relevant to Alteryx and similar data analytics companies that sell a machine learning or AI solution.

The author of the Twitter thread believes it’s just a short-term effect of the COVID shock.

However, I find the first comment on the Hacker News thread more accurate:

I’ve worked in lots of big corps as a consultant. Every one raced to harness the power of “big data” ~7 years ago. They couldn’t hire or spend money fast enough. And for their investment they (mostly) got nothing. The few that managed to bludgeon their map/reduce clusters into submission and get actionable insights discovered… they paid more to get those insights than they were worth!

I think this same thing is happening with ML. It was a hiring bonanza. Every big corp wanted to get an ML/AI strategy in place. They were forcing ML into places it didn’t (and may never) belong. This “recession” is mostly COVID-related, I think - but companies will discover that ML is (for the vast majority) a shiny object with no discernible ROI. Like Big Data, I think we’ll see a few companies execute well and actually get some value, while most will just jump to the next shiny thing in a year or two.

25 Likes

I agree. But that does not discount the value of machine learning (ML), only the poor choice of where it was applied. It has its place, and correctly applied, it is scary effective.

Classical binary computation: Machines calculate numbers very well. Missile trajectories, spreadsheets, the speed of an oncoming car in your radar gun, bank balances for everyone at B of A, whatever.

Big Data: Massive amounts of collected data. People find ways to mine that data to learn non-obvious things that are there to find. That’s how they discovered relationships between prescribed drugs and statistically significant risks or lucky positive side effects.
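
Purely as an illustration of the kind of mining described, here is a toy sketch that checks whether a drug co-occurs with a reported side effect more often than chance. Every number and name in it is invented:

    # Toy example: does a prescribed drug co-occur with a reported side effect
    # more often than chance? The counts below are invented for illustration.
    from scipy.stats import chi2_contingency

    # Rows: patients on the drug / not on the drug
    # Columns: reported the side effect / did not
    contingency = [[120, 9880],
                   [150, 49850]]

    chi2, p_value, dof, expected = chi2_contingency(contingency)
    print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")
    # A very small p-value flags the kind of non-obvious association that
    # data mining surfaces; it still takes domain review to confirm it.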

Neural computation: People see and recognize patterns very well. Brains are networks of “neurons” that interconnect to become massively parallel analog computers that are wired to recognize patterns and learn from them. Recent developments in machine learning succeed by simulating that process. The technology has been around since the ’90s, after some architectural issues that I won’t bore you with were solved. But only recently have our computers become powerful enough to make it viable.
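
For a concrete sense of what “simulating that process” means, here is a toy sketch, nothing more than an illustration: a tiny network of simulated neurons that learns a simple pattern (XOR) by adjusting its connection weights from examples.

    # A tiny simulated "neural network" learning the XOR pattern from examples.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # the pattern to recognize

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer of 8 "neurons"
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output "neuron"
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        h = sigmoid(X @ W1 + b1)            # hidden neurons fire in parallel
        out = sigmoid(h @ W2 + b2)          # network's guess
        err = out - y                       # how wrong the guess is
        # Backpropagation: nudge every connection to reduce the error
        g_out = err * out * (1 - out)
        g_h = (g_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ g_out;  b2 -= 0.5 * g_out.sum(axis=0)
        W1 -= 0.5 * X.T @ g_h;    b1 -= 0.5 * g_h.sum(axis=0)

    print(np.round(out, 2).ravel())  # approaches [0, 1, 1, 0] as it learns

Real systems are vastly larger, but the idea is the same: many simple units adjusting their connections until the pattern falls out.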

What is pattern recognition? You hear someone’s tone of voice and know their mood. You recognize nervous talk versus confidence. How? You don’t know, you just do, from your life experience. You recognize when your spouse is upset or happy without discussing anything but dinner plans. A human baby hears her mother in the next room. A penguin has a chick, leaves to get food, comes back days later, and recognizes the chick’s “voice” among 250,000 other penguins in the same flock. People can find faces in a crowd.

Can ML do that? Now it can. Good ML can distinguish between twins in under a millisecond. China put its fugitive-finder software online and found 700 people in train stations in a day. Scary, but effective. I saw a demonstration where a guy had a neural network observe an experienced trucker back a double trailer into a parking spot several times. It saw the successes and failures. Once trained, the network could do it itself. That was 10 years ago in a closed setting. More recently, similar systems have been driving cars in a city known for crazy driving.
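
That truck-backing demo is essentially learning from demonstration: record what the expert saw and what the expert did, then fit a model that maps one to the other. A rough, hypothetical sketch of that idea follows; the feature names and numbers are invented, not taken from the actual demo:

    # Hypothetical learning-from-demonstration sketch: map what the expert
    # driver observed (state) to what the expert did (steering command).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    # Pretend each row is one recorded snapshot from the expert's runs:
    # [trailer_angle, cab_angle, distance_to_dock]
    states = rng.uniform(-1, 1, size=(500, 3))
    # Stand-in for the expert's recorded steering at each snapshot
    expert_steering = 0.8 * states[:, 0] - 0.3 * states[:, 1] + 0.1 * states[:, 2]

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(states, expert_steering)

    # Once trained, the model proposes a steering command for a new situation
    print("suggested steering:", round(model.predict([[0.2, -0.1, 0.5]])[0], 3))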

The first commercial application of neural networks was loan approval. The developers used “big data” (before it was called that). They used a computer “neural network” to examine thousands of nameless, faceless, addressless mortgage applications and compare them to the outcomes: what patterns in the borrowers’ applications distinguish successful home loans from bad ones? It was a commercial success.
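
Shape-wise, that is a standard supervised-learning setup: application features in, observed outcome as the label. A minimal, hypothetical sketch of that shape, with invented features and labels rather than anything from the original system:

    # Hypothetical loan-outcome classifier: train on anonymized application
    # features, labeled by how the loan actually turned out.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(2)
    # Invented features: [debt_to_income, loan_to_value, years_employed, past_delinquencies]
    apps = rng.uniform(0, 1, size=(2000, 4))
    # Invented labels from a made-up rule: 1 = repaid, 0 = defaulted
    score = 0.5 * apps[:, 2] - 0.7 * apps[:, 0] - 0.4 * apps[:, 3]
    repaid = (score + rng.normal(0, 0.1, 2000) > -0.2).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(apps, repaid, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
    # The network never sees a name or address, only the patterns in the numbers.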

The point here is that ML has uses we don’t even know yet. But imagine a super-intelligent human who knows nothing but can recognize obvious or subtle patterns that you might not even see. All you have to do is provide “big data” as the “experience” and define successful versus unsuccessful outcomes to train him.

27 Likes

I believe this discussion is relevant to Alteryx and similar data analytics companies that sell a machine learning or AI solution.

As ibuildthings points out, ML and AI are really a bunch of different things. Some of those sounded like cool ideas, but are having trouble coming to useful fruition. I can’t see any such issue impacting Alteryx because their business is providing tools for the analysis of data. This is a very concrete goal and one which they are exceptionally good at supporting. This is far from some vague “maybe we can figure out something”.

7 Likes