So…apparently the consulting firm I joined has helped make some of us a bundle of $
https://www.alteryx.com/press-releases/2018-06-06-alteryx-an…
So you know Alteryx inside out?
Warning: Energy industry careerist here
Warning: Electric utility expert subjected to, and experienced in, exceptional bureaucracy
Warning: Experience ranging between Southeast culture and California culture
I bet on AYX because for years I have heard “we have data” but witnessed “we can’t make sense of the data”
“Data lakes”
“Predictive analytics”
“Machine learning”
Etc etc etc
I have worked with hired Data Scientists who understand the academics but not the business, who know how to write extremely sophisticated code but have to blindly trust the data.
AYX allows companies to keep doing what they do best (know their business, manage their data, keep it simple) while letting AYX conduct the analytics at a fraction of the cost of hiring a data scientist.
At least in this industry, where SQL servers haven’t even heard of MongoDB and legacy systems are deeply entrenched, simplicity rules the day: simple out-of-the-box tools that you can trust, and that enhance the work you do without sucking you into a product too heavy to stand on its own without an implementation project. Alteryx masters this.
Just a Fool, perhaps even a citizen data scientist, I could argue.
@JAFbrblev @CMFAleeb
Hey so you guys are experts on AYX. Spill the beans, man!
Is AYX worth it? What about Talend or Tableau?
I have worked with hired Data Scientists who understand the academics but not the business
Oh God, the story of my life. I’ve met some amazingly nice data scientists, most of them PhDs, and many have written overpriced books on data science that I’ve bought as a sort of friendly gesture. They’re all great mathematicians, and they like stumping us with complex math problems at work. Unfortunately, every single one of them has been an idiot at solving real-world problems, unable to code their way out of a paper bag. Three of them on my team have been working for a year on figuring out how to write code that can tell that a transaction with “Home Depot” is for the same store as a transaction with “The Home Depot”.
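For what it’s worth, the exact-variant half of that problem is a few lines of normalization. A minimal Python sketch, where the function name and the stop-word list are mine, purely for illustration:

import re

# Assumed stop list of filler tokens; a real one would be tuned to the business
STOP_WORDS = {"the", "inc", "co", "corp", "llc"}

def normalize_merchant(name):
    # Lowercase, strip punctuation, and drop filler tokens
    tokens = re.findall(r"[a-z0-9]+", name.lower())
    return " ".join(t for t in tokens if t not in STOP_WORDS)

print(normalize_merchant("Home Depot"))      # home depot
print(normalize_merchant("The Home Depot"))  # home depot

The hard part isn’t those ten lines; it’s knowing the business well enough to decide which tokens are safe to drop.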
I’m really not worried about AI taking over the world - I’m sort of surprised that vending machines work and planes aren’t falling out of the sky.
At least in this industry, where SQL servers haven’t even heard of MongoDB and legacy systems are deeply entrenched, simplicity rules the day: simple out-of-the-box tools that you can trust, and that enhance the work you do without sucking you into a product too heavy to stand on its own without an implementation project. Alteryx masters this. AYX allows companies to keep doing what they do best (know their business, manage their data, keep it simple) while letting AYX conduct the analytics at a fraction of the cost of hiring a data scientist.
There won’t be enough data on Mongo for us to be doing big data analytics on it yet - for the next 5 years, SQL will still be the primary data store that you build your data lakes out of. And, as far as I can tell, all the tools are still using 20-year-old ETL technology - that was the hot new thing at Goldman Sachs back in the mid 90s.
So if Alteryx makes this easy, how much of a head start does it have, and aren’t there lots of other tools almost as good coming up?
I’ve been looking at some of these data tool companies as potential investments and they are recommended on these boards and by the Fool. But I’m just having trouble believing these have a real moat that can last, when I’ve seen so many of them come and go over the years. Can you convince me, or point me to an article or blog post, perhaps?
Steppen,
I just started recently so no, I’m not an expert!
What I can say is that people are really excited about Alteryx, and as of now it is not seen as Alteryx versus Tableau (at least in many cases); often they complement each other.
I’m excited about this job for a lot of reasons. One being, it sits at the intersection of a lot of the technologies (and companies) we talk about here and how they are implemented by customers.
I’m all finished moving so I should be able to be more active on these boards as I continue learning.
About data scientists and their PhDs and overpriced books: “…every single one of them has been an idiot at solving real-world problems…”
Wow. There are a lot of problems that were worked on with no intention of any ‘real world’ application. Some of them turned out to be rather important in the ‘real world’.
And what is so important about the ‘real world’? If you can, you should imagine all you can imagine.
tj
I suggest anyone interested review the Home Depot case study video for Alteryx. Saul may have previously linked to it. I believe this is the link: https://pages.alteryx.com/driving-category-specific-business…
This is a Home Depot employee, not someone paid by Alteryx. I believe it is a 30-minute or longer talk, but if you want to understand the Alteryx phenomenon, this is a good start. There are then multiple YouTube videos demonstrating how Alteryx blends data, how you create analytical outcomes, and the like. There is also a YouTube video running a simple problem through Alteryx to demonstrate its power and ease of use (that one is more than 20 minutes, but all easy to discover if interested), so you can see the product run a problem from start to finish.
There are also anecdotal stories, such as the IT/data supervisor who had two recent college graduates assigned to him. He gave them a problem to solve regarding putting together and analyzing 30 different surveys about automobiles. A large database, as you might imagine. The two graduates went to Excel, took days, and still had not solved the problem. He then took them into his office, opened up Alteryx, and solved the problem for them in 15 minutes.
So he gave them another problem to work on, with no additional instructions. This time, instead of banging their heads against a brick wall like before, they downloaded the 14-day free trial of Designer, set it up themselves, and in a period of hours (or it may have been a day or so) resolved the problem and made a report of it; when shared with senior staff, it was appraised as quite satisfactory.
Take these stories for what they are worth. The Home Depot story I think is worth a lot. The anecdotal story, who knows.
I am sure there are diminishing marginal returns the more product that is put into the enterprise, but the total increase in productivity explains the 132%+ annual purchase increase that AYX is experiencing. Those are numbers that speak for themselves.
There are a lot of analytical products out there, but none so suited to the citizen data scientist and yet sophisticated enough for the data jock. AI is certainly making its way into the picture, but AI does not set up and blend the databases, nor create the steps to diagram the problem to be solved. Instead, AI can help clean the database (the stuff Talend does, and AI can go even further in cleaning the data), but it is not going to blend the databases together as needed.
Fascinating use of AI in cleaning large databases at GE. GE uses Talend, but GE had a problem that Talend was not able to solve: GE has thousands of vendors, and huge buying power, but the vendors would use different nomenclature to describe the same thing, such as two inch screw, 2" screw, screw 2 inch. Just these little things that Talend could not cut through.
GE needed to figure this out across vendors, however, so it could consolidate its suppliers' supply chains in order to give them volume discounts, which of course would then be passed on to GE.
So GE applied AI to go beyond what a Talend could do, and it apparently worked out quite well, producing an even cleaner pool of data so that this sort of data analysis could be done.
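To make concrete why rule-based cleanup runs out of road, here is a toy Python sketch of the hand-written canonicalization you would need just for the screw example. This is purely illustrative; it is not how Talend or TAMR actually work:

import re

# Assumed lookup table; abbreviated here, and every entry is a rule someone must write
NUMBER_WORDS = {"one": "1", "two": "2", "three": "3"}

def canonical_part(desc):
    # Expand the inch mark, tokenize, map number words, unify unit spellings
    text = desc.lower().replace('"', " inch ")
    tokens = re.findall(r"[a-z0-9]+", text)
    tokens = [NUMBER_WORDS.get(t, t) for t in tokens]
    tokens = ["inch" if t in ("in", "inches") else t for t in tokens]
    return " ".join(sorted(tokens))  # sort so word order no longer matters

for d in ('two inch screw', '2" screw', 'screw 2 inch'):
    print(canonical_part(d))  # all three print: 2 inch screw

Multiply that lookup table across thousands of vendors and units, and the appeal of having machine learning discover the equivalences, as the article describes, is obvious.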
Tinker
My son works for a global business consulting company that uses Alteryx for complex projects for their largest clients. My son has been very pleased with the tool and had an opportunity to have dialog with senior execs at Alteryx (he won’t tell me the content of those conversations - citing confidentiality). He did say that the exec team is strong.
…Marc
Tinker,
I’d be curious to know more about how GE solved their data quality problem. I used to work at a great big US corporation, and corrupt data was the biggest problem with respect to analytics. There are no user groups (within my experience) who think, or will admit, that they have bad data - even when you show it to them.
They don’t see the problem when you show them two different transactions from the same vendor with two slightly different names (noted above, “Home Depot” and “The Home Depot”). They can easily recognize this as the same company, so to them there’s no problem. They don’t get that a computer sees these as two different entities. Then throw in “Homr Depot,” “Home Deepoe” and a hundred other variants. And no, I’m not exaggerating.
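For the misspelled variants, exact normalization isn’t enough; you need fuzzy matching against a mastered vendor list. A minimal Python sketch, where the master list is hypothetical and the 0.85 similarity cutoff is a number I picked, not a recommendation:

from difflib import SequenceMatcher

# Hypothetical mastered vendor list
KNOWN_VENDORS = ["home depot", "lowes", "ace hardware"]

def best_vendor(raw, cutoff=0.85):
    # Score the raw name against every known vendor, keep the best if good enough
    raw = raw.lower()
    score, vendor = max((SequenceMatcher(None, raw, v).ratio(), v) for v in KNOWN_VENDORS)
    return vendor if score >= cutoff else None

print(best_vendor("Homr Depot"))   # home depot
print(best_vendor("Home Deepoe"))  # home depot

A real system would tune the cutoff and add blocking so you aren’t comparing every transaction against every vendor, but the idea is the same.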
We very smart IT folks thought we could solve the problem by implementing a Master Data system where the vast majority of entries would come from a pick-list, the master pick-list being maintained by the logical “owner” (BTW, I hate that term) of the data type. What a can of worms that turned out to be. Who’s the data owner for “Vendor”? Did you mean “Supplier”? Or was that “Source”? We couldn’t even agree on the metadata tag, let alone who had the authority to rule the master data. Just this one data item was in contention among Procurement, Engineering and Accounts Payable. And not without good reason. Engineering might indicate the “source” of a part, material or commodity as coming from a specific company, but Procurement had to indicate specifically which facility: was that the XYZ Co. at Keokuk or the XYZ Co. at Fresno? Or maybe the ABC Co. (recently acquired by XYZ) at Phoenix? All the same company, while Accounts Payable paid the bills by sending checks to XYZ HQ in Baltimore (which might also operate under a different name).
What question are you asking? How many things do we source from XYZ? Do we have a quality problem with product coming from XYZ, and which facilities are sending faulty stuff? How much dollar volume have we placed with XYZ over the last six months? How about when you try to intersect a couple of these queries: how much dollar volume have we placed with XYZ facilities that are responsible for faulty material?
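Once (and only once) the facility names are mastered to a common key, the intersected query itself is trivial. A toy pandas illustration, with tables and numbers invented for the example:

import pandas as pd

# Hypothetical spend and quality tables that happen to share a clean facility key
spend = pd.DataFrame({
    "facility": ["XYZ Keokuk", "XYZ Fresno", "ABC Phoenix"],
    "dollars":  [1_200_000, 800_000, 450_000],
})
quality = pd.DataFrame({
    "facility": ["XYZ Fresno", "ABC Phoenix"],
    "faulty_lots": [3, 7],
})

# Inner join keeps only facilities with quality findings
faulty_spend = spend.merge(quality, on="facility")
print(faulty_spend["dollars"].sum())  # dollar volume placed with faulty facilities

The join is one line; the years of pain are in making “facility” mean the same thing in both tables.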
I don’t know how, or if, AYX or even Talend can solve these data problems. And those are relatively easy ones. Start tearing into undisciplined text fields in order to extract meaningful analysis and it gets to be near impossible. I’ll give a couple of examples.
Where I worked we sold large, very complex, very expensive machines to a limited number of customers (a total worldwide customer base of about 400). These machines required, by law, regular maintenance, and repairs when stuff broke were also not uncommon. Our customers would send us their maintenance and repair records, which were textual descriptions of what they encountered and what they had to do to complete the maintenance/repair. These text files were rich in information for engineering with respect to designing out problem parts and assemblies. But there were only primitive (mostly homegrown) tools for making that text accessible to analytics.
Another problem we had was export-controlled documentation. We did a lot of work with graphite composite materials. Graphite composites are not controlled if you are making golf clubs, tennis rackets, violin bows, fishing rods, etc. But if you are making parts with large dimensions, they are. The only way of separating export-controlled documentation from non-export-controlled is to read the engineering notes (or emails, or meeting minutes, etc.) to determine whether the dimensions being cited warrant protection. Export control violations are very impactful: a company can incur multimillion-dollar fines or even have its export license revoked.
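The parsing half of that problem is at least mechanizable. A toy Python sketch that flags text citing dimensions over a threshold; the 24-inch figure is made up, not the actual export-control rule, and real review is far subtler:

import re

THRESHOLD_INCHES = 24.0  # hypothetical threshold, not the real regulation

# Match a number followed by an inch unit or the inch mark
DIM_PATTERN = re.compile(r'(\d+(?:\.\d+)?)\s*(?:in|inch|inches|")', re.IGNORECASE)

def needs_review(text):
    dims = [float(m) for m in DIM_PATTERN.findall(text)]
    return any(d > THRESHOLD_INCHES for d in dims)

print(needs_review("layup tool for 60 inch composite panel"))  # True
print(needs_review("graphite shaft, 0.5 in diameter"))         # False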
Does AYX offer any solutions for these problems? How about TLND?
http://fortune.com/2017/05/17/startup-saved-ge-millions/?iid…
Brittlerock, here is the article about how GE solved their supplier problem with things such as you mention: “Home Depot,” “The Home Depot,” “Home Depoe,” 2” screw, 2 inch screw, etc.
TAMR is a startup that uses AI to go further than a Talend can go. I doubt that this is something AYX can handle. AYX is excellent at blending disparate data sources, but what GE has is tens of thousands or more of disparate databases, and that requires a Talend or an Informatica. On top of this, GE needs to be able to identify the suppliers of its suppliers, so that GE can get its suppliers volume discounts as well (once it figures out which of its suppliers are actually using the same suppliers themselves for which products). For that, GE uses TAMR, a machine-learning tool that, according to the article, goes beyond what is otherwise possible.
I would not be surprised to see Talend buy out TAMR or a company like it to enhance its product. After all, GE is also a Talend customer, but GE had to go the next step to fully obtain satisfaction from its data.
This is the sort of thing that probably requires a Talend to first ETL or ELT the data, and then you run the TAMR tool on the otherwise cleaned up data to further clean it up.
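In outline, that two-stage idea looks something like this sketch: deterministic cleanup first, then similarity clustering of whatever the rules missed. Purely illustrative; it is not how Talend or TAMR work internally:

from difflib import SequenceMatcher

def cluster_records(records, cutoff=0.85):
    # Greedy single-pass clustering: join a record to the first cluster
    # whose representative is similar enough, else start a new cluster
    clusters = []
    for rec in records:
        for cluster in clusters:
            if SequenceMatcher(None, rec, cluster[0]).ratio() >= cutoff:
                cluster.append(rec)
                break
        else:
            clusters.append([rec])
    return clusters

already_cleaned = ["2 inch screw", "2 inch screws", "4 inch bolt"]
print(cluster_records(already_cleaned))
# [['2 inch screw', '2 inch screws'], ['4 inch bolt']]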
AYX has a plug-in for a similar AI product, but that product works on what is supposed to be already clean data, to better run the analytical protocols that you program (mostly via GUI) using AYX’s core product, Designer.
This is how GE did it.
Tinker
Thanks Tinker,
Before I retired (2010) we were using Informatica. I imagine their products have evolved since then. I had never heard of TAMR before, and this domain (data quality, metadata management, etc.) was in my area of expertise, so I imagine they’ve not been around very long.
Talend is also new to me. I’ve only heard of it as an investment opportunity; I had no previous professional knowledge of the company or their products. We were users of IBM DataStage. I still haven’t dug in deeply enough to know what makes Talend superior to any one of a number of ETL products.
Before I retired I was the System Architect for the data conversion team on one of the largest IT projects the company ever engaged in. We were converting all the mainline engineering/manufacturing systems, which were largely in-house developed (COBOL, VSAM, IMS and DB2), to COTS. One of the top executives likened the project to changing the tires on a car while traveling 60 MPH. I won’t delve deeply into the problems we encountered, but just as an indicator, we had no engineering BOM. As a legacy of WW2 we used a drawing tree rather than a part tree. It actually made sense at a time before computers, when virtually every product was a cookie-cutter copy of the previous one and we only had one customer (the USAF). Configuration and change management was a lot simpler. Almost all the design information required by the manufacturing engineers was carried in textual drawing notes.
As the company became a supplier of commercial products, C&CM became a nightmare. We seldom had two sequential products in the line with the same configuration. New customer introductions, or even changing the delivery order, led to very large non-value-added costs. The majority of the cost of data conversion was in manual data clean-up effort. All those notes had to be parsed and the data loaded into a relational DBMS. Our programmers did a pretty good job; even though the text was undisciplined, the note codes were meaningful, so they at least had an idea of content based on the note codes. Still, there was an enormous amount of data that had to be examined and deciphered with eyeballs and brains.