Requirements Drivel and AI in Corporate America

Efforts to develop Artificial Intelligence (AI) capabilities to create software recently generated headlines when stories were published describing how OpenAI built a coding tool named ChatGPT by generating AI rules from scanning millions of lines of source code housed in repositories within GitHub. GitHub was purchased by Microsoft in 2018 and Microsoft not only allowed the scanning of these GitHub repositories for this purpose, it invested in OpenAI to fund the effort. The main concern raised with that news was that such a capability would essentially allow the creation of new, derivative works using patterns -- if not complete actual snippets of source -- from other parties' copyrighted code without prior permission or even compensation.

OpenAI is now attempting to augment the training process for its product with an additional approach, summarized on the news site Semafor:

OpenAI's new idea? Hire over one thousand developers to solve a variety of "canned" programming problems, have those developers create a prose description of how they defined the problem and structured their code to solve it, then feed the code and the prose into databases to be scanned by training algorithms for ChatGPT. The idea behind this approach is to create a better mapping BACKWARD from working code examples to higher level non-technical "asks" that might come from "business owners" to further streamline the process and make development faster and less expensive. Extrapolated to an extreme, a result of this approach could be to allow a non-technical "business owner" to type in a description of a capability they want in their lingo and map it to actual code / data concepts required to implement and build it.

At first glance, this idea of enhancing AI training by having it look further up the development cycle closer to business users makes sense. It might even help avoid legal disputes over copyright violations stemming from focusing solely on source patterns, or make it easier to deflect copyright claims in court.


The effort involves having the DEVELOPER solve the problem then provide a PROSE explanation on how they approached the problem then feeding (code + developer prose) into the AI. Can anyone spot the problems with that?

First, the prose being fed to train the AI comes from the DEVELOPER, not the original "business owner." That's not how most large scale software systems are created. Developers are the LAST in a line of specialties involved in translating someone's "requirements" prose from one vernacular (perhaps the "business owner's" requirements writers) to another (perhaps the delivery team's business analysts) to another (the enterprise architects) to another (the solution team's architects) and finally to the developers themselves. Every one of those handoffs requires re-interpretation and introduces new semantics which can obscure or distort intent.

The second problem is that not all developers excel at explaining in prose WHAT they are building, WHY they are building it that way and HOW it will actually work. The code may be perfect, but there is no guarantee anyone outside the developer's head will be able to understand the solution from the developer's description of it. Granted, some of the best developers have TREMENDOUS communication skills on top of their coding skills, which is part of what makes them such good developers. Unfortunately, not everyone works for Lake Wobegon Systems, where all of the developers are above average in this area. In many cases, the prose created to explain the what / why will likely be so poorly structured it could prevent an AI from accurately parsing / extrapolating it to new scenarios.

Thirdly, companies building imaginative new systems with Google-level engineering teams likely have better engineers and better requirements writers to use as training fodder, but these upper-crust firms are probably the least interested in using this type of AI. Their teams can create a web service in a few days and launch a new feature on their customer portal in a few weeks. The target market for these AI capabilities is the typical Fortune 500 firm needing ongoing development of tools for internal HR / accounting / analytics systems. Those systems are typically controlled by departmental executives who

  • have little or no understanding of their existing or desired business processes
  • have little or no understanding of the technical aspects of their existing or desired business processes
  • have little or no interest in learning about the details to provide better process designs or requirements
  • don't trust the team within the company formally assigned to do development / integration work on their behalf
  • often have their own parallel sets of business owners, project managers and business analysts to provide direction to the actual development team

This is a horrible environment in which to work for all parties involved, but it is driven by the turf battles which inevitably crop up when leaders value control and autonomy (or the illusion of control and autonomy) over the efficiency, cost effectiveness and security of the larger firm. It is these layers of overhead for "client" ownership / project management / business analysis, added to the "delivery organization" ownership / project management / business analysis, that turn $500,000 projects into $2,000,000 projects, or $3,000,000 projects into $15,000,000 projects delivered months / years late with only small fractions of the originally promised functionality. Until those duplicate resources and the underlying turf battles that created them are eliminated, pointing AI systems at the drivel that passes for "requirements" in Corporate America will barely move the needle on any measure of quality, functionality or cost.



@WatchingTheHerd this is not my field of expertise, but I wonder…

How does your Scenario 1 (“OpenAI built a coding tool named ChatGPT by generating AI rules from scanning millions of lines of source code housed in repositories within GitHub”) differ from your Scenario 2 (“Hire over one thousand developers to solve a variety of “canned” programming problems, have those developers create a prose description of how they defined the problem and structured their code to solve it, then feed the code and the prose into databases to be scanned by training algorithms for ChatGPT.”)?

Don’t all the turf battles and misunderstandings apply equally to both scenarios?



The approaches are similar in that they are attempting to select prior source code for re-use when solving a new problem. The difference is in the level of abstraction of the input used to drive the search for the re-usable code. In the current ChatGPT incarnation, the AI is watching a developer as they type lines of source code that the AI recognizes as a common pattern. For example, the AI might be watching a Java developer type this code:

public static List retrieveCds() throws SQLException {

String selectSQL = "select * from cds";
List listCd = jdbcTemplate.query(selectSQL, new RowMapper() {

and realize the developer appears to be writing a subroutine using a JdbcTemplate library to fetch rows from a table in a database. The AI has seen THOUSANDS of source code files using the JdbcTemplate class and might immediately suggest an entire block of code that looks like this:

public static List&lt;Cd&gt; retrieveCds() throws SQLException {

    String selectSQL = "select * from cds";
    List&lt;Cd&gt; listCd = jdbcTemplate.query(selectSQL, new RowMapper&lt;Cd&gt;() {

        public Cd mapRow(ResultSet rs, int rowNum) throws SQLException {
            Cd cdObj = new Cd();
            cdObj.setCdBank(rs.getString("cdbank"));
            return cdObj;
        }
    });
    return listCd;
}
But that is all based on a developer already being in the weeds of development and knowing they need to write a “Data Access Object” class (DAO) and having spent time starting to do that. At that point with existing methodologies, weeks or months could have already been spent debating other forms of “requirements.”

The new approach is having the developer summarize what they were doing in prose, which might read like this:

Logic needs to select a list of financial CDs from a table matching certain criteria and convert them from database rows into JSON encoded strings to return through a SpringBoot based web service controller.

However, even that high level description of what the developer did is still several layers of abstraction away from the way a system architect or business owner would describe the problem. The system architect might describe that same task this way:

Code and deploy a web service that accepts customer account identifiers and optional selection criteria and returns the JSON encoded list of matching responses, exposed on a proxy web service so our internal customer service rep portal and the customer self-help portal can use the same service. Protect the proxy service with OAuth to ensure only authorized agents can see customer CDs and ensure customerA can never inadvertently retrieve CDs owned by customerB.

The business owner’s requirements would be even more abstract, maybe something like this:

Fetch customers’ CD records and make them available for agents and customers.

The problem is that WEEKS of time can be spent at those higher levels of abstraction making them MORE concrete so actual code can be written without inadvertently introducing security flaws or missing what the business owner really wanted because they didn’t clearly communicate how the function should operate. Pointing the AI at the developer’s prose view of their own work doesn’t eliminate the many middlemen who are bloating costs, AND pointing the AI at the developer’s prose won’t do much good if those developer prose summaries are of poor quality / consistency.

It’s also worth mentioning that many developer teams operate across 24 time zones and reflect a United Nations level of cultural and first-language diversity, which makes prose descriptions more dependent upon consistent use of language idioms and technical terminology. Even with the technical terminology, it is COMMON for software architects and developers who don’t REALLY understand the libraries and technologies to SAY they are using a certain pattern when they are actually MIS-USING it. For example, many developers say they are writing REST based web services but in fact ignore many of the concepts called for by REST conventions when doing so. Their “prose” may say they wrote a REST-oriented web service when in fact the resulting code does not comply with those conventions.
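To make the REST point concrete, here is a minimal hypothetical sketch (my own invented example, not taken from any real codebase or tool discussed here) of a checker that flags two classic "REST in name only" smells: GET requests with side effects, and verbs embedded in resource paths:

```java
import java.util.ArrayList;
import java.util.List;

public class RestSmellCheck {
    // Hypothetical endpoint descriptor: HTTP method, URL path, and whether
    // the handler mutates server state.
    record Endpoint(String method, String path, boolean hasSideEffects) {}

    static List<String> findViolations(List<Endpoint> endpoints) {
        List<String> violations = new ArrayList<>();
        for (Endpoint e : endpoints) {
            // REST reserves GET for "safe" reads with no server-side state change
            if (e.method().equals("GET") && e.hasSideEffects()) {
                violations.add(e.path() + ": GET with side effects");
            }
            // Verbs in the path signal RPC-over-HTTP, not resource-oriented REST
            if (e.path().matches(".*/(get|delete|update|create)[A-Za-z]*.*")) {
                violations.add(e.path() + ": verb in resource path");
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        List<Endpoint> api = List.of(
            new Endpoint("GET", "/cds/123", false),        // resource-oriented, safe
            new Endpoint("GET", "/deleteCd?id=123", true), // "REST" in name only
            new Endpoint("POST", "/cds", true));           // fine: POST may mutate
        findViolations(api).forEach(System.out::println);
    }
}
```

A toy like this obviously can’t catch every misuse, but it illustrates how the prose claim ("we wrote a REST service") and the code reality can diverge, which is exactly the kind of inconsistency that would pollute training data.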



I am in awe of how well you described that.
YOUR prose was a vivid description of the issue and your code an excellent choice - easy to see and follow why the AI approach is handcuffed from the start.

And this from the OP describes the daily life of IT development in corporate America to a ‘T’…



If I can ask, what are you seeing on IRR?

Or, they are building a system that goes up the normal food chain, like things are done today with humans. Start with sales and marketing and product research, move to requirements of various levels of detail, then to coding. You don’t think they can go straight from business leader to generated code in one step, do you? (You can’t; this will require layers of models, each feeding the next.)


Thanks so much @WatchingTheHerd for this detailed post!! As a former IT career guy, I’m wondering how the above code is solved “generically”. Throughout my career, every client that I worked with named their database columns differently, and/or used a similar column name whose definition was slightly different from another client’s. The only standardization I ever saw across clients was when they were using a standardized data model such as SAP, where the column names were all the same and the data in the columns meant the same thing.

Take for example, a column by the name of “gross_sales”. Depending on the business, the values in that column could very well have different meanings. Does “gross sales” include sales tax amount or not? etc…

I’m wondering how “generic” code could be mapped to individual database columns without someone mapping the generic code column names (such as “cdbank” in your example above) to what might be a column named “Originating Bank Code” which might be the name in the database that the code is designed to work on?

This seems no different than the challenges that most IT folks and the folks that set out the requirements face - you still need to understand the underlying data structures, relationships, and column definitions of the data that you want to work on in order to produce accurate results that have true business meaning. This is the hard part that I think people might be missing.
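To put a concrete (and entirely made-up) example behind the “gross_sales” point: suppose Client A stores gross sales excluding tax while Client B’s column includes an 8% sales tax. The client labels and the 8% rate are invented; the point is that the numbers only become comparable after a human encodes that business knowledge:

```java
// Hypothetical: two clients both have a "gross_sales" column, but only one
// includes sales tax in the stored value. The 8% rate and client labels are
// invented for illustration.
public class GrossSalesExample {
    // Convert Client B's tax-inclusive figure to the tax-exclusive convention
    static double normalizeClientB(double grossSalesWithTax, double taxRate) {
        return grossSalesWithTax / (1 + taxRate);
    }

    public static void main(String[] args) {
        double clientA = 100.0;                         // tax already excluded
        double clientB = normalizeClientB(108.0, 0.08); // strip the 8% tax
        // Only after normalization do the two columns mean the same thing
        System.out.println(Math.abs(clientA - clientB) < 1e-9); // prints true
    }
}
```

No amount of pattern-matching on column names alone can discover that divide-by-1.08 step; it lives in someone’s head or (if you are lucky) in a data dictionary.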

I’m interested in people’s thoughts on this. Maybe I’m just another IT dinosaur nowadays!

A few answers to questions…

Leap1 – I have yet to meet anyone in IT roles (in formal “IT” departments or doing equivalent work as “shadow IT” in other departments) who bothers attempting to estimate an Internal Rate of Return on doing development internally versus externally or an IRR on an overall project at all. 100% of the project work I’ve seen treated the decision to DO or NOT as a given and only asked for design / dev / test costs as a budgeting exercise. Virtually NONE of the large projects ever asked for a recurring cost of managing the system beyond the first license period for any commercial software used. Of course, that makes “IT” a “cost center” in outlying years with budgeteers constantly trying to trim costs by simply refusing to support recurring licensing / support costs of applications previously approved.

bjurasz – You are correct. In a requirements chain that looks like this

client —> client BA —> delivery BA —> delivery enterprise architecture —> delivery solution architecture —> developer —> code

the current ChatGPT focuses on automating some of the labor in that last (rightmost) arrow. This new approach of having the developer summarize in prose what they did, to make the code more searchable by the AI in human terms rather than technical terms like “DAO”, “DTO”, getter / setter, JWT token, etc., is not the final target of language mapping. Eventually, the goal would be to “shift left” closer to the terminology used in the ask of the original client.

38Packard – You are addressing one of the biggest problems with software design. As an example, there are existing utilities for Java such as Hibernate that focus on simplifying “database persistence”, meaning the grunt work of mapping a model of some business object like a financial CD as viewed by a human user into a memory model passed around between layers of code, and from there into a database model optimized for storage. Even that final DB layer gets complicated, because storing 10 million records efficiently in a relational DB like Oracle can be vastly different from using a newer NoSQL technology like Cassandra or MongoDB or Redis.

These libraries make assumptions about naming of variables both in the database and in the programming language so it can quickly perform mappings like the above. These conventions can speed productivity greatly IF you are dealing with a “greenfield” application AND IF you are willing to comply with the library’s naming conventions. If instead you are dealing with an existing relational DB designed 15 years ago that already has five other systems consuming it, you’re not going to redesign that database model and column names in the tables, you have to meet the mountain on the mountain’s terms, at which point these tools become very tedious to configure.
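As a concrete (hypothetical) sketch of that trade-off: a convention-based mapper handles a greenfield camelCase-to-snake_case world in one line, while the 15-year-old schema forces hand-maintained overrides. The “Originating Bank Code” column and cdBank field names here echo 38Packard’s example; the code is my simplification, not Hibernate’s actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not Hibernate's actual API) of convention-based
// column mapping plus explicit overrides for a legacy schema.
public class ColumnMapper {
    // Hand-maintained exceptions for columns named before any convention existed
    private final Map<String, String> overrides = new HashMap<>();

    public ColumnMapper() {
        // e.g. the code's "cdBank" field lives in a legacy column
        overrides.put("cdBank", "Originating Bank Code");
    }

    // Default convention: camelCase field name -> snake_case column name
    public String columnFor(String fieldName) {
        if (overrides.containsKey(fieldName)) {
            return overrides.get(fieldName);
        }
        return fieldName.replaceAll("([a-z])([A-Z])", "$1_$2").toLowerCase();
    }
}
```

On a greenfield schema the override map stays empty; against the 15-year-old database it grows until the “convention” is mostly exceptions, which is exactly the tedium described above.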


The point of all of this is not to bore people with inside baseball debates raging in the software world. The point is that, in my view, efforts at AI based automation of software design and development are NOT likely to succeed over the short term at actually replacing developers and designers with a wizard that asks a client WHADDAYAWANT? and spits out fault-tolerant, secure solutions that meet Fortune 500 needs. But that will not stop firms from attempting to sell solutions based on this promise, and it won’t stop Fortune 500 executives from gambling a big project on promises from such vendors and deploying something that fails on every conceivable success criterion.



All aspects of the digital arts now involve AI, from a few components of the processing to the entire work being partially decided and put together by AI.

Even as a person using only digital means, my GPU has AI components that make up the difference between my ideas and the end result as I want it. Because digital art has technical limitations, AI computing can overcome those problems.

If I am making an animation, carrying one frame to the next is improved by an AI component.

@WatchingTheHerd outstanding responses. I’ve been a database, UI, API, workflow developer and systems project manager for >30 years. I can’t wrap my head around the hubris of declaring this model could work except on an extremely limited problem space, 1:1. Our business people now would feed into GPT something like “I need a 150-item questionnaire with yes/no questions, optional toggle choices and some numeric entry fields with responses fed to a proprietary analytics model to score, and fed to a materialized data matrix graphics page for review of scores in prioritized order. Oh, and we need that in a month.”


Agreed on the limited circumstances but it gets interesting in some ways.

For classical odds, like playing with dice or other games, absolutely.

Self-driving cars, maybe, even with the accidents involved. First, human drivers might be worse. Next, Tesla insuring their own cars means a profit to Tesla while building a perhaps safer driving car than all the ICE autos out there. I do not know if we are there yet, but we do not have to be fully there.


As someone from outside the IT world, I have a couple questions.

First, it sounds to me like this project is basically asking developers to contribute to their own demise. They’re being asked for information to train an AI so that the AI can do their job. What incentive do they have to provide good information instead of bad information that might sabotage the project and protect their own jobs?

The other is more of a thought. The chain of requests from higher level business requirements to what the developer is told sounds like the old game of telephone. The message gets changed along the way. But if you are trying to replace a coder with an AI, doesn’t it make sense to ask the coder to explain what they understood when looking at what code they wrote? After all, they are going to write code based on their understanding of the requirements, not necessarily on what the next person up the chain thought they were asking for.



That is exactly what is going on there. I remember 20+ years ago when the mantra was “train your new replacement before we let you go”. Now it’s “train the AI before we let everyone go”.

I’m not sure how far this will go, honestly. Some in-the-loop people I know are saying that possibly 20% of all new code today is already generated code. That number is expected to rise. And some are calling this the Death of Programming. (Obviously, we’ll still need some programmers, but possibly not the hordes of them we have today.)

Seriously, glad I’m within 10 years of retiring. And not certain I can tell my 11 year old daughter that programming is a future career for her.

Interestingly, just saw this article from The Atlantic that came out a week ago. I don’t think it’s behind a paywall.

From the beginning of the article:

In the next five years, it is likely that AI will begin to reduce employment for college-educated workers. As the technology continues to advance, it will be able to perform tasks that were previously thought to require a high level of education and skill. This could lead to a displacement of workers in certain industries, as companies look to cut costs by automating processes. While it is difficult to predict the exact extent of this trend, it is clear that AI will have a significant impact on the job market for college-educated workers. It will be important for individuals to stay up to date on the latest developments in AI and to consider how their skills and expertise can be leveraged in a world where machines are increasingly able to perform many tasks.

There you have it, I guess: ChatGPT is coming for my job and yours, according to ChatGPT itself. The artificially intelligent content creator, whose name is short for “Chat Generative Pre-trained Transformer,” was released two months ago by OpenAI, one of the country’s most influential artificial-intelligence research laboratories. The technology is, put simply, amazing. It generated that first paragraph instantly, working with this prompt: “Write a five-sentence paragraph in the style of The Atlantic about whether AI will begin to reduce employment for college-educated workers in the next five years.”

The article goes on to say that AI is coming, but major shifts in technology adoption leading to productivity improvements have historically taken decades to become meaningful contributors to the overall economic picture. It’s also true that the rate of change seems to be accelerating, and with the introduction of AI, new ways of learning / working will allow even faster realization of anticipated productivity improvements.

If I were a betting man, which I’m not, but if I were, I would put my money on AI making significant dents in many occupations that lend themselves well to AI technology, as cited in the article, and this could happen much sooner than we think.




I am thinking call centers and medical problems.

Both of which were often easy to outsource.

X-rays read in Australia overnight as someone lies on a table in an American hospital after midnight. X-rays taken at any time of day can be read more accurately by a machine, given time.

Or people in India answering mid-level software and hardware problems in haphazard ways that at times won’t help the consumer. AI will narrow the problem down and return simple but full email responses with a couple of pictures on how to fix the problem. Or, using SMS(?), take over your PC and fix it.

An analogy occurred to me earlier today as I thought more about where AI might be adopted versus where it might succeed or fail catastrophically.

At its core, AI is an algorithm for running a collection of algorithms that

  1. map inputs reflecting some problem / task into models representing physical attributes (dimensions, weight, position, color, texture, etc.) or conceptual attributes (time, money, ownership, etc.) or events related to these other attributes
  2. devise models to represent prior “best practice” decisions about how different combinations of those inputs should be mapped to some decision or output
  3. train that mapping model by importing thousands / millions of data points of (prior inputs + prior decisions) and having the algorithm create mapping tables
  4. have experts review the derived mappings as a sanity check
  5. collect inputs for a NEW problem / task, convert them into the model format, then feed that to the AI algorithm to determine a new decision
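The five steps above can be caricatured in code. This toy (purely illustrative; real systems train statistical models, not lookup loops) encodes inputs as numeric feature vectors, “trains” by storing prior (input, decision) pairs, and decides a new case by nearest-neighbor comparison, with the expert-review step reduced to a comment:

```java
import java.util.Map;

// Toy caricature of the five steps; a real AI trains statistical models,
// but the shape of the pipeline is the same.
public class TinyDecisionModel {
    // Step 3: "mapping table" of prior feature vectors to prior decisions
    private final Map<double[], String> trainingTable;

    public TinyDecisionModel(Map<double[], String> priorCases) {
        this.trainingTable = priorCases; // Step 4 (expert review) omitted here
    }

    // Steps 1 and 5: map raw attributes into the model's feature format
    public static double[] encode(double weight, double size) {
        return new double[] { weight, size };
    }

    // Step 5: decide a NEW case by finding the closest prior case
    public String decide(double[] input) {
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (Map.Entry<double[], String> prior : trainingTable.entrySet()) {
            double dist = 0;
            for (int i = 0; i < input.length; i++) {
                double d = input[i] - prior.getKey()[i];
                dist += d * d; // squared Euclidean distance
            }
            if (dist < bestDist) {
                bestDist = dist;
                best = prior.getValue();
            }
        }
        return best;
    }
}
```

The point of the analogy survives the simplification: when inputs are objective numbers, the lookup is trustworthy; when inputs are subjective or adversarial, garbage goes into encode() and garbage comes out of decide().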

This process can work well when the inputs are primarily physical and objective. It may encounter dangerous problems for models where inputs are subjective or the target of the decision is – ummmm – a human who is capable of purposely providing false inputs.

For an extreme example, it might be possible for an AI to replace an orthopedic surgeon. Why? Knee problems are nearly 100% PHYSICAL. They can be examined objectively with X-rays and MRIs to diagnose physical bone structure and contact points, bone density issues can be detected from those tests, and even tissue / meniscus problems can be accurately captured to create an accurate diagnosis. Couple a digital MRI scan with pictures accurate to a millimeter with CNC controls and a DaVinci robot and it might be possible to diagnose a torn up knee and drive a DaVinci surgical robot to perform the corrective surgery. There are a few conditions that might generate pain but not appear on X-rays or MRIs but orthopedic surgery in general tends to be very concrete / objective in diagnosis and correction.

In contrast, it might be far more difficult to devise an AI to diagnose mental health issues. Why? Because the target of the AI – the patient – might be incapable of accurately describing their symptoms (like a child) or might attempt to mask their mental state out of shame or criminal intent (as an adult, or even as a child). When model inputs are SUBJECTIVE and can be distorted / colored by language semantics, it becomes vastly more difficult to model inputs and encode past results in order to “train” the algorithms.

Again, this is not to say that AI attempts won’t be made in ill-suited disciplines. Where corporations can shave costs by taking humans out of the product or service, they will attempt to do so. Success, however, will likely elude such attempts in “gray area” fields.



In January 2011, I went in for spinal surgery on a near emergency basis. I had severe spinal stenosis in my neck. The spinal cord was being compressed to 4mm, which is a lot. Everyone was surprised I was still buttoning shirts and gripping things. Anyway, the MRI and the CT myelogram told the surgeon that a lot needed to be done, including removal of some hard tissue. A 5 hour surgery was planned that involved work from the front, then from the back, including some scaffolding. Surgery starts, the surgeon gets eyes on the inside of me, realizes it’s not that bad, and a more-or-less normal fusion of C4-5-6 was all that was needed. Surgery was just over 3 hours long.

Glad I had a rock-star human surgeon and not the robot.


A group of 1000 radiologists worked with IBM’s Watson to read scans over a period of 6 years. My long time friend is a mammography specialist within the group. He mentioned how remarkable the Watson algo was towards the end of the training. In the beginning it was sort of good, reading 9 out of 10 correctly with no other mistakes.

Those mistakes were usually an anomaly which was identified but not diagnosed.

Towards the end, Watson read correctly more than 99% of the time. Clinically, all reads were supervised and redundant.

Merative recently bought the technology to add to their portfolio of medical consulting services.


I believe the difference is pretty obvious. Code in GitHub is not available to anyone who wants to use it; it still must be licensed. The fact that it is “public” is irrelevant, somewhat analogous to having books in the library, which doesn’t mean you are able to copy them and exploit them for profit.

This proposal is to “hire” developers for the stated purpose of allowing AI (down the road) to clip, snip, or use the code in any way it wants. That is quite different from AI using, without permission, existing code from whatever source it wants to.

The “explaining in prose” is a minor red herring; AI is currently repurposing bits of code in its “new” creations without any such prose. The “explanation” would merely add another layer of understanding so that the AI would have a further source of information for working upstream or finding appropriate routines. (I have read prose written by coders; it sometimes is helpful, and often is not pretty.)

It’s merely the issue of “using without permission” versus “creating a library (and paying for it) for the express purpose of learning, or partial or wholesale appropriation.”


Elon Musk Joins the AI Team. Recruiting staff for AI projects.
