ChatGPT: What It Is and What It Can and Cannot Do
- January 24, 2023
- Snippets
The talk of the Internet these days (with spillover into traditional media) is ChatGPT, a large language model that is capable of producing remarkably human-like text. Trained on millions of human language documents, ChatGPT can generate impressive results from a simple prompt.
For example, I asked ChatGPT to “Write a rap song about trickle-down economics invoking Adam Smith and Ronald Reagan.” It came back with the following (from which two verses are excerpted):
Adam Smith and Ronald Reagan,
Two icons of the trickle down scene.
They preached the gospel of supply and demand,
And said the economy would be freed from government’s hand.

The theory goes like this: the rich invest,
And the benefits of their success will be expressed.
Through higher wages and more jobs, it’s a win-win,
But some say the reality is much more grim.
Not bad. ChatGPT can also explain the difference between a patent and a trademark:
A patent is a form of intellectual property that gives the person who holds the patent the exclusive right to prevent others from making, using, selling, and importing an invention for a limited period of time, usually 20 years from the filing date of the patent application. Patents are granted for new, useful, and non-obvious inventions or discoveries.
A trademark is a word, phrase, symbol, or design, or a combination of these, that identifies and distinguishes the source of the goods or services of one party from those of others. A trademark can be a brand name, slogan, logo, or other distinctive mark that is used in connection with a product or service. Trademarks are used to protect brand identity and ensure that consumers can easily identify and distinguish one company’s products or services from those of another.
Also not bad. In addition, ChatGPT can write text in the style of a particular author, essays on virtually any topic, and even computer code.[1] It can also answer questions. Thus, the text that it generates encompasses the functionality of search engines and Wikipedia, but adds a new aspect: it can create. That is where things get sticky. Not only are ChatGPT’s creations cogent enough to make schoolteachers reach for their aspirin bottles, but they may also suggest to naïve users that ChatGPT is intelligent, wise, and human-like.
While ChatGPT would not pass the Turing Test in its current incarnation, it is good enough to fool the easily fooled. But even if it quacks like a duck, that does not mean it flies south for the winter. ChatGPT’s internal operation is nothing like human language processing.
In short, ChatGPT is autocomplete on steroids. Underlying ChatGPT is a transformer-based large language model trained on a massive amount of text to predict the next word of a sentence given the first n words of that sentence. To get it to respond to queries, further training is applied: thousands of prompts and associated responses are generated (some automatically, some manually), and human reviewers rate the quality of each response with respect to its prompt. A reward model is then trained to predict how well a human would rate any given response, and ChatGPT is finally trained to generate responses that this reward model scores highly.[2]
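To make the “autocomplete on steroids” point concrete, here is a deliberately tiny sketch of next-word prediction. It is an illustrative assumption, not ChatGPT’s actual machinery: a bigram counter that conditions on only the single previous word, whereas the real transformer conditions on long spans of prior text using billions of learned parameters.

```python
from collections import Counter, defaultdict

# Toy bigram "autocomplete": predict the next word from raw frequency
# counts. This is a sketch of the core training objective (next-word
# prediction), not ChatGPT's architecture. The corpus is made up.
corpus = (
    "the rich invest and the benefits of their success will be expressed "
    "the rich invest in new jobs and the economy grows"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unk>"

# Greedy generation: repeatedly append the most likely next word.
text = ["the"]
for _ in range(6):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # prints: the rich invest and the rich invest
```

The greedy loop falls into a repetitive rut almost immediately, which is one reason real systems sample from a probability distribution rather than always taking the top word. But the basic operation is the same: continue the text with whatever was statistically likely in the training data.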
This does not resemble our current understanding of human cognition. For example, as far as we know human beings use cognitive models to represent real-world objects and their behavior. Based on these models, humans can make predictions of what these objects would do in new situations. Thus, a child might (hopefully) be able to infer that a stove hot enough to boil water will be painful to touch.
In contrast, modern computer language models struggle to make such predictions. This is a general problem in artificial intelligence: unless a model is specifically programmed or trained on data representing a scenario, its prediction of what may occur, or what it should do, in that scenario may be way off the mark.
For an example of this, ask ChatGPT to write a review of a well-known movie, such as Star Wars or Titanic. The result will be impressive. Then ask it to write a review of a lesser-known, newer movie. Prepare for disappointment. A number of comical and disturbing ChatGPT fails have been noted.
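That movie-review flop is the out-of-distribution problem in action, and the underlying phenomenon is easy to reproduce in miniature with a classical model. The following sketch uses assumed toy data (a noisy sine wave) and says nothing about how ChatGPT itself is built; it simply shows a model that looks accurate inside its training range going badly wrong outside it.

```python
import numpy as np

# Toy out-of-distribution illustration (assumed data, not ChatGPT's
# architecture): a model fit only on x in [0, 10] can be wildly wrong
# when asked about x = 100.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = np.sin(x) + rng.normal(0, 0.1, 50)  # underlying pattern: a sine wave

# A degree-9 polynomial tracks the sine well inside the training range.
model = np.polynomial.Polynomial.fit(x, y, deg=9)

print(model(5.0))    # roughly sin(5) ≈ -0.96: in-distribution, sensible
print(model(100.0))  # astronomically large: out-of-distribution, nonsense
```

A language model handed a prompt unlike anything in its training data is in an analogous position: it will still produce fluent output, but with nothing to keep that output sensible.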
Further, human beings have a theory of mind: we assign mental states to other persons in order to understand their behavior. ChatGPT does not do this and therefore cannot modulate its output based on the emotions exhibited by a user. Although ChatGPT can provide you with a definition of a person, it does not actually understand what a person is, and it lacks the ability to emote. It may be able to simulate various emotions, but it cannot have emotions of its own or exhibit true empathy.
Moreover, ChatGPT does not know whether it is generating text that represents the truth. It has no concept of “truth” and no way of verifying that what it is saying is accurate, much less a moral compass to guide its actions.[3] And it has been known to generate very convincing falsehoods.
ChatGPT can even draft patent claims. I asked it to “draft a patent claim for a mechanical pencil” and it produced the following:
A mechanical pencil comprising a barrel, a lead advance mechanism, and a tip assembly, the barrel having a cylindrical shape and containing a lead supply chamber, the lead advance mechanism located at a proximal end of the barrel and adapted to selectively advance a lead from the lead supply chamber to the tip assembly, the tip assembly comprising a sleeve and a tip that is rotatably supported by the sleeve, the tip having a lead receiving portion and a lead dispensing portion, the lead receiving portion adapted to receive a lead from the lead advance mechanism and the lead dispensing portion adapted to dispense the lead for writing.
Again, not bad. And not surprising considering that it was trained on voluminous writings of the past (apparently including patents and/or patent applications). If you ask ChatGPT to draft a claim to a new invention (you know, the novel and non-obvious stuff that by definition was not in its training data), lower your expectations. So patent attorneys can breathe a sigh of relief, as our jobs are not in danger…yet.[4]
ChatGPT’s creations are merely sophisticated pastiches of the writing on which it was trained. So don’t ask ChatGPT for psychological help, dating advice, or whom to vote for. And if you do, take its output with a huge dose of skepticism. ChatGPT is not Skynet or HAL 9000, but it also is not a trustworthy digital assistant. Like all artificial intelligence to date, it is very good at a limited number of tasks and embarrassingly bad at the rest. Keep that in mind when you use it.
Michael S. Borella, Ph.D., is Chair of MBHB’s Software and Business Methods Practice Group.
[1] On the other hand, I asked ChatGPT to “Write an article about Alice v CLS Bank in the style of Kurt Vonnegut,” and the result was rather bland with mild sarcasm sandwiching a textbook description of the case. So it goes.
[2] See https://pub.towardsai.net/chatgpt-how-does-it-work-internally-e0b3e23601a1 for a more detailed technical description.
[3] If you ask ChatGPT, it will admit that this is the case.
[4] On the other hand, ChatGPT may very well be useful in helping us write the backgrounds of patent applications or describe known technologies.