Generative AI Finally Making More Sense By Interlacing Conversations As Showcased Via The Newly Announced … – Forbes

In today’s column, I take a close look at a somewhat surprising problem that most users of generative AI might not realize they face. Once they become aware of this issue or limitation, they are often puzzled and tend to express a sense of irksomeness and dismay.

What is this semi-hidden vexing problem?

I am referring to the inability of many generative AI apps to allow an intermingling or interlacing of the various, and at times numerous, online text-based conversations a user has while using the generative AI for interactive text-oriented discussions. The key here is that your conversations are usually considered completely distinct and separate from each other. Nary two or more conversations shall meet, as it were. This restricts or locks your mindful discussion into a given conversation and does not allow its tidbits and keystones to see the light of day in another conversation that you might be having with the generative AI.

This is a really, really, really big sticking point, whether you knew about it or not, and has stoked fervent efforts behind the scenes by AI researchers and AI developers to find suitable means to allow a user’s conversations with generative AI to be sensibly commingled. One such early-version solution has recently and prominently been announced, and I will be walking you through the particulars of that pronouncement. For my ongoing and extensive coverage of the latest in AI and what’s happening, see my column analyses at the link here.

Let’s dig into this topic of conversational interlacing and see what we can fruitfully uncover.

When Conversations Are Isolated Like Soundproof Rooms

Allow me a moment to explain the conundrum of whether conversations are distinct from each other or are commingled or interlaced.

You probably know that when using a generative AI app such as the widely and wildly popular ChatGPT by AI maker OpenAI, you do so via what are referred to as “conversations”. A conversation consists of you entering prompts and the generative AI generating responses to your prompts. The idea is that this is akin to having a conventional type of conversation with a human, namely that the accepted norm of how we converse with each other as humans consists of taking turns when expressing ourselves.

Each time that you log in to a generative AI app, you either start a new conversation or continue a previously saved conversation. While using the generative AI app, you can also opt to begin a new conversation and thus momentarily switch away from a conversation you were already engaged in. Usually, the generative AI will automatically save your conversations, ergo you can switch back and forth between a multitude of such conversations and not worry about them being inadvertently cast asunder or otherwise lost.

But here’s the rub.

Whatever you indicated in one conversation is not automatically carried over into another conversation. Think of each conversation as existing in a tightly enclosed soundproof room. There is no leakage or slippage of what you said in one conversation that makes its way to another conversation. Therefore, each conversation that you start anew begins with an entirely fresh basis (there are some exceptions to this, which I’ll mention later herein).

Is this distinctiveness or strict independence of conversations a good thing or is this a bad thing?

Well, it depends.

I will use human-to-human conversations as a crude analogy. I do so with some reluctance because I abhor the anthropomorphizing of AI. I have repeatedly bemoaned in my column that people tend to ascribe human sentience qualities to today’s AI. This is unfortunate and quite misleading. For my in-depth discussions about the reasons why this is wrong to do, see the link here and the link here.

Anyway, I will briefly refer to human-to-human interactions merely to get you into a proper frame of mind about the topic at hand. Do not let that slop over into assuming or thinking that generative AI is sentient, thanks.

I am going to provide you with a human-to-human conversational scenario. Ready? Okay, you come upon a complete stranger while riding on a train. The two of you engage in a conversation. You’ve never met each other before.

Is there any prior conversation between the two of you that can carry over into this newly begun conversation? Nope. You and the other person have never had a conversation and thus there is nothing that can be borrowed or gleaned from prior discourse between the two of you.

In one sense, you might delight in such a fresh conversation. There is no prior conversational baggage of things that the two of you may have previously disagreed on or quarreled over. The grass is green. The landscape is fresh. The two of you can flow in whatever direction the conversation will take. Whew, a refreshing time is to be had.

There is unfortunately a downside. You and the other person must expend a lot of energy on establishing common ground. Do you like sports, the other person asks. No, is your response, so you ask if the other person likes to cook. No, they respond, but do you like to drink fine wine? On and on this goes.

Imagine that the two of you have quite an enjoyable conversation throughout the train ride, lasting about an hour. You discover each other’s interests and predilections. The other person is quick to get snippety whenever you bring up the topic of relationships, so you make a mental note about that facet. Meanwhile, the two of you relished talking about clothing. You probably spent a third of the time talking about the latest fashions.

A week goes by, and you’ve generally dealt with many other hectic elements in your life. The chance meeting on the train is now a distant memory. Many more fish need to be fried, as they say. It was a presumed one-and-done random encounter.

Lo and behold, you get onto that train again for your commute and you by happenstance see that same person. The two of you decide to sit with each other.

I want to ask you a question that might seem silly or maybe even insulting. It is not meant to be.

Would the two of you be starting utterly fresh and act as though you have never had any prior conversation, or is it much more likely that the two of you would engage with each other and do so while having in the back of your minds the myriad of points that were made last time?

I wager that the two of you would nearly immediately have in mind a slew of topics and elements that were percolating at the ready because of your prior conversation (i.e., don’t talk about relationships, do talk about fashion). This is naturally expected. It happens all the time. Past conversations with a person are likely to float around in your head. Maybe not every tiniest detail, but some of what was discussed is still in your noggin, even if hazy. A bunch of factors come into play such as how long ago you conversed with the other person, the length of the conversation, the nature of the conversation, etc.

A smarmy reader might argue: suppose that one of you has had amnesia in the intervening week. Or suppose that one of you is a secret spy and wants to keep their cover from being blown, so they insist that the two of you have never met before. Yes, sure, we can concoct all manner of fanciful reasons to declare that the two of you won’t recall even an iota of the prior conversation.

Let’s be real, shall we, and dispense with those rare exceptions or contrived possibilities.

The human norm is that, on a human-to-human conversational basis, you almost certainly will allow prior conversations to enter into, or some say bleed into, new conversations with that same person. Our conversations at one point in time with someone will intermingle with future conversations with that same person. Again, not everything gets carried over. You might have forgotten some aspects, you might misremember some aspects, or you might confuse a completely different conversation that you had with someone else and mistakenly believe it was with the person you are currently chatting with. We make conversational gaffes like that all the time.

Mull over the next and abundantly serious point.

Which would you prefer: a world in which you always begin completely fresh with every person in your life, acting as though you had never had any previous conversations, or one in which conversations persist over time, with prior conversations brought forward into the moment at hand?

The answer, nearly of necessity, is that leveraging prior conversations is the sensible way to go. I dare suggest that if we were all stuck with having to start each conversation brand new, the amount of time and effort needed to bring each other up to speed would be enormous. It would be exhausting. It would be limiting.

Aha, you have now landed where I wanted you to land.

Most of today’s generative AI apps are devised so that each conversation is completely distinct and separate from any prior conversations you have had with that generative AI. You start utterly fresh with each new conversation.

A lot of users of generative AI don’t realize that this limitation exists. In some sense, they don’t experience the limitation because they always start a conversation on a new topic and do not care whether any prior conversations with the generative AI are carried into the new discussion. They probably assumed that conversations were distinct anyway. No big deal, they might have thought, it’s just the way the AI works. Live with it and don’t carp about it.

Other users have observed the limitation. For them, the whole thing stinks. They want their prior conversations to carry over into their new conversations. The problem they face is that they must essentially reinvent the wheel for each new conversation. They have to expend a lot of the new conversation toward bringing the generative AI up-to-speed. This is tiring, requires a lot of excess typing, and can be seemingly needlessly costly if you are paying for the use of the generative AI (some generative AI apps charge based on the number of tokens or words entered and produced, or the number of turns taken).

An exception of sorts to this dilemma is that you can give the generative AI an overall indication about your conversations that you want to have globally utilized across all your conversations, sometimes referred to as establishing prompts or custom instructions, see my coverage at the link here. For example, you can set up your generative AI so that each time you log in, the establishing prompts might state what you like and don’t like, what your age is, and other personal aspects that are to be immediately included in any new conversation that you start up.

Those establishing prompts are handy but do not overcome the issue of having conversations that are distinct from one another. Your conversations can essentially borrow from the establishing prompt, but each conversation is still an island unto itself as to any flow over into other conversations. Nothing escapes a given conversation.
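
To make the establishing-prompt mechanics concrete, here is a minimal sketch of how a global custom-instructions string might be prepended to every fresh conversation while the conversations themselves remain walled off from one another. The message-list shape loosely mirrors common chat APIs; the function names and the instructions text are hypothetical, not any particular vendor’s implementation.

```python
# Hypothetical global custom instructions, reused by every new conversation.
CUSTOM_INSTRUCTIONS = (
    "I am a project manager. Keep answers brief. "
    "I prefer metric units and dislike sports analogies."
)

def start_new_conversation() -> list[dict]:
    """Each fresh conversation begins with the same global instructions,
    but carries nothing over from any other conversation."""
    return [{"role": "system", "content": CUSTOM_INSTRUCTIONS}]

def add_user_turn(conversation: list[dict], prompt: str) -> None:
    conversation.append({"role": "user", "content": prompt})

chat_a = start_new_conversation()
add_user_turn(chat_a, "Tell me about Abraham Lincoln.")

# A second conversation gets the same establishing prompt, yet knows
# nothing about what was discussed in chat_a.
chat_b = start_new_conversation()
```

Note how the establishing prompt is the only thing the two conversations share; nothing said inside `chat_a` ever reaches `chat_b`.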

When I bring this matter of distinct conversations in generative AI to the attention of people at my speaking events, the usual reaction is that the problem of not sharing across conversations would seem to be a trivial problem. The assumption is that a simple switch of some kind would seem to fit the bill. Just make all conversations for a given user a global pot of gold in their respective account and be done with the ridiculous handwringing.

Problem solved; they say with glee.

I regret to inform you that things are not so easy. Life always seems to be a lot harder than it might seem on the surface. Darned if we do, darned if we don’t, that’s a common refrain and will apply here.

Let’s unpack the matter further.

When You End Up With A Global Morass Of Conversations

Go with me on a thought experiment journey.

I am going to return to my human-to-human conversational postulations to get the thought experiment underway. Again, do not overstate this and please do not assign any sentience to AI out of this.

Suppose you work with someone in your office, and you meet daily and numerous times to chat about work, along with occasionally discussing other outside work topics. Let’s imagine that you have ten distinct conversations on an average day with this person. Each conversation is rich with all manner of topics being covered. You see this person about five days a week, for around fifty weeks of the year (I’m rounding for the sake of simplicity).

This means that you have had approximately ten conversations per day, times the five days of the workweek, which is fifty conversations per week, and then we multiply by the fifty work weeks of the year to arrive at around 2,500 conversations per year. You’ve worked at this office for two years and thus have amassed around 5,000 conversations with that one person.

So far, I trust that you can discern there is nothing unusual or outlandish about this.

I am going to ask a loaded question, so get ready.

Would you want all of those roughly 5,000 conversations to be completely memorized and at hand at all times, in their entirety, and fully infused into the 5,001st conversation, such that they bloat and permeate the new conversation and get bandied about even when much of what is in those prior snippets has little or no pertinence to the new conversation?

I told you that the question was loaded. Here’s why. If somehow the snippets of prior conversations were kept at bay and only entered the 5,001st conversation when needed, you would probably be quite happy with this arrangement. On the other hand, if all those snippets are flooding their way into the new conversation, you would have a mess on your hands. You would be weighed down by prior stuff that confounds your latest conversation.

The dilemma is this. When commingling conversations, you want to have the relevant stuff come to the fore when appropriate, but you want to keep the irrelevant stuff in the background such that it doesn’t get in the way of the topics at hand. Too much information can be bad if the information is floating and disturbing what otherwise is hoped to be a cleanly proceeding conversation.

I will give you an example that is an often-used joke about couples that know each other far too well.

You are talking with a couple that has been together for twenty years. The three of you are discussing what to order at a restaurant. The focus is on placing your order and getting your meals cooked and presented to you for dining. While discussing the menu, you bring up that oysters would be really good to eat at this time.

The couple looks at each other furtively. Something is amiss. Suddenly, one of them says to the other that by gosh they had better not start talking about oysters. Don’t do it! The other one is heated up and ready to explode. Finally, after a moment of immense tenseness, the two devolve into an acrimonious discussion about oysters.

You might as well forget about ordering the dinner. Those two are going to argue bitterly about oysters until the cows come home.

What just happened in that conversation?

Some prior conversations about oysters that were known to the couple have reared their ugly heads. There isn’t a bona fide need in this instance to go down the rocky road of the oysters. They could have ignored the comment and remained aimed at the menu. Lamentably, they allowed a prior conversation or set of conversations to completely derail their latest conversation.

Let’s shift back to considering generative AI.

If we merely had a simple switch that would allow all your generative AI conversations to fully intermingle, which admittedly could be relatively easily done, the odds are that you would have a devil of a time with each new conversation. The chances are that the new conversation is going to get overcome and flooded with stuff that is not relevant to whatever you intended the new conversation to be.

For example, suppose I have a conversation with generative AI about Abraham Lincoln. I want to know about his life. After finding out what I needed to know, I ended that conversation. I start a new conversation. My focus for the new conversation is that I want to find out which U.S. presidents are still alive today.

Imagine my surprise if the first thing that the generative AI indicated was that former President Abraham Lincoln is not alive today. I probably knew that already. I wasn’t even considering that he was alive today. In the case of the generative AI, since I had a prior conversation about Lincoln, the mathematical and computational linkage could be that since I was now asking about U.S. presidents, there was a solid chance that perhaps Lincoln ought to be mentioned.

Do you see how a snippet of a prior conversation can leak into a new conversation?

It is easy to have happen, overwhelmingly so.

And, most importantly, it is very hard to prevent.

When you go the global route on all your conversations within your account, the number of snippets is bound to be enormous and would be at the beck and call of your newest conversation. How is the generative AI to computationally determine which prior snippets apply now? Which ones don’t? Should a snippet be used behind the scenes and not even alert you that it is being used? And so on.
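
To illustrate the computational determination being asked about, here is a toy sketch of snippet selection. Real systems would likely use vector embeddings and learned relevance models; simple word overlap stands in for that here, and all names and the threshold value are illustrative assumptions.

```python
import re

def tokens(text: str) -> set[str]:
    """Crude tokenizer: lowercase words, keeping apostrophes."""
    return set(re.findall(r"[a-z']+", text.lower()))

def relevance(snippet: str, prompt: str) -> float:
    """Jaccard overlap between snippet and prompt vocabularies."""
    a, b = tokens(snippet), tokens(prompt)
    return len(a & b) / len(a | b) if a | b else 0.0

def select_snippets(snippets: list[str], prompt: str,
                    threshold: float = 0.15) -> list[str]:
    scored = sorted(((relevance(s, prompt), s) for s in snippets), reverse=True)
    return [s for score, s in scored if score >= threshold]

stored = [
    "User asked about Abraham Lincoln's life and presidency.",
    "User prefers short answers about cooking recipes.",
]

# A prompt that genuinely relates to the first snippet retrieves it.
select_snippets(stored, "Was Lincoln's presidency controversial?")

# The presidents-alive-today question retrieves nothing with this naive
# matcher; a richer semantic model might well retrieve the Lincoln snippet,
# which is precisely the relevance dilemma described above.
select_snippets(stored, "Which U.S. presidents are alive today?")
```

Where the threshold sits determines the tradeoff: set it low and the new conversation gets flooded, set it high and useful carryover is missed.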

Humans generally do a reasonably good job of keeping at bay the prior snippets they have in their minds when performing human-to-human conversations. In your mind, you somehow have placed those snippets here or there. You seem to know when to invoke them. You seem to know when to not invoke them.

Sometimes you misjudge. The couple I mentioned earlier has an oyster trigger word that could set the two of them aflame at a moment’s notice. Envision the two of them having a wonderful time at a shopping mall when one of them inadvertently sees something that leads them to utter the word oyster. Yikes, watch out for the fireworks.

A fascinating and unsolved problem for generative AI is how to mathematically and computationally contain the zillions of snippets of conversation that a person might have had with the generative AI, and use them when appropriate, plus avoid them when irrelevant.

That’s why we tend to have generative AI right now that is devised to keep the conversations in their distinct and separate rooms. It is far better to have the user deal with a lack of interlacing than it is to have them contend with a beehive of interlacing. They would go bonkers dealing with the vast amount of irrelevant and nonsensical banter that would arise.

AI makers are astute enough to know that the tradeoff is well worth keeping things distinct for now and waiting until they can get their arms technologically around how to suitably enable conversational interlacing.

Guess what?

Are you sitting down?

You have now been seamlessly presented with an AI insider “dirty secret” (please note that this is not to cast aspersions on the topic; it is just a catchphrase conveying that this is a daunting and stubborn topic conventionally hidden in plain sight and not often brought up explicitly to outsiders, being considered instead an inside-the-bubble kind of concern). Welcome to the club. It is a bit of a private club.

You have received an honorary badge and have now joined the club.

I’d like to next share some weighty background research and once I’ve done so, we can take a close look at the latest attempt at interlacing of conversational dialogue in generative AI which has prominently been announced by OpenAI for ChatGPT. This foundation that I’ve provided to you herein will allow you to see things clearly and with your eyes wide open.

Selected Research Associated With Conversational Interlacing

Put your mind back to the 1980s. At that time, a line of inquiry was formulated that sought to undertake what was coined “conversation analysis”. That’s not to suggest that humankind had not been trying to figure out how conversations work before that time. You would certainly be on safe ground to proclaim that we have been seeking to nail down how conversations work from the days when we first could carry on intelligible conversations.

A researcher named Harvey Sacks became known in the 1980s for his devoted attention to unraveling how conversations function. Consider this excerpt from one of his mid-80s research works:

  • “The idea is to take singular sequences of conversation and tear them apart in such a way as to find rules, techniques, procedures, methods, maxims (a collection of terms that more or less relate to each other and that I use somewhat interchangeably) that can be used to generate the orderly features we find in the conversations we examine. The point is, then, to come back to the singular things we observe in a singular sequence, with some rules that handle those singular features, and also, necessarily, handle lots of other events. So, what we are dealing with is the technology of conversation.” (Source: Harvey Sacks, On Doing ‘Being Ordinary’, Structures of Social Action, Chapter 16, Cambridge University Press, 1985.)

The reference point here is to note that it makes useful sense to disassemble a conversation. Like taking apart a car engine, we would be wise to see what happens by taking apart conversations. The notion then is to develop a kind of technology that underlies conversations. What makes them tick? How do they form? Etc.

From our perspective on the interlacing of conversations, we might strive to use these means of dividing up conversations into components so that we can make progress on deciding how to intermingle multiple conversations.

Flash ahead to the modern day. When examining a conversation, a unit of measure that is sometimes used is the TCU (turn constructional unit), the building block from which conversational turns are composed. This is a means of subdividing a conversation.

Here’s an excerpt from “Conversation Analysis” by Jack Sidnell, Oxford Research Encyclopedia, March 2016 that helps explain the TCU construct:

  • “Conversation analysis is an approach to the study of social interaction and talk-in-interaction that, although rooted in the sociological study of everyday life, has exerted significant influence across the humanities and social sciences including linguistics. Drawing on recordings (both audio and video) of naturalistic interaction (unscripted, non-elicited, etc.) conversation analysts attempt to describe the stable practices and underlying normative organizations of interaction by moving back and forth between the close study of singular instances and the analysis of patterns exhibited across collections of cases.”
  • “The ‘turn constructional component’ determines the shape and extent of possible turns by specifying a sharply delimited set of units from which turns can be composed. Specifically, in English, turn constructional units (TCUs) can be lexical items, phrases, clauses, and sentences.”
  • “This feature of projectability allows a recipient to anticipate possible completion of the current TCU and to target this ‘point of possible completion’ as a place to begin his or her own talk.”
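
As a crude illustration of the sentential end of that TCU spectrum, a turn can be mechanically split at terminal punctuation. Genuine conversation analysis identifies lexical, phrasal, and clausal units as well, which this sketch does not attempt; the function name is mine.

```python
import re

def sentential_tcus(turn: str) -> list[str]:
    """Approximate sentential TCUs by splitting a turn at whitespace
    that follows terminal punctuation (a very rough stand-in for real
    turn-constructional segmentation)."""
    return [t.strip() for t in re.split(r"(?<=[.?!])\s+", turn) if t.strip()]

sentential_tcus("Do you like sports? No. I prefer cooking.")
# → ['Do you like sports?', 'No.', 'I prefer cooking.']
```

Each of those fragments is a point of possible completion where, per the excerpt above, a recipient could begin their own talk.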

AI researchers are keenly interested in leveraging the longstanding human-to-human conversational analysis to see what can be applied when using AI. The aim is to devise AI that on a human-to-AI conversational basis can mimic the likes of a human-to-human conversation.

AI researchers are typically trying to come up with a data structure that can be beneficially used to represent conversations, and likewise devise a type of specialized lingo or language to describe the nature of conversations. By doing so, the ability to develop AI that can parse and process conversations is going to be advanced. This approach entails being able to slice and dice a conversation at varying levels of granularity.

For example, in a research study entitled “ConvoKit: A Toolkit for the Analysis of Conversations” by Jonathan Chang, Caleb Chiam, Liye Fu, Andrew Wang, Justine Zhang, Cristian Danescu-Niculescu-Mizil, arXiv, May 2020, the researchers made these salient points (excerpts):

  • “Firstly, conversations are more than mere ‘bags of utterances’, so we must capture what connects utterances into meaningful interactions. This translates into native support of reply and tree structure as well as other dependencies across utterances.”
  • “To address these issues, a unified framework for analyzing conversations must provide both a standardized format for representing any conversational data, and a general language for describing manipulations of said data.”
  • “The language of manipulations must be expressive enough to describe actions at different levels of the conversation: individual utterances, entire conversations, speakers in and across conversations, and arbitrary combinations of the above.”
  • “This paper describes the design and functionality of ConvoKit, an open-source toolkit for analyzing conversations and the social interactions embedded within.”

One lingering question is whether conversations of a human-to-human caliber are necessarily of a different norm than human-to-AI conversations. In other words, when you interact with generative AI, are you going to adjust how you carry on a conversation?

Some would argue that this is a chicken-or-the-egg issue. If the generative AI is not on par with human-to-human conversations, it is almost guaranteed that people will alter how they carry on their conversations with the AI. Look at Siri and Alexa. They are based on the older stilted ways of natural language processing (NLP), which has frustrated people to no end. Most people end up crimping their fluency to get those AI apps to do what is requested.

An interesting survey article on the topic of human-to-AI conversational styles is entitled “The Human Side of Human-Chatbot Interaction: A Systematic Literature Review of Ten Years of Research on Text-Based Chatbots” by Amon Rapp, Lorenzo Curti, and Arianna Boldi, International Journal of Human-Computer Studies, July 2021, and proffers these key points (excerpts):

  • “In this article, we present a systematic literature review that maps out the current state of research on how humans interact with text-based chatbots.”
  • “Notwithstanding the proliferation of prior literature reviews in the field of conversational agents exploiting embodied or spoken forms of verbal interaction, we still know little about research addressed to chatbots exclusively communicating through natural written language.”
  • “More precisely, we do not have a comprehensive overview of what has been investigated so far from the human side of human-chatbot interaction, i.e., what people expect, feel, and, more in general, experience when they ‘encounter’ a chatbot.”

I’ll note that the above research article predates the advent of today’s modern generative AI and there is an ongoing reexamination of the human-to-AI conversational aspects. That being said, let’s not neglect what has taken place already. There is much too much overlooking of well-traversed ground in the AI field, and researchers sometimes resort to reinventing the wheel rather than leveraging what has usefully come before.

An additional topic that I thought you might find intriguing is whether we should be gauging human-to-AI conversations on a scale that has mainly been used for human-to-human interactions. There have been concerted efforts to consider that humans have an IQ (intelligence quotient) and an EQ (emotional quotient), and some suggest there is even a CQ or C-IQ (a conversational quotient).

One example of seeking to categorize conversations and rate them on some kind of CQ or C-IQ is exemplified by an article entitled “The Neuroscience of Conversations” by Judith Glaser, Psychology Today, May 2019, which noted these types of typified conversations:

  • Transactional Conversations. “Transactional conversations include interaction dynamics such as asking and telling. These types of conversations confirm what we know and give people a platform for giving and receiving information.”
  • Positional Conversations. “Positional conversations include interaction dynamics such as advocating and inquiring. These conversations allow us to defend what we know; they give people a platform for having and expressing a strong opinion about something. In these conversations, we are less open to influence and more interested in selling our ideas.”
  • Transformational Conversations. “Transformational conversations, also called co-creating conversations, include interaction dynamics such as sharing and discovering. This means asking questions for which you have no answers, listening to the collective, discovering, and sharing insights and wisdom. This generative way of communication leads to more innovative insights and deeper listening to connect to others’ perspectives.”

The above quick tour of AI and conversational subject matter research will hopefully whet your appetite about looking further into the great deal of work that has previously been done on the nature of conversations, both human-to-human ones and human-to-AI ones.

I will next get you prepared to see what the latest announcement about ChatGPT has to say on this substantial matter.

Being Able To Dissect And Reuse Conversational Snippets

I am first going to define some terminology here to make life easier as we go deeper into the nature of conversations.

My goal in introducing this terminology is to try to avoid using language that seems to anthropomorphize AI. For example, in my view, the words “thoughts”, “understanding”, and “thinking” when used to refer to AI are entirely misleading. Those words are heavily embedded in the base of human experience. The moment you use those words as a crossover to discuss AI, the immediate reflexive reaction is to infer that the AI of today has sentience.

Let’s do our best to avoid that slippery slope.

I will use the word “snippets” to describe portions of conversations. This seems like a relatively neutral word.

Snippets can range in size. A particular snippet might be just a few words, or it might consist of nearly an entire sentence. I am not going to get bogged down here in the details, though they are crucial; I simply don’t have the space herein to get into the underlying formulations, so see my detailed coverage at the link here.

Another consideration is whether a snippet is considered to be of a verbatim variety. A verbatim snippet is when the words of the snippet are directly captured and used without any change as lifted from a conversation. In contrast, a virtual snippet is a type of snippet that emblematically represents what was stated in a conversation, though it isn’t a word-for-word rendition.

Snippets are customarily associated with a particular conversation. In that sense, you could say that the snippet is connected or linked to a specific conversation. For various useful reasons, we might want to also have snippets that are in a sense disconnected from any particular conversation. This is especially the case when making use of virtual snippets.

When a conversation is examined, the idea is to try to surface snippets from the conversation. Doing so can be hard since you could go wild and find tons of snippets in even the most modest of conversations. We are also faced with a challenging intrinsic problem in that natural language is said to be semantically ambiguous, as I have touched upon in my column many times, see the link here. Semantic ambiguity means that any utterance is subject to a wide array of interpretations. There is nearly always wiggle room associated with any utterance, including snippets.

The snippets that are identified or devised will be cataloged and stored to make them readily available.

Let’s consider the overall process of when snippets would be first uncovered and how they would be later utilized. Envision that I become engaged in a conversation with generative AI. Refer to this as a source conversation. Within that source conversation are numerous potential snippets that are computationally culled and then stored in a special storage area that is intended to contain the snippets and that can be readily searched for their reuse.

Next, imagine that I am about to have a new conversation, which we will refer to as the target conversation. Your mission, should you decide to undertake it, would be to figure out which, if any, snippets from the source conversation are to be suitably applied to the target conversation. I want to strenuously emphasize that the aim involves suitably utilizing snippets. We could wantonly apply snippets, we could erroneously apply snippets, and, overall, unsuitably apply snippets.

We don’t want to do that.

This same analysis is to be repeated, namely each new conversation becoming a source conversation that begets more snippets. If you then have had say a dozen conversations, they are all construed ultimately as source conversations. When the next conversation is undertaken, consider it to be the thirteenth conversation (maybe an unlucky number!), and it now momentarily becomes the target conversation (inevitably another source conversation too).

Rinse and repeat.
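To make that source-to-target flow concrete, here is a minimal sketch in Python. Everything here is my own illustrative assumption, not any vendor’s actual implementation: the in-memory store, the choice to treat each user turn as a candidate snippet, and especially the naive word-overlap scoring used to find relevant snippets for a target conversation.

```python
# Illustrative sketch of the snippet lifecycle: cull snippets from a source
# conversation, store them, then retrieve candidates for a target conversation.
# The word-overlap scoring is a stand-in for far more sophisticated matching.

snippet_store = []  # persistent datastore of snippets across all conversations


def cull_snippets(conversation_id, turns):
    """Extract candidate snippets (here, simply each user turn) from a source conversation."""
    for turn in turns:
        snippet_store.append({"source": conversation_id, "text": turn})


def retrieve_snippets(target_prompt, top_k=3):
    """Rank stored snippets by naive word overlap with the target conversation's prompt."""
    prompt_words = set(target_prompt.lower().split())
    scored = []
    for snippet in snippet_store:
        overlap = len(prompt_words & set(snippet["text"].lower().split()))
        if overlap > 0:  # ignore snippets with no apparent connection
            scored.append((overlap, snippet))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [snippet for _, snippet in scored[:top_k]]


# Source conversation: brainstorming a class project
cull_snippets("conv-1", ["My toddler loves jellyfish", "We need a class project idea"])

# Target conversation: birthday planning surfaces the relevant snippet
matches = retrieve_snippets("What gift should I get my toddler for her birthday?")
```

Note how the shared word “toddler” is what bridges the target prompt back to the stored snippet, echoing the point that a mention in the new conversation acts as a clue for finding relevant prior tidbits.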

I’d like to have you contemplate the ways that snippets are going to be applied to a target conversation.

Consider these major actions when applying a conversational snippet to a target conversation:

  • Suitably apply a snippet to the target conversation (yay, our goal!).
  • Carry in a snippet that steers astray the target conversation (bad).
  • Carry in a snippet that is irrelevant but luckily not impactful (neutral).
  • Fail to carry in a snippet that would have been usefully impactful (missed opportunity).
  • Other

Lots of tough computational choices need to be made about how to reuse conversational snippets.

Another issue that we need to confront is that these snippets will presumably last forever since they are being stored online and we would expect that the storage will be persistent. With human-to-human conversations, the odds are that humans will eventually forget something that they uttered during a prior conversation. In some respects, this can be good. Suppose that the aforementioned couple forgot what their oyster triggering was all about. Perhaps they would be better off.

We might want to give users of generative AI the capability to peruse the snippets and decide which ones they want to delete. That might seem straightforward. The devil is again in the details.

Consider these snippet deletion difficulties:

  • User suitably deletes a no longer wanted conversational snippet (good)
  • User deletes a large snippet, and inadvertently a smaller snippet contained within it gets deleted too, even though that smaller snippet would have been fruitful for future conversations (bad)
  • User deletes a small snippet that was crucial to a larger snippet, which now loses its meaning or takes on a meaning it never had (oopsie bad)
  • User deletes a snippet, is happy at the time, but later regrets the deletion, and perhaps the snippet is no longer recoverable (sorry bad)
  • Other
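The nested-deletion hazard in that list can be shown with a tiny sketch. The substring-based delete rule here is purely my own assumption for illustration; a real system would need far more careful semantics.

```python
# Illustrative sketch of the nested-deletion hazard: a naive delete rule that
# removes any snippet containing the targeted text can silently take out more
# than the user intended.

snippets = [
    "my toddler loves jellyfish and her birthday is in June",  # larger snippet
    "her birthday is in June",                                 # smaller snippet contained within
]


def delete_containing(store, text):
    """Naive delete (an assumption): drop every snippet whose text contains the given phrase."""
    return [snippet for snippet in store if text not in snippet]


# The user only meant to forget the birthday detail, but the larger jellyfish
# snippet contains that phrase too, so it gets swept away as well.
remaining = delete_containing(snippets, "her birthday is in June")
```

Under this rule, deleting the small snippet wipes out the larger one that happened to contain it, which is exactly the kind of collateral loss the bulleted difficulties warn about.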

A notable complexity of conversational snippets is that they often are part of a convoluted web of semantic meaning and only make sense when interpreted within a given context. Suppose that I give you the sentence that says the lazy dog leaped over the tall fence. One snippet would be to carve out the part that says there was a lazy dog. Is the dog truly lazy? Well, the dog leaped over a tall fence, which seems to be perhaps contradictory to being lazy. The point is that a snippet is likely to have many meanings and the meaning as used in the original context might be quite significant to later reuse.

We are only touching the tip of the proverbial iceberg on this.

All in all, we have these major functional facets of human-to-AI conversations:

  • Identify sensible conversational snippets in conversations.
  • Figure out the context and web-like entrenchment of how that snippet was utilized.
  • Categorize or analyze the conversational snippets to try and assess later proper application.
  • Store the conversational snippets in a prudent means.
  • Try to ensure that conversational snippets can be readily accessed and will not overly delay or burden target conversations that will potentially be using them.
  • Allow users to peruse the conversational snippets that have been stored.
  • Provide a means for users to delete conversational snippets.
  • Perhaps allow deleted conversational snippets to be brought back into existence.
  • Suitably select and apply conversational snippets to target conversations.
  • Possibly use those reuses to further assess the snippets and their best use in ongoing usage.
  • Give the user options about when, how, why, where, and what is associated with the whole conversational snippet’s confabulation.
  • Other

Do you see how this is quite a jampacked litany of functional capabilities?

I should add that an aspirational goal is to do all of the above in a way so easy that the average user can almost mindlessly leverage it, and will herald with great acclaim that conversational snippets are a boon to their use of generative AI.

That is a tall order.

A mighty, mighty, mighty tall order.

I hope that the landscape sketched here gives you a tangible sense of why this is such a challenge and why it is a whole bunch easier to simply toss in the towel and declare (or not especially mention) that all of your generative AI conversations are distinct and separate.

It is by far a lot easier to wash one’s hands of the massive contrivance. But, fortunately, the indomitable spirit of making advances in AI, along with the specter of fierce competition amongst AI makers, keeps the wheels turning and the lights blazingly on.

A Foray Into Conversational Reuse As Announced For ChatGPT

Now that you’ve been introduced to the ins and outs of interlacing conversations in generative AI, you are well-prepared to take a glimpse at the recent announcement by OpenAI regarding ChatGPT. The discussion will be easy-peasy for you at this juncture.

I’d like to say one thing before we jump into the matter.

You might recall that at the start I had mentioned my distaste for the use of human-cognitive oriented words as seemingly applied to AI that ergo tends to anthropomorphize AI. In the arena of conversational interlacing, there is a similar tendency to do so. In particular, the word “memory” is another one of those instances.

If we set up a conversational snippets database or datastore, I would be fine with using verbiage that says the snippets are being stored in computer memory or on disk space, etc. The issue is that if you say the snippets are being retained in memory, absent being absolutely clear that this is not at all akin to human memory, the inference often is that the AI is somehow sentient. In short, I don’t favor referring to generative AI conversations and their snippets as being held in memory. I would prefer phrasing that says something such as the snippets are being kept in a datastore, or at least in data memory.

Just wanted to get that off my chest because you are about to witness the jockeying around of the word memory and I trust that you will realize this has nothing to do with sentience and is entirely about computer memory, thanks.

Okay, on with the show.

OpenAI recently posted on their blog an announcement entitled “Memory and New Controls for ChatGPT” (OpenAI Blog, February 13, 2024). I am going to quote excerpts from the blog and provide an explanation that ties back to my discussion herein about conversational interlacing in generative AI.

Let’s start with these excerpts from the OpenAI blog posting:

  • “We’re testing memory with ChatGPT. Remembering things you discuss across all chats saves you from having to repeat information and makes future conversations more helpful.”
  • “You’re in control of ChatGPT’s memory. You can explicitly tell it to remember something, ask it what it remembers, and tell it to forget conversationally or through settings. You can also turn it off entirely.”
  • “We are rolling out to a small portion of ChatGPT free and Plus users this week to learn how useful it is. We will share plans for broader rollout soon.”

I have a bit of heartburn about the phrasing since the tendency would be to infer that ChatGPT has “memory” in a manner akin to how humans have memory, including that you can tell ChatGPT, as though it were sentient, to remember things. The announcement could readily have been written to dispel that implication, such as by noting that you can control ChatGPT’s data memory and that you can instruct the AI app to retain or store the data items or snippets that are to be placed into the data memory or datastore. Note that this provides the same foundational precepts and does so without, either by design or via happenstance, landing in an anthropomorphizing zone.

I won’t say anything more on that facet at this time and will next go with the flow.

Here are additional excerpts from the OpenAI blog that indicate examples of how the newly rolling out function works:

  • “As you chat with ChatGPT, you can ask it to remember something specific or let it pick up details itself. ChatGPT’s memory will get better the more you use it and you’ll start to notice the improvements over time.”
  • “For example:”
  • “You’ve explained that you prefer meeting notes to have headlines, bullets, and action items summarized at the bottom. ChatGPT remembers this and recaps meetings this way.”
  • “You’ve told ChatGPT you own a neighborhood coffee shop. When brainstorming messaging for a social post celebrating a new location, ChatGPT knows where to start.”
  • “You mention that you have a toddler and that she loves jellyfish. When you ask ChatGPT to help create her birthday card, it suggests a jellyfish wearing a party hat.”
  • “As a kindergarten teacher with 25 students, you prefer 50-minute lessons with follow-up activities. ChatGPT remembers this when helping you create lesson plans.”

Let’s briefly consider one of those examples.

Can you guess which one caught my fancy?

Read on and see.

Jellyfish And Conversational Interlacing Walk Into A Bar

I’ll pick the example about having mentioned in a ChatGPT conversation that you have a toddler, and that the toddler loves jellyfish. Admittedly, that is a pretty cute example. We can play with it to illustrate the conversational interlacing precepts.

Imagine that the remark about your toddler was made in a conversation with ChatGPT that had previously been undertaken while you were trying to come up with a project idea for a class that your toddler is taking. You, as the parent of your beloved toddler, were trying to come up with project ideas that you could then share with the toddler.

A few weeks go by.

You log into ChatGPT to do some planning for a birthday party since your toddler’s birthday is coming up. Assume that you start a new conversation. In the prevailing capacity of ChatGPT, the remark about your toddler and jellyfish is confined within that other prior conversation. The new conversation about birthday planning has no immediate access to that prior conversation. The two conversations are in separate soundproof distinctive rooms, as it were.

While in the new conversation, there is no reasonable chance that ChatGPT would miraculously suddenly suggest that you create a birthday card that has a jellyfish wearing a party hat. I say this because the jellyfish remark is locked away in the other conversation. One supposes that there is an extremely remote random chance of ChatGPT landing on a jellyfish reference (one out of a gazillion, I would say), but this would not be due to the prior mention of jellyfish.

If ChatGPT were using the new functionality, we can suppose that the remark about the jellyfish would be a potential snippet that has been placed into the datastore. Upon starting a new conversation, all of the numerous snippets are sitting at the ready. Some of them might be pertinent to the new conversation, but many are probably not pertinent.

In the new conversation, you mention that you want to plan a birthday party for your toddler. Right away, the mention of your toddler is a means to start bridging back to the snippets and is a kind of helpful clue for finding something that might be relevant. Lo and behold, you ask what kind of gift to get for the toddler, and the link to the toddler’s interest in jellyfish is computationally assessed as highly relevant. ChatGPT then generates an indication to make a birthday card for the toddler that incorporates a jellyfish-related element.

Voila, this is quite exciting and illustrative of how valuable conversational interlacing can be.

I would like to point out that this same example could have gone somewhat awry. I am not trying to be a party pooper and only want to provide balance about what can take place with conversational interlacing. I assure you that I relish parties and am a true partygoer.

Envision that you had started the new conversation about birthday planning and asked ChatGPT to provide gift ideas for your toddler. Assuming that the conversational sharing mode is active, once again the jellyfish reference might automatically come to the fore.

Suppose that ChatGPT generated an indication or recommendation that you should consider buying a jellyfish as a present for your toddler. Yes, an actual living breathing jellyfish. How does that seem to you? On the one hand, you have to give props to ChatGPT for the idea since it does tie to what your child is interested in. The downside of course is that getting your toddler a live jellyfish is a bit edgy unless you already expressed that you have a home aquarium or fish tank. That conversational interlacing is perhaps a tad off-course but not wacky or totally out of the blue.

I am showcasing that interlaced conversational snippets will not always land squarely on the mark.

Let’s push things further. Suppose that in the new conversation, you have indicated that the birthday party is for a friend of your toddler. It is feasible that ChatGPT might get the jellyfish reference mathematically or computationally intertwined where it doesn’t belong. For example, suppose that ChatGPT suggests making a birthday card for the close friend of your toddler and that a jellyfish motif might be the way to go. You could argue that since your toddler likes jellyfish, maybe it makes human common sense to have your child produce such a birthday card, though we don’t know that the friend likes jellyfish (maybe the friend is frightened of jellyfish).

We can keep going on these variations. The gist is that interlacing is not going to be perfect, and there is ample chance it will fall short of the ideal.

More Details Of Interest About Conversational Interlacing As Adopted

Returning to the OpenAI blog, they made these additional announcements about the new functionality (excerpts):

  • “You can turn off memory at any time (Settings > Personalization > Memory). While memory is off, you won’t create or use memories.”
  • “If you want ChatGPT to forget something, just tell it. You can also view and delete specific memories or clear all memories in settings (Settings > Personalization > Manage Memory). ChatGPT’s memories evolve with your interactions and aren’t linked to specific conversations. Deleting a chat doesn’t erase its memories; you must delete the memory itself. You can find more details in our Help Center.”
  • “If you’d like to have a conversation without using memory, use a temporary chat. Temporary chats won’t appear in history, won’t use memory, and won’t be used to train our models. Learn more about temporary chats in our Help Center.”

Those above points indicate that you can turn off the conversational interlacing.

Nice touch.

It is helpful that you can switch it off since you might find that conversational interlacing is disruptive or at least more stressful than it is beneficial. I would guess that much of the time people will want to have the capability turned on. From time to time, you might have a solid reason to turn it off. Of course, there will be people who always insist on having it on, or don’t realize that it is on and just go with it, and there will be people who turn it off and forget they did so or turn it off and want it to never arise again.

Users will be users.

There is also a mention in the points above about being able to delete snippets. I will be keenly interested in seeing how that goes. Will users be satisfied with the delete functionality? For example, the approach taken says that the snippets aren’t associated with conversations per se, thus, if you delete a conversation the snippets will seemingly remain intact. Will people comprehend that twist, or will they be gobsmacked that they deleted a particular conversation and darned if the remnants still exist, as though a ghost has arisen?
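The decoupling described above, where memories are not linked to specific conversations, can be sketched in a few lines. The data model here is my own assumption made to mirror the behavior OpenAI describes, not their actual implementation:

```python
# Illustrative sketch: memories stored independently of conversations, so
# deleting a chat does not delete the memories derived from it. The user
# must delete the memory itself, matching the announced behavior.

conversations = {"chat-42": ["I own a coffee shop", "Help me write a social post"]}
memories = {"mem-1": {"text": "User owns a coffee shop", "derived_from": "chat-42"}}


def delete_conversation(chat_id):
    """Remove only the chat transcript; derived memories are untouched."""
    conversations.pop(chat_id, None)


def delete_memory(mem_id):
    """Memories live in their own store and must be deleted separately."""
    memories.pop(mem_id, None)


delete_conversation("chat-42")
# The transcript is gone, yet the memory lingers on, ghost-like.
```

This is precisely the twist that may gobsmack users: the conversation vanishes while its remnants persist in the separate memory store.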

As I mentioned earlier, there are a lot of open-ended questions and lots of research still needed on the wide array of subtopics surrounding the matter of AI-based conversational interlacing, including the user interface (UI) and user experience (UX) facets.

Continuing the look at OpenAI’s approach, the blog also notes these points (excerpts):

  • “We may use content that you provide to ChatGPT, including memories, to improve our models for everyone. If you’d like, you can turn this off through your Data Controls. As always, we won’t train on content from ChatGPT Team and Enterprise customers. Learn more about how we use content to train our models and your choices in our Help Center.”
  • “Memory brings additional privacy and safety considerations, such as what type of information should be remembered and how it’s used. We’re taking steps to assess and mitigate biases, and steer ChatGPT away from proactively remembering sensitive information, like your health details – unless you explicitly ask it to.”

I have often covered in my column the sobering issues associated with data privacy and confidentiality that occur when you use generative AI, see for example my in-depth analysis at the link here and the link here. You need to make sure that you read the licensing agreement of any generative AI that you decide to use. Do not just blindly get an account and start entering your lifelong secrets. Bad idea.

It is helpful that the blog makes explicit the data considerations associated with the new functionality. The point is well made. Of course, we are faced as usual with the question of whether day-to-day users who opt to use the functionality will be cognizant of contending with the data-related privacy aspects.

I would like to make a side note while on this consideration about privacy and confidentiality when using generative AI. Allow me a moment to do so.

There was media coverage of the OpenAI conversational functionality announcement that went on and on about how upsetting this was due to the potential privacy and confidentiality concerns. I was left scratching my head because those concerns already exist, and this new functionality does not seem to especially enlarge or exponentially increase the risks. The same risks are essentially still at play. Either those media pundits didn’t understand that the risks were already around, or they were suggesting, in an unstated way, that the new functionality materially compounds the risks. I will keep my eye out for any sensibly reasoned argument as to why that might be the case (to be clear, I am referring to the relative risks of this added functionality in comparison to the prior risks, not the whole bagful of risks, just the asserted incremental risks).

There are a number of other points made in the OpenAI blog and I am not going to cover all of them in this discussion. I wanted to provide a representative sampling so that you can see how my discussion about conversational interlacing is being implemented in real life.

In particular, since I have extensively discussed the GPT Store that OpenAI has been promulgating, such as my coverage at the link here and the link here, I would like to mention briefly how conversational interlacing comes into play in that portion of the ChatGPT ecosystem.

Here are some salient excerpts from the OpenAI blog:

  • “GPTs will have their own distinct memory. Builders will have the option to enable memory for their GPTs.”
  • “Like your chats, memories are not shared with builders. To interact with a memory-enabled GPT, you will also need to have memory on.”
  • “Each GPT has its own memory, so you might need to repeat details you’ve previously shared with ChatGPT.”
  • “Memory for GPTs will be available when we roll it out more broadly.”

Please keep in mind those caveats if you are either intending to use GPTs that are in the GPT Store and/or if you are making GPTs that you aim to post in the GPT Store.

Another Expression About The ChatGPT Conversational Interlacing

On the same day as the OpenAI blog posting, there was a tweet on X that provided a summary of the new functionality and I thought would be handy for you to read here for further insights on the approach being taken. I will quote the tweet as it was posted so please be aware that it is written in the customary informal style of tweets (Source: Tweet on X posted by OpenAI product lead Joanne Jang, February 13, 2024):

“we just launched a small experiment for memory on ChatGPT.”

“how it works”

“- it’s quite similar to custom instructions, except chatgpt is the one driving it (like auto vs. stick shift!)”

“- basically, we taught chatgpt to keep a notepad for itself. every time you share information that might be useful for future reference, it’ll (hopefully) add it to the notepad.”

“you’ll be in control!”

“there are five different ways to manage memory – you can:”

“1. just tell chatgpt to remember or forget something in chat, conversationally”

“2. use ‘temporary chat’ for one-off convos involving topics you don’t want chatgpt to pick up on”

“3. delete individual memory snippets”

“4. clear all memories”

“5. disable memory altogether”

“it’s a smol experiment”

“i’m sorry if you’re not in the experiment, and want to be!”

“we’re taking a bit more time than usual with this feature – given how memory, almost by definition, will take a bit longer (+ more interactions) to build. we can’t wait to learn from this experiment and iterate on the feedback, so that personalization features like this can be truly useful to everyone.”

“we’ll keep you posted – and if you’re in the experiment, please give us all the feedback!”


We are living in exciting times.

The emergence of generative AI conversational interlacing is indeed exciting. I hope you’ve now seen why this is a hard problem to tackle and solve. If the topic interests you, the good news is that this is fresh ground, and you can make a mark by getting into the game.

I have given reasons why trying to figure out conversations and their intertwining is computationally difficult, and I think you know from your own experience as a living human that human-to-human conversational interlacing is daunting as well.

For now, let’s let William Shakespeare have the last word on this weighty matter: “Conversation should be pleasant without scurrility, witty without affectation, free without indecency, learned without conceitedness, novel without falsehood.”

That poetically illuminates the tip of the iceberg when it comes to devising generative AI that can aptly deal with and interrelate conversations. Methinks that the wisdom of Shakespeare applies to generative AI.
