
Finding Success with GenAI: A Guide for the Non-Techies

by Kat Kollett July 7, 2025

It seems like everyone is talking about generative AI (GenAI) and what it means for us. Will it replace jobs? Threaten industries? Is it the beginning of something new or the end of something familiar? Regardless of these concerns, many businesses feel pressure to adopt it — quickly.

At One North, we’ve had many conversations about GenAI with clients, and we’ve learned that a lot of people misunderstand its fundamental workings. This confusion can lead to unrealistic expectations. We’ve found that when people have a more intuitive understanding of what it can do, they feel more confident using it and less concerned about its potential impact on our lives.

This article is not for those of you who are well-versed in GenAI and large language models (LLMs). It’s also not intended to give those of you who are not well-versed a technical understanding of how they work. Instead, it offers an approachable way to understand how GenAI works — through the lens of pattern matching and statistical probabilities. This lens helps clarify both its powers and limitations.

 

GenAI Is Not as New as It May Seem

One quick way to understand pattern matching and statistical probabilities with respect to language is to think about the predictive text available when we compose messages on our phones. With every word we type, our device recommends a handful of likely next words, and often it’s spot on, especially with frequently used phrases like greetings and sign-offs. This works because the tools we’re using have access to statistics about word frequencies, the likelihood that words will be used together, and in what order. Some of them even learn from our habits and make recommendations based on them.

This is a form of generative AI, although it’s simpler than what we see in tools like ChatGPT, Copilot, Claude, and Gemini. GenAI based on LLMs does the same thing but at a massive scale. It processes far more — and more granular — data. This means it’s both more and less precise than predictive text. We’ll come back to this, but, in essence, it means that GenAI produces output more like it’s pulling from memories and less like it’s following a linear thought process.
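To make the word-statistics idea concrete, here is a minimal sketch of how a predictive-text suggester could work. The tiny message corpus and the `suggest` helper are entirely hypothetical, purely for illustration; real keyboards use far larger models and more context.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for a user's message history (hypothetical data).
corpus = (
    "thanks so much for your help "
    "thanks so much for the update "
    "thanks for your time "
    "hope you are doing well "
    "hope you are having a great week"
).split()

# Count how often each word follows each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def suggest(word, k=3):
    """Return up to k likely next words, most frequent first."""
    return [w for w, _ in next_word_counts[word].most_common(k)]

print(suggest("thanks"))  # "so" appears twice after "thanks", "for" once
print(suggest("hope"))    # "you" is the only word seen after "hope"
```

Type "thanks" and the model offers "so" first because that pairing occurred most often, which is exactly the kind of frequency statistic the paragraph above describes.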

Along these lines, you can build a general understanding of how GenAI works by thinking about the automatic connections that human brains make.

 

Pattern Matching Like Human Recall

If you grew up speaking English in the US, you can probably, automatically, finish the following:

  • The early bird…
  • If you can’t stand the heat…
  • All that glitters…
  • Nothing ventured…
  • Pride goes before…

But it’s unlikely that you’ll be able to finish these:

  • Little by little, the bird…
  • Everyone sees noon…
  • The first pancake…
  • Eyes are afraid, but…

The first two of these less familiar sayings are French; the third and fourth are Russian.

(In case you’re curious: little by little, the bird builds its nest; everyone sees noon at their door; the first pancake is always a lump; and eyes are afraid, but hands do).

This obviously isn’t exactly how GenAI works, but in a foundational sense, it is. Other analogs are your ability to automatically sing along to songs you haven’t heard in decades and, more abstractly, to ride a bike after a years-long break.

The reason you can finish the English proverbs is that you’ve heard them all your life. They’ve been repeated so many times that they’re just available in your memory, and you recall them without even thinking about it because their patterns are so familiar to you.

 

Like Human Abilities, but Also Not…

Generative AI is like human proverb recall in three ways:

  1. It is able to “produce” content based on patterns it knows from exposure.
  2. The more frequently it encounters a pattern, the more likely it is to incorporate it into a response.
  3. If it can’t find a pattern that’s a good match, it can invent one that’s at least superficially connected to the prompt to which it’s responding.

It is not like human proverb recall in that it has no idea what its responses mean nor whether they are appropriate for a given prompt.

GenAI is not typically used for recall purposes (for what it’s worth, yes, it can complete proverbs accurately), but rather to answer questions or generate content. While we can all likely point to plagiaristic counterexamples, GenAI’s responses are not generally pulled in their entirety from the data it has access to (the engines include hidden prompts to discourage this). Responses are instead composed of smaller “chunks” of information that repeat across the content it has access to and that relate back to the prompt it was given.

These “chunks” of information are called tokens. LLMs use tokens to break information down into segments of the “right” size to be recombined. For text, they’re usually words or parts of words (prefixes, bases, suffixes, etc.). For images, they’re typically small patches of pixels. Essentially, GenAI crafts “remixes” of tokens that are good matches for the prompts it receives and good extenders of responses it’s already begun.
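A rough sketch of how a word gets broken into subword tokens: the tiny vocabulary and greedy longest-match strategy below are hypothetical simplifications (real tokenizers learn their vocabularies from data), but they show the prefix/base/suffix splitting described above.

```python
# A toy subword vocabulary (hypothetical; real vocabularies hold tens of
# thousands of learned pieces).
vocab = ["un", "break", "able", "re", "think", "ing", "token", "ize", "rs"]

def tokenize(word):
    """Greedily match the longest vocabulary piece from the left."""
    tokens = []
    while word:
        for piece in sorted(vocab, key=len, reverse=True):
            if word.startswith(piece):
                tokens.append(piece)
                word = word[len(piece):]
                break
        else:
            # Unknown character: emit it as its own token.
            tokens.append(word[0])
            word = word[1:]
    return tokens

print(tokenize("unbreakable"))  # ['un', 'break', 'able']
print(tokenize("rethinking"))   # ['re', 'think', 'ing']
```

Because the model works with pieces like these rather than whole sentences, it can recombine them into text it has never literally seen.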

So, when generating a text response, GenAI will piece together words or parts of words based on the statistical probability that each should come “next” in the sequence. For image generation, it does the same thing with pixel values, determining the “right” pixel value based on the values of previously generated adjacent pixels. It is only because it has enough data at its disposal that a GenAI engine can do this and produce something that seems unique and makes sense.
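The “what comes next” loop can be sketched in miniature. This toy model counts which word follows which in a tiny training text, then repeatedly appends the most frequent follower; the training text and greedy choice are illustrative assumptions (real LLMs condition on long contexts and sample probabilistically rather than always taking the top word).

```python
from collections import Counter, defaultdict

# Hypothetical "training data"; a real LLM sees trillions of tokens.
text = "all that glitters is not gold all that glitters is not always gold".split()

# Bigram statistics: for each word, count what followed it.
model = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    model[a][b] += 1

def generate(start, length=5):
    """Repeatedly append the statistically most likely next word."""
    out = [start]
    for _ in range(length):
        counts = model[out[-1]]
        if not counts:
            break  # nothing ever followed this word in the data
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("all"))  # reproduces the familiar pattern from the data
```

Starting from "all", the model walks the most familiar path through its statistics, which is why output tends to echo patterns it has seen most often.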

A few implications of this:

  1. GenAI cannot create anything truly new. Everything it creates will be derivative of something it has encountered in the data it has access to. Note that this means that even if content it produces isn’t literally plagiarized, it borrows from the work of others. This is particularly troubling with respect to creative works, as the people whose work has been “sampled” are not compensated.
  2. GenAI cannot be insightful. It cannot sense connections among ideas that are not literally in the content it takes in, meaning it can’t draw parallels to ideas from outside the specific context it’s exploring or make inferences that would strengthen its work.
  3. GenAI cannot reason. If you ask it to perform logic, including simple math calculations, its only path is to find content that includes the answer or logical conclusion — it can only parrot reason. This means it cannot solve novel problems or make specific, justified recommendations.

 

More and Less Precise than Predictive Text

LLM-based GenAI is more precise than predictive message text because it works at the token level, and less precise because it lacks strict rules to follow when crafting responses. Working in a more precise way enables the generation of all sorts of content types, from text and images to video and voice. Working in a less precise way, combined with a determination to always provide a result and an inability to recognize when it lacks sufficient or accurate data to produce a “good” one, leads to GenAI “hallucinations.” Hallucinations basically amount to GenAI fabrications based on whatever seemingly related information it can find.

Just as GenAI’s strengths resemble human recall, its hallucinations have a human analog. As mentioned above, when GenAI cannot find a good pattern to follow, it will settle for the next best thing. Sometimes that’s not a very good match (real examples of terrible GenAI recommendations include adding glue to pasta sauce to thicken it and sprinkling antifreeze on the lawn to dispose of it). Humans do the same thing, like when we make up lyrics because we can’t quite hear what’s being sung, or when a toddler invents a new fact out of nowhere (out of nowhere to us; to the toddler, it’s likely based on their interpretation of something they’ve experienced). GenAI is more like the toddler, though: first in that it projects extraordinary confidence in all of its responses, and also in that it can’t understand time (or anything mathematical), leading to claims like sharks being older than the moon.

 

Finding Success with GenAI

LLMs are capable of processing and filtering information at an incredible pace. In this, they emulate part of the capacity of the neural networks that are human brains. But human brains do so much more than just process information. Human brains are wonderful at making unexpected connections and deducing things logically — the sparks of brilliance and problem-solving skills that make us unique. GenAI does not have this ability.

Keep this in mind as you explore and take advantage of all that GenAI does have to offer, and stay vigilant to avoid its pitfalls. Here’s how:

  1. Pay attention to its sources of data and training. Context matters, and the sources of data an LLM is trained on and has access to, as well as the prompts it includes behind the scenes to shape responses, will determine the quality of the results you’ll see and the likelihood of hallucinations.
  2. Provide as much direction as possible. Include specific instructions with your requests to guide level of detail, tone, follow-up instructions, and more. You can also choose to run results through multiple models to take advantage of their different strengths. Incidentally, if you don’t know how to help a model you’re working with give you the results you’re hoping for, you can ask the model.
  3. Never take it at face value. Remember that what GenAI cannot source, it will invent using the best information it can find, but without the sense to understand whether that next-best information is useful, irrelevant, or possibly even harmful. If you’re using it to generate factual output or recommendations, it’s important to validate what it generates.

Most importantly, never ever forget that GenAI cannot think or create anything that’s truly new. It can be very helpful, but humans need to provide the context to make it truly so. Use it to accelerate what you do, not to replace your own thinking and creativity.

 

 

Watch my session from the 2025 One North Assembly for a deeper dive into the powers and pitfalls of AI. View the full recording here.

If you’re looking to apply GenAI in a way that’s practical and effective, talk to a One North expert. We can help you identify where it adds value, how to use it responsibly, and what to watch out for along the way.

Photo Credit: Christopher Burns | Unsplash

Kat Kollett

Senior Director of Strategy

Kat Kollett is the Senior Director of Strategy, leading One North’s multidisciplinary team of Brand, Content, CX, UX, Data and Technology strategists. She brings a user-focused approach to innovations in brand, digital, analog, environmental, and interpersonal experiences, and helps clients apply those innovations to meet their strategic objectives.