The Second Phase of the Age of AI

Written by: Aaron Salzman

Description: Though it has failed to deliver on many of its early promises, AI is here to stay. What does that mean for marketing, advertising and other creative industries?

On March 24, 2026, OpenAI announced the unexpected shutdown of Sora, the AI video-generation app that had, less than a year earlier, landed a $1 billion partnership deal with Disney. It’s the latest in a long line of OpenAI projects that, after promising to change the world, have instead found their way to an early grave.

At a time when companies like Nvidia, Apple and Microsoft carry valuations equal to roughly 25% of US GDP, investors are still betting on AI, even though the cost-to-benefit ratio hasn’t added up so far.

What does all this mean for folks who actually use AI tools in their creative workflows? How about those who have been heavily encouraged to use these tools despite their trepidation, or those who outright refuse to use a tool built directly on the back of copyrighted material: in essence, on the theft of other artists’ work?

At the University at Buffalo, home to Empire AI and one of the first universities to formally implement AI training across humanities programs, I’ve had a front-row seat to how these tools are actually being discussed and used. As a career development specialist, I’ve seen the language around AI adoption change in the workplace and witnessed the challenges faced by entry-level employees and senior staff alike. As a former newspaper director, creative writer and graduate student, I’ve been fascinated by the rise of generative AI ever since a favorite professor of mine admonished me and a class full of writers that we were doomed to unemployment.

I’m glad to say that, years after graduation, the folks from that class are generally doing well. I never did end up losing my job at the newspaper. In fact, before I chose to leave for academia, we’d hired a couple of extra staff members, approved some more freelancers, and our clicks were performing better than ever. Anecdotal, I know. But despite some doomsday predictions, our human-run operation seemed built to last.

Since leaving the paper and joining UB, I’ve continued to follow the top AI publications alongside those in the creative writing space, so this article will focus mostly on written material. While it’s easy to acknowledge the power of AI tools, I’ve never been convinced that AI will write a great novel or short story, or produce a great movie or television show – at least, not without such intensive human involvement that you might as well say Microsoft Word wrote the novel, or Adobe Premiere made the movie.

The conversation around the second wave of AI has as much to do with how AI is perceived as with how it operates. There’s a huge gap between that perception and reality, and that gap explains why AI development seems to have slowed from the record-setting pace of adoption that followed the launch of ChatGPT.

The biggest reason AI hasn’t taken the next jump and replaced all of us white-collar workers starts with this fact: AI is not intelligent in the sense that humans are. What currently exists is a pattern-recognition and text-generation tool trained on enormous bodies of information: an LLM, or large language model. The jump to the next step, called artificial general intelligence (AGI), would theoretically produce an AI model that can reason across domains. It would have its own goals, recall past tasks to inform new ones, learn tasks without retraining and create its own plans.

I use the word “theoretically” because no such system exists, and it seems we’re a very, very long way from making the leap from LLMs to AGI. A calculator is not a mathematician, and calling the two one step apart is quite a jump.

Many investors are betting on AI as though AGI already exists, and many developers are betting on their technology taking the described “next step” far sooner than it can. A deadline tends to reveal the truth; hence, the long list of dead projects.

What we’ve got now is largely what we’re stuck with until the next big change.

So what if AGI is a long way off? LLMs still rule the day, right? Aren’t we still on the precipice of mass underemployment, as AI takes away more tasks previously done by humans?

Let’s start by looking at what’s already happening.

1. Each class of employee will leverage their unique advantage.

AI is proving to be a more valuable tool to writers than Microsoft Word was, or to researchers than the internet was, and on a shorter time horizon. It may not be able to write the next great novel, but it can help you figure out what’s working and what isn’t in your own work far faster and more accurately than you could on your own. It can’t do the research for you, but it can comb the vast literature of your field and pull studies directly relevant to your subject weeks, if not months, faster than you could before.

For mid-level and advanced career specialists like senior associates, account managers and creative directors, AI has the power to make you, a creator, many times more efficient, because you have the expertise necessary to manage it. You know when a suggestion misses the mark and have the ability to correct it; you know when cited research was not thoroughly vetted or came from a compromised source, and you can work with the tool to discover better options. Your training was built in early years of struggle, often in school and as an entry-level employee, and solidified over the years you spent getting it right before AI was an option. When faced with a new tool, you have the ability to integrate it and make what you already know work better.

The challenge for senior-level staff will be integrating a complicated new tool with a bad reputation. Luddites appear in every generation for a variety of reasons, and the senior staff in danger of falling behind are those unwilling to integrate the new technology into their workflow. Integration does not have to mean generating an image or a draft of an article; in fact, any work that was clearly generated by AI will continue to draw ire from creatives and clients alike (unless its use is clearly disclosed). Rather, an experienced employee could simply use an AI brush tool in an Adobe product to add a highlight or remove a misplaced figure from a background, far faster than they could with a traditional tool. AI can scan an article and recommend a sentence or two to cut. These minor adjustments can have a big impact without forcing you to lose your creative integrity.

Those newer to the workplace will face different challenges. They’ve used AI to some degree throughout their studies; if producing an essay was too hard, they could have a workable first draft in seconds. Forget about paying for an expensive line editor – they’ve got six different versions online, all charging less than ten dollars a month. Their expertise lies in the proactive integration of AI into whatever space they enter. To them, this is simply another tool.

However, these same entry-level workers still lack the experience to know whether what AI helped them produce will actually resonate with a human audience. Though the work itself is easier, the timeframe to go from beginner to expert will remain largely the same. The biggest change: if AI replaces the “grunt” work newer employees used to start out with, they will lose access to many of the opportunities their predecessors once had.

We’re left to ask: will those trying to enter the workforce be cut out of the career pipeline entirely, before they’ve even had a chance?

The worst-case scenario is that young creatives will lose the opportunity to develop creative skills and instead rely entirely on AI to generate content for them: lost in the woods early on, dismissed by other creatives and relegated to a period of disillusionment with the creative industries; that is, until a foray into hard work brings them back.

2. Even more people will sound and look the same. Real creativity will cut through the slop.

Take a scroll through your LinkedIn feed. What percentage of posts sound like they were written using AI? Is it 30%? 50%? 90%?

Is it hard to tell?

There will be folks who argue they can accurately spot the usage of AI from the techniques used by the publisher (publisher here is a wide term; while we may not all be writers in the second age of AI, everyone with access to a social media page is now a publisher). Those techniques include, but are not limited to:

1. “It’s not X, it’s Y” statements.

2. Unnecessary paragraph breaks for emphasis.

3. Insightful, beautifully technical sentences that sound great but, in reality, say nothing.

4. An aversion to taking a strong stance one way or another.

5. Repetitive sentence length and composition.

6. Confidence in statistics that are outdated, or outright incorrect.

7. The dreaded em-dash.

However, the reason AI loves these techniques is that human writers love them. They’ve proven effective. It’s a bummer for well-trained humans that our formerly beloved techniques have been co-opted by a different user. While the em-dash is too effective a tool to lose completely, as are many of the listed techniques, human writers will forever have to be careful how we use them to avoid accusations of AI use. It’s too much to ask that we might avoid suspicion altogether.

If something is too present it loses value; the more these techniques are used by poor writers with minimal effort, the less effect they will have even when used properly.

On the other hand, you’ve noticed when a piece of writing appears with real distinction. I’m not talking about a hastily added grammatical error as proof of humanity – rather, a strong opinion, a research paper with well-matched sources, a flowery sentence that technically runs a bit too long (this one sentence alone contains examples of both the listed “AI alert” techniques and “human error”). That distinction, a sense of “newness” or “difference,” will clue an audience in to the fact that the creator, at the source, is human, even if AI was somehow involved in the creation process. AI as editor rather than generative creator could be viewed as a tool to use rather than a thief to punish.

As AI-generated work continues to dominate the internet, generative AI will increasingly stand out as LLMs run out of human material to copy. That takes us to our next point.

 

3. AI content will continue to get worse.

LLMs had a huge head start because of the sheer amount of material that already existed on the internet. New models could effectively consume material from across the web, even behind paywalls and copyright protections, because there was no built-in way to guard that material against a kind of thief that didn’t yet exist. Additionally, the internet hadn’t yet been flooded with AI material, so the massive datasets the LLMs trained on were not yet polluted by AI output.

In short, LLMs had complete free rein over the material that would make them most powerful, without any of the drawbacks that would have naturally existed or that we could have put in place.

However, from the production of the very first AI-generated article onward, LLMs have been getting weaker, because more and more of the material they consume wasn’t created by a human. Like inbreeding damaging DNA over generations, AI-generated training material hampers an LLM’s ability to distinguish when a technique is being used properly; the model instead decides that the more a technique is used, the more correct it must be to use it. An LLM is a system that elevates averages to the status of truth. If an LLM is trained on enough AI-generated material instead of human-made work, the entire system threatens to collapse into a jumble of poorly used techniques.

Already, you’d be hard-pressed to go anywhere on the internet without running into AI-generated material. The more that continues, the worse the generated material will become.

 

4. Legislation will start to be developed around copyright, fair use and plagiarism. Artists will continue to shun users of generative AI.

Generative AI is the talk of the world, with fears about mass unemployment, water usage, environmental damage and more. There is also an unbelievable amount of money involved.

Generative AI can’t stay unregulated like this for much longer. As the next generation of officials take office, it will be up to them to create new rules around the usage of AI. We will see the creation of new positions around the ethics of generative AI use.

Different industries will create their own regulations. Just this past month, Hachette Book Group, one of the Big Five publishers, pulled Mia Ballard’s originally self-published novel, Shy Girl, from the shelves under suspicion that portions of the book had, at the very least, been edited too heavily using generative AI (new reports indicate the book may have been over 78% generated by AI). Hachette made this decision after scouting Ballard and offering her a rare contract on the strength of the book’s success on the indie circuit.

However, when Hachette was forced to admit that fears of AI use in the project might be real, the backlash from writers and readers was swift and clear. Readers didn’t want the book on their shelves. Hachette had to publicly denounce the creative process used to produce the novel, at great cost to its business and, more importantly, its reputation.

In the publishing industry, a clear line was drawn. I expect other industries, and other artists, will face similar trials as the year continues. Once the legal ramifications settle, it will be up to each agency, and each audience, to decide what type of AI content we’re willing to tolerate.

Over the next few years, I expect we’ll see new industries arise in which experts in various creative fields determine which copyright laws were broken by which AI systems, and how much money certain AI companies will need to pay out to original creators. We’re sure to see huge teams of legislators and experts called in to create and manage the new systems that will eventually lock AI in, not as a fringe tool, but as a fundamental one. If AI really is going to be as world-altering as investors predict, we should see the creation of entirely new sectors of white-collar work to keep up with demand.

It happened before: when the horse-drawn carriage industry fell to the automobile, society reeled with fear of unemployment, yet only a few years later, new industries had blossomed in place of the old. The jobs that are disappearing now will be replaced, too.

Conclusion

So far, my English professor has turned out to be wrong when it comes to my classmates and me. To his credit, that professor specialized in technical writing: the boring, grammatically accurate yet stale kind of writing that we creative writers sought to avoid. For that type of writing, perhaps he was dead right. Perhaps it really is time to retire those positions. Though those classes were great for my development, they were never really what I wanted to do.

That’s one of the key words as we think about the future: development. I think about the students who won’t have the opportunity to struggle to produce a technically correct chapter of a manual for a piece of construction equipment (believe it or not, this was a common assignment in technical writing courses) or piece together a rulebook that will guide the direction of a new organization. The newfound ease of producing this kind of writing does have the potential to devalue it. Perhaps this increased access will allow more people to create things they don’t truly understand.

It used to be incredibly difficult to become great at writing, perhaps one of the hardest, most precise skills in the world. Now, anyone with internet access can mimic greatness with minimal effort. But that mimicry gets increasingly obvious with each passing day. Eventually, it will lose its power. Maybe it already has. It will be up to the masters to create the next generation of what we consider to be great work.

I don’t believe in the death of the human artist. If anything, I believe in the rise of a new type of artist, one that reflects the traditional values of the humanities: creation for the sake of aesthetic achievement, feeding the soul rather than the stomach, and work that connects across cultures and time periods rather than instilling fear of the unknown.

If you’ve made it this far, despite the long sentences, overused or misused punctuation, conclusions that only reveal more questions or other techniques that ChatGPT and technical-writing faculty alike have tried to purge from my repertoire, I hope you feel confident you’ve just read a piece written entirely by a human being. My sentences might resemble a bumpy road. I hope the jostling felt familiar, as did the necessity of slowing down.

See you next time.