
Developing Your Principles of Responsible Use of AI

by Brendan O'Neill March 21, 2024

Better. Faster. Stronger. This is not only the mantra of many athletes as they practice their sport and hone their skills. It also represents the demands that seemingly every business puts on its website and marketing teams.

Reaching those goals and satisfying those demands can sometimes seem impossible. It’s like that old maxim of the Iron Triangle, where each corner represents a key attribute of a project: Good, Fast, or Low-Cost. Traditionally, you can only choose two. But, with the help of artificial intelligence (AI), it appears we’re stepping closer to delivering all three at the same time.

We know that many companies and individuals are using AI tools in a variety of ways to help them do things better, faster, and cheaper than ever before. But the ubiquity of these tools and the extreme speed at which they’ve been adopted has many of us sounding like Dr. Ian Malcolm from Jurassic Park: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

That brings us to the heart of the matter: using AI responsibly. Yes, we all can use AI, but should we? If we do use it, how are we using it? What rules govern our use of AI tools, and what are those rules based on? Companies, brands, publishers, and even governments must all develop a core set of principles for exactly when, where, and how they will use AI.

The Importance of Principles

For the past decade or more, digital marketers have known that people prefer to purchase from or work with companies that share their values. “In fact, 63 percent of consumers say they want to buy products and services from companies that have a purpose that resonates with their values and belief systems. They will even go out of their way to avoid companies that don’t mesh with what they believe—which goes to show that a company’s values have both internal and external implications,” contributing writer Brent Gleeson shared in an article.

To develop such a connection with potential customers, companies have devoted more resources to stating their mission, purpose, and values. But it’s not enough to simply state a set of core beliefs; how those beliefs are communicated matters just as much. Successful customer relationships are developed in three ways:

  • Shared Values: Express the core beliefs and priorities of your company, and understand how consumers may perceive them.
  • Authenticity: Ensure your values truly align with the company’s purpose and personality. Readers can spot a phony a mile away.
  • Transparency: Provide your values publicly and prominently—don’t make readers sift through dozens of pages to learn what you stand for.

Today’s organizations must understand the importance of similarly constructed principles for such a new, powerful, and controversial set of tools. A company’s perspective on the responsible use of AI may soon sit alongside established policies such as Environmental, Social and Governance (ESG) and Diversity, Equity and Inclusion (DEI).

If you’re developing yours, where do you start? The good news is that plenty of discussion and research has already been done on this subject, and many resources and examples are available to guide you along the way.

Organizations Weigh In

Harvard Business Review (HBR) identified the “pressing competitive pressure to fully embrace AI” and warned of the risks surrounding irresponsible implementation of AI, which “can result in severe penalties, substantial damage to reputation, and significant operational setbacks. The concern is that in their quest to stay ahead, leaders may unknowingly introduce potential time bombs into their organization, which are poised to cause major problems once AI solutions are deployed and regulations take effect.”

To help prevent such mistakes, HBR published its own checklist, 13 Principles for Responsible AI at Work, covering what it identified as the major areas of concern.

Similarly, the Content Marketing Institute (CMI) recently detailed areas of focus for developing principles, processes, and policies as part of an AI operations plan.

The White House Can Help

The need for guiding principles regarding AI is not limited to the tech space, corporate websites, and publishers. The U.S. federal government is well aware of the potential risks involved. In the fall of 2022, the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights, which includes its own set of five principles accompanied by From Principles to Practice, described as “a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process.”

The five principles outlined in the Blueprint for an AI Bill of Rights “should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values.” The five principles described are:
  • Safe and Effective Systems
  • Algorithmic Discrimination Protections
  • Data Privacy
  • Notice and Explanation
  • Human Alternatives, Consideration, and Fallback

Learn from the Trailblazers

It’s one thing to understand the need for and value of creating a set of guiding principles that establish how your company will employ and govern the use of AI in its various forms. It’s quite another to actually craft the words and put your goals and values in writing.

Worry not—in addition to the White House, many large and influential organizations have already established their positions on this subject. By reviewing their principles, you’ll find commonalities that can serve as best practices and inform your own efforts.

(Note: This is a glimpse at the objectives and principles that leaders in this space are publicly providing regarding their position on using AI. For a more detailed, descriptive view, please visit Google, Amazon, Microsoft, and Intel’s respective websites for additional information.)

Public principles and perspectives regarding the use of AI are just another opportunity for companies to connect with their audiences. If done poorly, it can serve as a reason for your customers to choose your competitors. If done well, AI principles provide an additional entry point for people to develop a deep relationship with your brand.

Photo Credit: Toa Heftiba | Unsplash

Brendan O'Neill
Content Strategy Lead

Brendan O’Neill is a Content Strategy Lead at One North. He helps clients evaluate and reimagine content structure, development, tactics, and strategy incorporating both industry best practices and innovative methodology. Brendan has an extensive background in journalism, editing, and managing content for everything from local newspapers and trade publications to national consumer magazines and Fortune 500 brands.