What AI Can—And Can’t—Do For CIOs

Jonas Hansson, CIO, Axis Communications
Don’t look to AI to make your job easier right now, says Jonas Hansson, information chief at Axis Communications. And that’s OK.

When AI makes a decision, who is responsible? We are.

Information technology professionals would do well to remember that, says Jonas Hansson, CIO of Axis Communications, a network technology company based in Sweden with U.S. headquarters in Chelmsford, Massachusetts. Hansson spoke with StrategicCIO360 about the importance of monitoring any AI program, why cheaper isn’t necessarily better and why he would never trust an AI-written biography.

In your opinion, how will generative AI disrupt the current technology landscape? How will it evolve over the next five to 10 years?

As far as disruption goes, previous paradigm shifts show the way. While certain job roles will be completely replaced by “mechanical AI” performing bread-and-butter tasks, other roles will be redefined and will grow in capabilities and responsibilities with the help of AI. And most importantly, entirely new roles will be created.

I think this shift will benefit people who understand, as opposed to those who just do. Understanding dependencies or other complex conditions means you will be able to find other ways to solve problems even if AI starts to change your existing role.

The first impact of disruptive technology is usually on chores like basic coding, writing and imaging. Taking over those tasks makes daily work less tedious and more efficient, while raising employees’ baseline level of capability.

Over time, as AI matures and gains greater trust, this will evolve into new areas, methods and technologies. But in my experience, technological development is much faster than the human ability to adapt. So wherever trust is needed, the speed of AI adoption will be defined not only by how fast we can innovate technically.

Ethics are important to keep in mind. Every software has bugs. All training data has bias. We must never forget that there is no such thing as complete objectivity—even if AI might give the deceptive impression of objectivity and impartiality.

In reality, I don’t see how that can be the case. Instead, we have to consider accountability. When AI makes a decision, who is responsible? We are. We cannot abdicate that responsibility: it remains our fault, and we would do well to remember it.

How do you maintain pragmatic expectations for AI while continuing to prioritize innovation?

Haha, simply ask your friendly neighborhood AI to write your biography. What you’ll receive back is a story mostly about someone else’s imaginary life. I used to work for the Swedish National Encyclopedia and am trained to enjoy facts and fact checking. Using AI means properly understanding what it does well, and what it guesses or assumes. In some ways, it appears that AI has attended the management training that tells people to never show weakness, i.e., “Fake it ’til you make it.”

In that respect, it’s more dangerous for our parents than our children. At least mine. My parents still trust the system. My kids question everything. With AI, that’s a good approach to take. You have to own and take responsibility for the result even if you pulled none of the weight.

For now—and taking text-based chat-AI as an example—that might mean using AI for structure, but not for facts. Ask it for an outline for an IT strategy—great. You get headlines and fill in the blanks yourself, making it your product. But if you ask it for facts, you’ll likely get a credible-sounding result presented with the confidence and assurances of a habitual liar. And then we haven’t even started on the legal risks: you either expose your own intellectual property or risk stealing someone else’s.

The short answer is: begin by assessing the real risk, the currently realistic potential and the quality of the answers you can expect. Find use cases where people can start using it risk-free and with reliable results, and hold off initially on the rest. Work to mitigate the bad side effects and risks, and grow over time. That is my game plan.

When introducing new technology, what are the top factors to consider to ensure end-user value?

An old journalistic saying is to never underestimate a reader’s intelligence, but never overestimate the same reader’s knowledge. The same is true for any launch. All heavy lifting must be done in the background. The system has to be self-explanatory to the end user and problems with laws or agreements should be addressed as part of the technical implementation.

Also, it’s important to add realistic supporting limits to the tech—just like raising the rails when you bowl with beginners. When users are let in, they should be allowed to roam free and use the tech at its current true potential, without having to second-guess themselves or stay updated on the latest IPR risks, which currently might be the case with some generative AI.

The second part of the answer is to do the heavy lifting yourself and understand that taking shortcuts and cutting corners does not create tangible end user value. Our job is to make it easy for the end user even if that means we have to work harder.

I use Spock’s quote from Star Trek to explain this: “Logic clearly dictates that the needs of the many outweigh the needs of the few—or the one.” In essence: make sure to keep the end users free from unnecessary red tape or tasks that you can do. It may make a bit more work for you, but it also makes for a better end user experience—which is always the right priority.

How can CIOs articulate the value of new technology investments amid constricted budgets?

There are some things I don’t get. One is the broken perspective applied to some crucial components of our daily lives. Offices and facilities are sometimes seen as just costs and not assets—that is, the only good office is a low-cost one. Sometimes the same is applied to IT: the cheaper the better.

That perspective is flawed. Offices are assets. They are places to meet, collaborate and build culture, loyalty and joy. In much the same way, IT, systems, security, compliance and AI are all levers in business processes that make or break a company. They have to be seen as a part of the business, of the skills and know-how, of the earnings and as a way to keep a company working. As a result, they must be measured against their potential and realized potential, not just the size of the bill.

For us, this means aiming for a business-oriented CIO role that communicates a lot. A role that builds bridges to all parts of the company. Someone who understands that decisions are made together with the business, not for it or by it.

The role is to ensure cross-functional efficiency in the company, acting as a generalist counterweight to silos and subject matter experts—enhancing their ability to succeed in their fields while always remembering that we measure success from the outside in.

We represent the combined efficiency of the company, and our role is to never suboptimize the company unknowingly. We represent the perspectives that have no say in the matter: colleagues in other departments who will be affected by a desired change, future colleagues who have not yet been hired and the company’s needs five years from now.
