AI is having a massive impact on the entire technology landscape. But knowing how to use it best—or even when not to use it—is critical, notes Jesse Reiss.
Reiss is co-founder and chief technology officer at Hummingbird, a Walnut, California-based company that provides a platform to help compliance teams fight financial crime. Reiss is responsible for developing and implementing the company’s product and technology strategy to enable financial institutions to maintain compliance with complex and evolving regulatory requirements.
He spoke with StrategicCIO360 about AI’s impact on data collection, the “two camps” of professionals responding to the technology and the questions to ask before you deploy AI.
Data has been called the “new oil”—a strategic resource that will differentiate companies. Is that still true? How do recent trends in AI impact the value of data?
The latest trends in AI highlight two things. First, it’s now possible to generate fairly realistic synthetic data. There has been discussion around using AI, for example, to conduct trials or political research: ask the AI to act as a representative of a particular group and then pose research questions to it. If we can leverage AI to generate training data, we don’t need the same volume of real-world data to begin the training process.
Second, with cutting-edge AI models, it’s possible to use prompt engineering or “fine-tuning” with relatively small amounts of data to achieve particular outcomes. In text generation, for example, we may want to ensure a particular style or voice.
AI can be prompted to achieve these different styles with a single example or even just a description of the style. Fine-tuning could get even better results, and you may need at most a few hundred past writing samples to fine-tune effectively.
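The single-example style prompting described above can be sketched as follows. This is a minimal illustration of assembling a one-shot prompt; the prompt format and all names here are illustrative assumptions, not any specific vendor's API.

```python
# A minimal sketch of one-shot style prompting: convey a target voice with a
# short description plus a single example. The format is an illustrative
# assumption, not a specific model provider's required structure.

def build_style_prompt(style_description: str, example: str, task: str) -> str:
    """Assemble a prompt that conveys a target style with one example."""
    return (
        f"Write in the following style: {style_description}\n\n"
        f"Example of the style:\n{example}\n\n"
        f"Task: {task}"
    )

prompt = build_style_prompt(
    style_description="terse, plain-spoken, no jargon",
    example="Ship it. If it breaks, we fix it fast.",
    task="Announce the quarterly compliance report to the team.",
)
```

The resulting string would be sent to whichever model you use; the point is that one example plus a short description is often enough to steer the output's voice.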
How should technology professionals redirect their skills to capitalize on the advances of AI?
I foresee the IT profession bifurcating into two camps. One group will focus on AI development itself: building, training, scaling and maintaining AI models and systems. Being in this group will require knowledge of complex computer science concepts and a deep understanding of the advanced math used in AI.
The second group will focus on incorporating AI into product solutions and customer use cases. There are many opportunities for AI to impact a wide array of professions. People in this group will need to understand customer pain points and technology’s role in addressing them. Bringing AI’s power to bear in support of the business’s human element is vital in this new era.
What’s the potential downside of AI and what must leaders think about when testing and deploying the technology?
Cybersecurity and data privacy are top concerns. Many new AI models are owned and operated by third-party companies. A lot of data has to flow to those companies to fully leverage the AI model.
For example, OpenAI could be given access to sensitive identifiable information such as credit card numbers and social security numbers—or in our case, information about financial crime investigations. That’s a little concerning because these AI companies are collecting a huge amount of data and have yet to demonstrate that the data is perfectly protected from potential leaks.
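One common mitigation for this exposure is to redact obvious identifiers before text ever leaves your systems. Below is a minimal sketch; the two patterns are illustrative only, and a production PII scrubber would need far broader coverage (names, addresses, account numbers) than this.

```python
# A minimal sketch of redacting sensitive identifiers before sending text to a
# third-party AI service. Patterns are illustrative, not a complete PII scrubber.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),    # US social security numbers
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),  # likely credit card numbers
]

def redact(text: str) -> str:
    """Mask sensitive identifiers so the raw values never leave your systems."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

safe = redact("Cardholder 4111 1111 1111 1111, SSN 123-45-6789, flagged for review.")
```

The redacted string, rather than the original, is what gets sent to the external model; the mapping from placeholders back to real values stays inside your own infrastructure.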
A trend I’m seeing is companies wanting to run their own AI models and systems locally. This gives the company stronger control over data exposure to avoid security and privacy concerns.
Another concern with AI is “hallucination”: AI can sometimes return flatly incorrect information. For example, if you challenge an AI model and tell it that 1 + 0.09 equals 1.08 rather than 1.09, it will often just agree with you. AI models are not deterministic systems; they are probabilistic. So to some degree there is randomness in their answers, which can produce incorrect information.
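The probabilistic behavior behind this can be illustrated with a toy sampler (not a real language model): the model assigns probabilities to candidate answers and draws one at random, so the same question can yield different responses. The distribution below is entirely hypothetical.

```python
# A toy illustration of probabilistic generation: answers are drawn from a
# probability distribution, so low-probability wrong answers occasionally
# appear. The distribution here is hypothetical, not from any real model.
import random

def sample_answer(candidates: dict, rng: random.Random) -> str:
    """Draw one answer according to the model's probability distribution."""
    answers = list(candidates)
    weights = [candidates[a] for a in answers]
    return rng.choices(answers, weights=weights, k=1)[0]

# Hypothetical distribution over answers to "What is 1 + 0.09?"
distribution = {"1.09": 0.90, "1.08": 0.07, "1.9": 0.03}

# Repeated runs can disagree, even with the same input.
first = sample_answer(distribution, random.Random(0))
second = sample_answer(distribution, random.Random(1))
```

Even though "1.09" dominates, roughly one answer in ten here would be wrong, which is why sensitive workflows need a human check on the output.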
In our industry, the impact of hallucination can be catastrophic: it could dismiss an active crime or freeze an innocent customer’s funds. Sensitive situations like these highlight the importance of maintaining human involvement. Technology can be an effective augmentation of human capacity, but not a replacement.
A third concern is bias. AI is trained on human output, and humans have inherent tendencies such as unconscious biases. If that bias gets fed into the system, the AI model reinforces it. There are some interesting ways to potentially deal with that, but we’re still learning. One option is to incorporate a reverse bias into the training set, skewing the training data to counteract the implicit bias.
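One simple form of the reverse-bias idea is inverse-frequency reweighting: if one group is over-represented in the training data, weight each sample by the inverse of its group's frequency so every group contributes equally during training. A minimal sketch, with illustrative group labels:

```python
# A minimal sketch of counteracting imbalance in training data by weighting
# samples inversely to their group's frequency. Group labels are illustrative.
from collections import Counter

def inverse_frequency_weights(groups: list) -> list:
    """Return one weight per sample, inversely proportional to group frequency."""
    counts = Counter(groups)
    return [len(groups) / (len(counts) * counts[g]) for g in groups]

groups = ["A", "A", "A", "B"]  # group A is over-represented 3:1
weights = inverse_frequency_weights(groups)
# After weighting, groups A and B each carry equal total weight.
```

This is only one technique among several (resampling and fairness-constrained training are others), and it addresses representation imbalance, not every form of bias in the data.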
How can technology leaders think about AI as a strategic advantage? What will set companies apart in a post-AI world?
What I’m seeing is that the companies leveraging AI really well today are those with existing products that collect data and can support AI-powered workflows. AI feels a bit like adding spice to a dish; it’s not the dish itself.
Before deploying the technology, it’s worth asking the following questions: Does this technology make sense for my customer base? Is there a specific use case that will benefit from the technology? What could go wrong if the AI fails? These questions will help companies better understand the role AI should play in their offerings.