
Mike Capone, CEO of Qlik, driving the creation of a data-literate world where organizations tackle their most complex challenges with data.

Over the last few years, no technology has been discussed in the enterprise as much as AI. Its proliferation from automation to predictive modeling to new generative AI applications has proven it's a technology that's here to stay.
Like most disruptive innovations, its adoption within the enterprise has been marked by equal parts confidence and fear. The fear-mongering, however, needs to be tempered by the vast potential good AI can do, across society and across the enterprise. In natural disasters, for example, AI analysis can help nonprofit organizations stay ahead of preparedness needs.
As wildfires raged in Los Angeles in January, one of my company's customers, Direct Relief, used geospatial AI to show where healthcare facilities had been damaged or destroyed and where to prioritize relief efforts so the areas of greatest need were served first. In business, meanwhile, we've seen how fast-casual and even sit-down chain restaurants can protect their bottom line by applying AI-driven dynamic pricing. AI considers changing market prices for ingredients, average costs local to the restaurant and seasonally popular items to recommend pricing.
Dynamic pricing helps avoid food waste, as fresh items that spoil quickly can be priced to move toward the day's end. By reacting quickly to market conditions through pricing and careful promotion, AI could help turn around the fate of struggling chains.
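To make that idea concrete, here is a minimal sketch of how such a pricing rule might look in code. It is purely illustrative: the inputs (ingredient cost, a local cost index, a seasonal demand factor, hours until close) and the weights are assumptions for the example, not a description of any restaurant's or vendor's actual system.

```python
# Illustrative sketch of a toy dynamic-pricing heuristic for one menu item.
# All inputs and weights are hypothetical; this is not any vendor's model.
from dataclasses import dataclass

@dataclass
class MenuItem:
    name: str
    base_price: float        # standard menu price
    ingredient_cost: float   # current ingredient cost per serving
    perishable: bool         # spoils quickly if unsold today

def recommend_price(item: MenuItem,
                    local_cost_index: float,   # e.g. 1.1 = 10% above market average
                    seasonal_demand: float,    # e.g. 1.2 = seasonally popular
                    hours_to_close: float) -> float:
    """Recommend a price reflecting costs, demand and remaining shelf life."""
    # Start from the base price, adjusted for local costs and ingredient swings.
    price = item.base_price * local_cost_index
    price += max(0.0, item.ingredient_cost - 0.3 * item.base_price)

    # Seasonally popular items can carry a modest premium.
    price *= min(seasonal_demand, 1.25)

    # Price perishable items to move as closing time approaches, reducing waste.
    if item.perishable and hours_to_close < 2:
        price *= 0.8

    # Never price below ingredient cost.
    return round(max(price, item.ingredient_cost * 1.05), 2)

# Example: a perishable salad near closing time gets discounted to avoid waste.
salad = MenuItem("Seasonal salad", base_price=9.50, ingredient_cost=3.10, perishable=True)
print(recommend_price(salad, local_cost_index=1.05, seasonal_demand=1.15, hours_to_close=1.0))
```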
To achieve the upside of AI, the backbone of ethical development will be balancing AI regulation with innovation. Considering the rate of generative AI development, putting guardrails around the specific size of any given model could soon feel arbitrary. That was one part of the recently shut down AI safety bill that didn't resonate. As we wade through AI regulation as a society, we should promote ethical guardrails for AI without being too restrictive, particularly around ensuring that AI lifts society, not just corporations.
AI can indeed amplify both good and bad. Without flexible ethical frameworks, the risk is a tool that manipulates rather than informs. Much like a human employee who can be coached and learn from mistakes, AI should not be penalized for errors.
The key is improving it over time and addressing imperfections in the AI models. Accepting that AI will not be flawless is the first step in building guardrails that make sense and can stand the test of time. While AI needs regulation, we're still in the very early innings.
Regulation tends to lag behind technological advancement, and this is key to keep in mind. From public polling to the numerous AI safety bills being introduced around the world, a level playing field needs to be created for innovation so the advantage of AI doesn't belong only to the biggest players by default. In healthcare and life sciences, for instance, the biggest concerns that arise tend to be around data sovereignty and compliance.
Placing guardrails around usage, rather than around the technology alone, is what will ensure strong governance. We are only seeing the tip of the iceberg with AI in all facets of life and business. We must tread carefully but purposefully on the issue of AI regulation.
I remain steadfast in my belief that flexible regulation will be key to avoiding the stifling of innovation. Furthermore, competitive advantage is achieved when organizations experiment with AI tools and capabilities, finding new ways to combine them to drive new insights and value. As a recent McKinsey report noted: "It's as if everyone chose to use the same bricks to build a house that looks just like the one next door.
The value, however, comes not just from the bricks themselves but also from how they are put together—the vision and design for assembling those bricks into a home that people will want to buy." AI is inherently different from the traditional technologies we've experienced, and treating it the same way will not lead to the most fruitful results. It is through better governance of AI applications, not only the algorithms or models, that innovation and public trust will be safeguarded.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.