Enterprises were making slow but steady progress with artificial intelligence (AI) before November 2022. They had developed data foundations and models and were using AI for applications ranging from predictive and prescriptive analytics to chatbots and virtual agents.
Then OpenAI released ChatGPT, and the floodgates opened. C-suite leaders and teams quickly saw how generative AI could understand, learn, and create, improving a wide range of applications. Suddenly, everyone was interested in generative AI.
It’s not surprising that 70% of organizations are now exploring generative AI’s ability to boost innovation and worker productivity. In fact, McKinsey projects that generative AI could unlock up to $4.4 trillion annually in new value across 63 different use cases. Among business functions, sales, marketing, software engineering, customer operations, and product R&D stand to benefit the most, analysts say. While enterprises can use publicly available platforms like ChatGPT, most will be building task-based or domain-specific models on top of them to optimize use cases and meet their unique requirements for personalization, accuracy, security, compliance, governance, and more.
Purpose of This Blog
The purpose of this blog is to provide IT leaders and teams with insights on how they can use advanced technology to transform IT operations.
To explore this topic in depth, podcast host Michelle Dawn Mooney welcomed Elzar Simon to The Hitchhiker’s Guide to IT to discuss the future of AI. Elzar Simon is a senior IT director for global infrastructure at New York University (NYU) and is the author of two books: A.I. Hacked: A Practical Guide to the Future with Artificial Intelligence and AI Hacked 2: Reimagine the Future. This blog covers part one of that conversation.
Simon has more than 35 years of global IT leadership experience in industries including international port and shipping, finance, health care, government, IT, and higher education, and is the recipient of multiple awards. At NYU, he leads the global infrastructure team, which supports the university’s campuses in New York City, Shanghai, and Abu Dhabi. Simon’s team spearheads the NYU IT Automation Center of Excellence, which provides governance and coordination for the university’s automation and AI initiatives. He is also part of a technology group at NYU that explores the use of AI in support of teaching, learning, and research.
Defining AI and How it Differs from Other Technologies
Host Mooney asked Simon to define AI for listeners. Simon compared and contrasted an encyclopedia’s definition with his own. Britannica states that AI is “the ability of a computer or a computer-controlled robot to perform tasks commonly associated with human beings because they require a certain level of human intelligence and discernment.”
Simon offered his own definition, saying “AI is a machine that can think and act like a human being.” As a result, AI can be software, a device, an appliance, or a robot. While most technology helps people perform their work, AI can do it for them.
Another difference is that while computers follow pre-programmed instructions, AI algorithms enable computers to learn and solve related problems based on what they have been trained on.
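That distinction can be made concrete with a toy example. The sketch below (illustrative only, not any specific AI product) trains a minimal perceptron to reproduce logical AND from labeled examples: the behavior is shown to the program, not programmed into it.

```python
# Instead of hard-coding the rule for AND, a tiny perceptron
# learns it from labeled examples via the classic update rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn two weights and a bias from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Training data: the behavior of logical AND, demonstrated, not coded.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The same training loop, fed different examples, would learn a different behavior with no code change; that adaptability is what separates learning systems from pre-programmed ones.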
AI tools aren’t all-knowing, said Simon. As a result, if models are trained on incorrect or limited information, they will produce inaccurate, biased, or even misleading responses.
AI Can Beat Humans at Complex Games
So, just how good is AI compared to human intelligence? Complex games can provide insights into this issue, said Simon. Google DeepMind’s AlphaGo defeated the world champion of Go, Lee Se-dol, who then announced his retirement. Go has roughly 2.1 × 10¹⁷⁰ legal board positions, making it impossible for humans to optimize every move the way AI can.
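A back-of-the-envelope calculation shows why that number rules out brute force. The evaluation rate below is a deliberately generous assumption, not a real benchmark:

```python
# Even at an assumed 10**18 positions evaluated per second, the
# commonly cited ~2.1 * 10**170 legal Go positions could not be
# enumerated within the age of the universe (~4.4 * 10**17 seconds).
positions = 2.1 * 10**170
rate = 10**18                    # positions per second, hypothetical
age_universe_s = 4.4 * 10**17
seconds_needed = positions / rate
print(seconds_needed > age_universe_s)  # True, by well over 100 orders of magnitude
```

This is why Go-playing AI relies on learned evaluation and search heuristics rather than exhaustive lookahead.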
Several years before that, Watson, IBM’s AI platform, beat human contestants on Jeopardy. The AI engine was able to provide accurate answers in the form of a question, as the game requires.
Understanding AI’s Capabilities
Simon sketched out different AI capabilities. They include:
- AI tools that use natural language processing to converse with humans by voice or text. For instance, the digital assistant Siri is an AI.
- Large language models (LLMs) like OpenAI’s GPT-3.5, the model behind ChatGPT, which has 175 billion parameters and is fluent in 95 languages. Other LLMs include Anthropic’s Claude, Google’s Bard, and Meta’s Code Llama. LLMs use generative AI, which can understand, learn, and create to produce content and imagery. Simon said that based on earlier language training, Google Bard was able to learn Bengali on its own, and AI may be able to crack the code of ancient languages that have disappeared.
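The core idea behind the language models in the list above is learning, from training text alone, which token tends to follow which. Real LLMs do this with billions of parameters over subword tokens; this toy sketch uses single characters and simple counts:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each character, which characters follow it in the text."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def most_likely_next(counts, ch):
    """Greedy prediction: the most frequent follower seen in training."""
    return counts[ch].most_common(1)[0][0]

corpus = "the theory of the thing"   # stand-in training data
model = train_bigrams(corpus)
print(most_likely_next(model, "t"))  # 'h' -- learned from the corpus, not hard-coded
```

Scale the same predict-the-next-token idea up by many orders of magnitude, and you get the fluency that makes LLMs useful across so many business functions.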
Enterprises are using generative AI for a wide array of use cases, including personalizing marketing, developing software code, and automating processes across business functions.
Simon said that ChatGPT and Bard use machine perception, which mimics how humans use their senses to make decisions. For example, facial recognition technology enables computers to recognize individuals in photos, making categorization work dramatically faster. Machine perception technology is also used to develop autonomous vehicles and robots, optimize assembly line processes, and guide healthcare processes.