
AI: The Good, The Bad & The Ugly for Businesses with Mike Saylor


ChatGPT and other large language models (LLMs) have created a “tsunami” of enterprise interest in artificial intelligence (AI), with nearly three in four companies piloting generative AI tools for high-value use cases last year. However, enterprises aren’t the only ones interested in AI. Bad actors are leveraging generative AI to personalize attacks, using business email compromise (BEC), phishing, voice impersonation, and other scams to trick employees into unsanctioned activities, such as sending wire transfers to unapproved banks or recipients. And according to a recent podcast guest, these attacks will only get worse.

Host Michelle Dawn Mooney welcomed guest Mike Saylor to Device42’s podcast, The Hitchhiker’s Guide to IT, to discuss “AI, the good, the bad, and the ugly” in business. Saylor is the CEO of Blackswan Cybersecurity and a professor of cybersecurity at the University of Texas at San Antonio. The podcast episode is available for viewing here.

The Purpose of This Blog

This blog aims to help enterprise employees increase their awareness of new AI-powered attack strategies so they can avoid being victimized.

Host Mooney asked Saylor to share a bit about his background. Saylor said he has a graduate degree in criminal justice and is currently finishing a Ph.D. He has worked in IT and cybersecurity for 30 years and taught computer science and cyber forensics for 24 years. Saylor said he loves to educate others about cyber risks so they can protect themselves against the latest attack strategies and techniques. 

Discussing the Origins of AI 

Host Mooney asked Saylor to give a brief primer on AI and its origins. Saylor said that AI is not new; companies have been deploying machine learning (ML) models for years, using them to optimize decision-making and other processes.

ML is a behavior-based tool that uses a baseline to detect deviations. Organizations have used ML to improve field-service fleet logistics and routing, identify crime patterns in communities, improve customer satisfaction, increase operational efficiency, reduce product waste, and achieve other goals. As such, ML model predictions represent a vast improvement over calculating formulas in a spreadsheet.

He explained that AI is an application of machine learning. AI models can provide descriptive capabilities, analyzing past trends; predictive capabilities, forecasting what likely will happen; and prescriptive capabilities, providing scenarios and recommendations users can implement. 

“But just like machine learning, AI is rooted in the body of knowledge that already exists. The next evolution will be even scarier as it creates new things. We’re not there yet, though,” said Saylor. 

Beneficial Applications of AI 

Host Mooney asked Saylor to discuss positive applications of AI before delving into some of its more nefarious uses. 

Saylor said his team has used AI and ML to differentiate between good and bad behavior across client networks. He used the example of cybersecurity alerts that notify users after their email accounts have been hacked. Rather than reacting after the fact, ML and AI help companies get ahead of that red flag by identifying malicious activity before it happens.

“From a military perspective, we call that getting left of the boom, getting ahead of the explosion, the threat, or the compromise,” Saylor said. To do this, enterprise cybersecurity teams must collect a lot of data, process and store it, and perform analytics. 

When users search on Google, they’re teaching the search engine’s AI to get smarter because the models evaluate what people are searching for and their online behavior. 

Similarly, cybersecurity teams leverage behavioral analytics: they develop baselines of known good behavior and then search for deviations, flagging when a machine, user, network, application, or search behavior ventures where it doesn’t belong. By doing this, teams can get ahead of the exposure, much like the precrime team in the movie Minority Report.
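To make the baseline-and-deviation idea concrete, here is a minimal sketch in Python. The feature (a user’s hourly login count) and the three-sigma threshold are illustrative assumptions, not any specific vendor’s implementation; real behavioral analytics platforms model far more signals, but the principle is the same: summarize known good behavior, then flag what falls outside it.

```python
# A minimal, illustrative sketch of baseline-and-deviation detection.
# The feature (hourly login counts) and the three-sigma threshold are
# assumptions for demonstration, not a production design.
from statistics import mean, stdev

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Summarize known good behavior as a mean and standard deviation."""
    return mean(history), stdev(history)

def is_anomalous(observation: float, baseline: tuple[float, float],
                 sigmas: float = 3.0) -> bool:
    """Flag observations that deviate from the baseline by more than `sigmas`."""
    mu, sd = baseline
    if sd == 0:
        return observation != mu
    return abs(observation - mu) / sd > sigmas

# Example: a user's logins per hour over a known good period.
history = [2, 3, 2, 4, 3, 2, 3, 4, 2, 3]
baseline = build_baseline(history)
print(is_anomalous(3, baseline))   # False: consistent with past behavior
print(is_anomalous(40, baseline))  # True: a deviation worth investigating
```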


How Bad Actors Use AI 

Generative AI has made it easier for more people to game systems, whether it’s students using ChatGPT to write papers and cheat on assignments or bad actors manipulating the technology. In addition, too many people accept generative AI output as truth without checking whether it is accurate.

“And so that’s scary because I think there’s a huge population of people that put a ton of trust in this technology, and it’s misplaced,” said Saylor, referring to generative AI’s tendency to hallucinate facts.

How Hackers Use AI with Business Email Compromise Attacks

Bad actors use techniques like BEC, which involves obtaining the credentials of someone who handles financial transactions for a company. They may contact a shipping clerk or receptionist and then use social engineering to find out who handles finances, whether that is an accounting or accounts payable clerk, the CFO, or the CEO.

“Bad guys … have all the time in the world. They don’t have day jobs. They wake up when they want and do as much work as they want, and they do it seven days a week. And so this could be months or years before they get to who they are interested in, and they will invest the time to do that,” said Saylor. These individuals often operate in criminal gangs or large groups funded by nation-states.

After gaining access to the right person, a bad actor will send an email that uses authority and urgency to force a transaction. For example, the hacker might pretend to be the CEO, claiming to be at an offsite and unavailable by phone, and request that the clerk make a large transaction. The bad actor may take over the CEO’s account or send the email from a similar-sounding domain so the clerk doesn’t detect the difference.

If the clerk emails back, the bad actor answers and validates the request. Then, the clerk makes the transaction, not realizing it is fraudulent. 

BEC is growing in scope and severity. The US Federal Bureau of Investigation says BEC is a $50 billion problem globally, with nearly 280,000 documented incidents. As a result, some companies now have policies that prevent personnel from making significant financial transactions based solely on email instructions. Instead, recipients have to make a phone call or physically walk down the hall to validate the request with its initiator.

So, how are bad actors using AI for BEC, phishing emails, and other scams? Many bad actors operate overseas, and English isn’t their first language. They use Google Translate to craft emails, which results in bad grammar, incorrect verb tenses or pronouns, poor formatting, or an odd tone. Users can more easily detect these emails as fraudulent. As a result, scams that may have been 80% successful previously now hit the mark only 60% of the time, said Saylor.

Now, scammers can ask generative AI tools to write phishing emails in the voice of a 50-year-old CEO who went to Harvard or someone who grew up in the South. The AI tool can even use content the target has previously authored, such as blogs or social media posts, to mimic their tone and language. As a result, these emails seem much more credible than their Google Translate-crafted predecessors. 

So, what about company policies that require transaction approvers to call their initiators? AI tools can sample individuals’ voices from any video or post online. Bad actors can take a snippet of someone’s voice and create an entire two-hour movie that sounds like them, said Saylor. Scammers can also use the cloned voice to hold real-time conversations as their target.

As a result, the security control of requiring an email recipient to validate a financial request via a phone call no longer works. “It’s got to be in person, face to face. And I don’t mean face-to-face with a camera because AI can impersonate me this way, too,” said Saylor.

How Companies Can Protect Workers and Their Businesses in the Age of AI 

Host Mooney asked Saylor to give listeners guidelines to protect their companies and jobs. She mentioned that workers get blamed and can lose their jobs if they don’t follow the proper protocols to detect and prevent scams. So, what can individuals do to be more proactive?

Saylor said he has been breaking into buildings for over 20 years, including banks, power plants, collection companies, and manufacturing facilities, and has a 100% success rate. Like Robert Redford in the movie Sneakers, Saylor tests companies’ security controls to see if he can convince employees to subvert them.

He and his team record video and audio of these exercises, in which he uses tactics such as pretending to be a fire extinguisher inspector to gain access to facilities. After breaching a facility, Saylor conducts a debrief and shows teams what they didn’t do, such as requesting ID, calling the number of his supposed inspection company, or escalating the request to a superior.

Saylor offered listeners the following advice:

  • Buy time: Saylor said one of the most important things people can do is slow down, ignore the urgency, and fully consider the request. For example, does it make sense that the company CEO would send an email in the middle of a business meeting and refuse to take a call? Is it logical that someone in the email recipient’s family is in trouble, and the solution is buying them Apple gift cards? Is it credible that someone a worker has never talked to is now relying on them to do something important? If not, then the request warrants further exploration.

  • Escalate the request to a superior: The email recipient can involve others, such as a manager or a security expert, in evaluating the request to determine if it is legitimate.

  • Try to contact the initiator another way: Don’t use the contact information provided by the requestor, such as emails, websites, and phone numbers. Use other channels to contact the supposed initiator of the action. For example, a mother whose daughter emails that she is in trouble might call the daughter directly before sending money, and reach out to the daughter’s friends or teachers if she can’t get through.

Using Data and AI in a Responsible Way 

Host Mooney asked Saylor for advice on how companies should store and use the customer data they leverage to train AI models. 

Saylor said that companies need to put boundaries around AI. They should define what data models have access to, how data is used to train models, and how models are used. These guardrails will differ depending on whether the solution is public, consumes externally available or internal data, and uses customer personally identifiable information (PII) and other sensitive data. 
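As an illustration of what such guardrails might look like in practice, the hedged sketch below enforces an allowlist of approved fields and strips personally identifiable information before records reach a training pipeline. The field names, the PII list, and the policy itself are hypothetical examples, not a prescribed standard.

```python
# Hypothetical sketch of a data guardrail for an AI training pipeline.
# The approved schema and PII field names below are illustrative assumptions.
ALLOWED_FIELDS = {"ticket_id", "category", "resolution_time_hours"}
PII_FIELDS = {"name", "email", "phone", "ssn"}

def filter_for_training(record: dict) -> dict:
    """Keep only approved, non-PII fields before a record is used for training."""
    return {
        key: value
        for key, value in record.items()
        if key in ALLOWED_FIELDS and key not in PII_FIELDS
    }

raw = {
    "ticket_id": 42,
    "email": "pat@example.com",   # PII: must never reach the model
    "category": "billing",
    "resolution_time_hours": 1.5,
}
print(filter_for_training(raw))
# {'ticket_id': 42, 'category': 'billing', 'resolution_time_hours': 1.5}
```

Enforcing the policy in code at the pipeline boundary, rather than relying on reviewers to remember it, is one way to make the guardrail auditable.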

Companies already comply with regulations governing how data is used for business processes. Now, they must ensure that the data AI consumes aligns with their corporate objectives and regulatory constraints. For example, a healthcare organization using AI to design a cancer drug may feed it case studies, patient records, and doctor notes. It will want to keep that data and model usage internal, for research purposes only, rather than putting the information out in public. As users of tools like ChatGPT know, the questions they enter into these LLMs can add to a corpus of knowledge that others can access.

AI solution designers should address these issues upfront rather than retrofitting fixes after risks emerge. Bolting security onto solutions after they’re developed costs significantly more than taking a DevSecOps approach from the start.

Taking Action by Improving Awareness of AI-Powered Scams 

Host Mooney asked Saylor to recommend resources for listeners that they can use to educate themselves about current scams and how bad actors are using AI. 

Saylor recommended the Stop. Think. Connect. campaign for its learning resources and CISA.gov, the website of the U.S. Cybersecurity and Infrastructure Security Agency. In addition, listeners can contact Saylor via LinkedIn or Blackswan Cybersecurity at 855-BLK-SWAN or BlackSwan-Cybersecurity.com.

“Don’t be scared; be diligent,” concluded Saylor. 

Watch the podcast


About the author: Rock Johnston