

AI: The Good, The Bad & The Ugly for Businesses with Mike Saylor


Notes

In this episode of the Hitchhiker’s Guide to IT podcast, host Michelle Dawn Mooney discusses the nuances of modern IT management, focusing on artificial intelligence (AI), with guest Mike Saylor, CEO of Black Swan Cybersecurity. The conversation explores the evolution of AI, its positive applications such as enhancing cybersecurity, and the darker side involving cyber attacks, specifically business email compromise.

Mike details how attackers exploit AI to craft convincing phishing emails, impersonating high-level executives and manipulating employees into making financial transactions. He underscores the challenges in combating such attacks and the crucial need for awareness and diligence. The discussion also touches on the ethical considerations surrounding AI, emphasizing the necessity of setting boundaries to protect sensitive information, particularly in healthcare.

The episode concludes with a call for increased awareness, caution, and a shift away from over-reliance on technology. Mike advocates for a “Stop, Think, Connect” mindset, urging listeners to verify information before taking significant actions. He provides various resources for further learning and underscores the importance of ongoing education in the dynamic fields of AI and cybersecurity.

Transcript

Welcome to another episode of the Hitchhiker’s Guide to IT podcast, brought to you by Device42. On this show, we explore the ins and outs of modern IT management and the infinite expanse of its universe. Whether you’re an expert in the data center or cloud, or just somewhat interested in the latest trends in IT technology, the Hitchhiker’s Guide to IT is your go-to source for all things IT. So, buckle up and get ready to explore the ever-changing landscape of modern IT management.

(Host: Michelle Dawn Mooney)
Hello and welcome to the Hitchhiker’s Guide to IT podcast series. Today we’re talking about AI: the good, the bad, and the ugly in business. And I am really excited about this one. It’s going to be a great conversation. We have a wonderful guest. Mike Saylor is CEO of Black Swan Cybersecurity and Professor of Cybersecurity at the University of Texas at San Antonio. And he has a lot of information to share with you today. So, Mike, thank you so much for joining us today.

(Guest: Mike Saylor) Hello. Thank you.

(Host: Michelle Dawn Mooney)
I want people to learn a little bit more about you before we get into this conversation. So can you give us a brief bio, please?

(Guest: Mike Saylor)
Oh, brief. My goodness. So I’ve been in IT and cyber for about 30 years. I’ve been teaching computer science, cyber, forensics, all the stuff you need to know to be good in cyber for about 24 years, both the business side and the criminal side. So my graduate degree is in criminal justice, and I’m actually working on my doctorate, which hopefully will be done this month. But I guess the summary is I’ve seen a lot. I’ve been around the block a few times, and I love to educate and share what I’ve learned with others so that they’re as capable and aware as possible when not only getting into cyber but also protecting themselves against bad actors in the cyberspace.

(Host: Michelle Dawn Mooney)
Yeah. And it really is about awareness, as you said. That’s the key word, because if we don’t know that these things are going on or we don’t know how to combat them, then we’re really at a loss. And when it comes to business, that can mean a lot of time, a lot of money. So as I said, I’m really excited to get into this conversation. So let’s start off here. Most of us know what AI is, unless you’ve been living under a rock, but artificial intelligence is pretty much everywhere. Let’s talk about this, though, the origin of AI, because some people may not realize how we kind of came to be using artificial intelligence, and then how it’s being used in a business sense.

(Guest: Mike Saylor)
Sure. So yeah, AI has been around for quite a while in one shape or another. I think we previously knew it and used it under the term machine learning. And machine learning is very helpful. But I’ll say this: just like every other great tool, bad guys have figured out how to use those tools for evil and nefarious purposes. Machine learning is all behavior-based: what’s the baseline, what are the deviations from that, what does that mean, how do we use statistics and metrics to become more efficient? Kind of like the delivery companies. They don’t do left turns anymore; it’s all right turn, right turn, right turn. Well, that was machine learning. They put data into formulas, they did all this analysis, and the machine learned how to give us the feedback and the direction to improve whatever it was we were working towards, whether that was reducing crime, improving customer satisfaction, efficiency, product, waste, whatever it was. Machine learning was that first evolution from just someone doing formulas in a spreadsheet. And now we have AI, which is the application and evolution of machine learning with some sense of independent thought. Well, thought’s a bad word, because that’s a human thing; we don’t want to get there yet. But AI is able to make decisions and do forecasts and an evolution of analysis that fundamental machine learning wasn’t capable of. And so that’s what we’re starting to see today. But just like machine learning, AI is currently rooted in the body of knowledge that already exists. So that next evolution is going to be even scarier as it starts to create new things. We’re not there yet, though.

(Host: Michelle Dawn Mooney)
Well, you know, you talk about thought and not throwing that in because that is a human characteristic that we think of. But then you take a step back and you think of all of that information that artificial intelligence is derived from. And I don’t know about you, but I mean, just on any given day, whatever I’m putting into a Google search and the amount of things that pop up, there’s just so much information for the next level of technology to kind of take from. So let’s dive a little deeper into, as you talked about, there are the bad guys out there using it for nefarious reasons. So let’s talk about the good part, first of all, some of the pros that we see AI being used for. And then let’s flip the switch and talk about those bad things we’re also seeing as well.

(Guest: Mike Saylor)
Well, the good things I experience every day in both of my day jobs. From a cybersecurity perspective, we use AI and machine learning to help differentiate true, known bad stuff. You know, someone hacking your account: we get notifications that say someone’s hacking your account. Well, that’s obvious. That’s a red flag we’re well accustomed to. However, machine learning and AI help us get ahead of that “someone hacked your account,” because machine learning and AI can identify the behavior of what’s about to happen. From a military perspective, we call that getting left of the boom: getting ahead of the explosion, the threat, the compromise. Well, how do you do that? You have to do it with a ton of analysis. And in order to do that analysis, you have to collect a lot of data. You mentioned Google search. Well, you’re actually teaching Google’s AI how to be smarter because of what you’re searching for and your behavior. And so if we can apply that behavior aspect, develop some baselines, and start looking for deviations from a machine, a user, a network, an application, a search, we can start to identify things and get ahead of it. It’s kind of like the movie Minority Report: how do we get ahead of crime? We can do that if we collect enough data. Well, that also requires a lot of processing and storage and analytics, and all of that has developed very rapidly over the last eight to ten years. But it’s becoming very effective. And that’s just from a cybersecurity perspective.

Then, from a learning perspective, in my other job as an educator, I can use AI to help me in my research. Hey, AI, go find me five articles on this topic from the last two years. And very quickly, I now have a list that has narrowed my investment in time, narrowed my search, and I’m able to very concisely get to what I’m looking for, hopefully. And I say hopefully because there are still flaws with AI. There’s this thing called hallucinations, and we can get to that in a minute. But from a research perspective, there’s that.

Well, then there’s the use of AI for nefarious purposes, and it’s not always bad guys. Sometimes it’s students. Right? Hey, write this paper for me. When I opened by saying we need to be aware of AI, a great example is that as an educator, I need to be aware that students may be turning in assignments generated by an AI. So how do I become comfortable or aware enough with that capability to determine whether this is original work, or did they, quote unquote, cheat by using a tool to do their work for them? I’ve had to adjust my grading and my analysis of assignments because of that.

Well, then, I think another problem we have as people is that we rely on technology a lot. We rely on our phones to tell us what our next meeting is or what route to take with our GPS. We rely on our computers to tell us the right answer, Google to give us the right results. Well, we’re also currently relying on AI and trusting the results when we ask a question. A diagnosis: I have these symptoms, tell me what’s wrong with me, AI. Or help me write this paper, or give me these resources. Well, one of the things you need to understand about AI is that it’s currently designed to give you the best answer it can find. If it can only find inaccurate answers, it’s going to give you the best inaccurate answer. Those are called hallucinations: it thinks it’s right, but really it’s just the better of the wrong. And that’s scary, because I think there’s a huge population of people that put a ton of trust in technology, and it’s misplaced.
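The baseline-and-deviation idea Mike describes, learning what normal looks like and then flagging departures from it, can be sketched in a few lines of Python. This is a toy illustration, not a production detector; the login-count data and the three-sigma threshold are hypothetical choices made for the example:

```python
import statistics

def build_baseline(values):
    """Learn a baseline (mean and standard deviation) from historical observations."""
    return statistics.mean(values), statistics.stdev(values)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from the baseline."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical example: a user's daily login counts over two work weeks.
history = [12, 9, 11, 10, 13, 12, 11, 10, 12, 11]
mean, stdev = build_baseline(history)

print(is_anomalous(11, mean, stdev))   # a typical day → False
print(is_anomalous(250, mean, stdev))  # a sudden spike → True
```

Real behavioral analytics layer many such signals across users, hosts, networks, and applications, but the core loop is the same: establish a baseline, then alert on significant deviation before the compromise completes.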

(Host: Michelle Dawn Mooney)
Absolutely. And getting back to my Google search: if you’re looking for, you know, what does this mean, what does this symptom mean, or how do I do this? As you said, there’s a lot of information to pull from, and if it is inaccurate information, then we can have some problems. So you’ve kind of just touched the surface of some of the negative aspects of AI. But I want to get into the nitty gritty here, because there are really crazy ideas, I guess not so crazy if you’re the bad guys, but ways, especially from a business standpoint, that businesses are being attacked on the cybersecurity front, and really just creative and ingenious, in my opinion. So can you give us some examples of ways that businesses are being tricked with the use of artificial intelligence?

(Guest: Mike Saylor)
Sure. One of those tactics is called business email compromise. And historically, or I’ll say traditionally, because it’s not just a historic attack; it still happens today. The way it happens is we get the credentials for someone involved in financial transactions within a company. It may start with a shipping clerk or a receptionist, but with access to that account, or other accounts, we figure out who the right people are, and we eventually get access to that accounting clerk, or that accounts payable clerk, or the CFO or the CEO of a company. Bad guys will eventually get that access, and they have all the time in the world. They don’t have day jobs. They wake up when they want, do as much work as they want, and they do it seven days a week. So it could be months, it could be years before they get to who they’re interested in, and they will invest the time to do that. So now they’ve got access to the right person. They will then send an email to the people involved in financial transactions, and that email will say: hey, Bob, I know you’re responsible for making wire transfers to pay a bill, or for changing account information to make sure we’re paying the right people. I need you to make this change or conduct this transaction. I’m the CEO and I’m off site in a meeting, so I can’t answer my phone. Get this done. And they’ll get the email and go: OK, CEO, important person, this is urgent. I’m either going to just do it, or maybe I’ll send an email back going, just confirming you made this request, and that’s my diligence; I’m going to send an email to the true CEO and they’ll validate it. Yes, I asked you to do that. And so they make the transaction, only to find out later that it wasn’t the CEO. It was a bad guy pretending to be the CEO, who had access to the CEO’s account. And there’s a ton of ways this happens without the real CEO even knowing. So they make the transaction, and after the fact they figure out they shouldn’t have. Well, some ways to prevent that: there’s now company policy that says we will not simply rely on emails for significant financial transactions. I have to make a phone call.
I have to walk down the hall and actually see somebody in person and get things done in a physically interactive way. All right, so now the bad guys are like: oh man, we’re not 80% successful anymore, we’re only 60% successful, and that’s millions of dollars hitting our business as bad guys. So how do we get better at that? Before I get to the use of AI, the other diligence you can put into spotting these types of crimes is that, a lot of times, the email you get that says I need you to do X, Y, and Z is not always formatted well. It sometimes has bad grammar, the wrong tense of verbs, or the wrong pronouns, whatever it is. There’s sometimes something that just makes you go: that’s not how Mike talks, or that doesn’t seem right. So sometimes there are those little things that trigger the thought that this might not be real. And it’s because most threat actors, especially in this space for this type of attack, are overseas. They’re foreign, English isn’t their first language, and maybe they’re using Google Translate to just type in what they want. And as we all know, translators aren’t always 100%. So sometimes there are those little red flags. Well, with the use of AI, I can now tell AI not just to write this phishing email for me, but to write it as if I’m a 50-year-old CEO who went to Harvard. I can tell it all of the background of the person I want and the tone to use to develop the message in English. Maybe you even put in that they grew up in Georgia, and so now there’s that Southern flavor to how they talk. Or: hey, here’s a collection of blogs and social media posts this person has written in the past; use that as a basis for how you’re going to write this email for me. So you can get very specific and very creative with AI and personalize it based on the individual you’re trying to impersonate.
All right, so now the email looks more legit and I believe the email. All right, so now, how do we get around the new company policy that says I have to call Mike, I have to get Mike on the phone and verbally confirm that this email is real? Well, now, with AI, and there are tons of publicly available free tools out there, AI can go and sample your voice from any video or post that’s on the Internet, whether it was a news piece or something on your social media. You give AI that snippet of your voice, and it can create an entire two-hour movie that sounds like you. So you tell it what to say. Or there’s even AI where I talk just like I’m talking to you now, and the AI translates that into the voice of the person I’m impersonating, and you can have real-time conversations that way. So now that control of call-and-verify? Can’t trust that now either. It’s got to be in person, face to face. And I don’t mean face to face like this, with a camera, because AI can impersonate me this way, too.
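The wording-based tells Mike mentions, manufactured urgency plus a built-in excuse for why you can’t call back, can be expressed as a naive screening heuristic. This is purely illustrative; the phrase lists are hypothetical, and no keyword filter substitutes for the out-of-band, in-person verification he recommends:

```python
import re

# Hypothetical red-flag phrases drawn from the BEC patterns described above.
URGENCY = [r"\burgent\b", r"\bget this done\b", r"\bimmediately\b", r"\bwire transfer\b"]
UNAVAILABILITY = [r"can't answer my phone", r"in a meeting", r"off ?site"]

def bec_red_flags(body):
    """Return which red-flag categories a message trips (a screening aid, not a verdict)."""
    flags = []
    text = body.lower()
    if any(re.search(p, text) for p in URGENCY):
        flags.append("urgency")
    if any(re.search(p, text) for p in UNAVAILABILITY):
        flags.append("claimed unavailability")
    return flags

msg = ("I need you to make this wire transfer now. I'm off site in a meeting, "
       "so I can't answer my phone. Get this done.")
print(bec_red_flags(msg))  # → ['urgency', 'claimed unavailability']
```

A message that trips a category should trigger exactly the "pause and escalate" response discussed later in the episode; the point of the sketch is that these cues are mechanical enough to screen for, even though AI-written phishing increasingly avoids them.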

(Host: Michelle Dawn Mooney)
You’re blowing my mind, Mike, and you’re scaring me, which I know, you know, that the whole point of having this conversation is not to scare people. It is to provide information as we talked about awareness. I use that word. I think we both may have used that. Being aware of what’s going on, but we do want to give people some hope because it really is. It’s just getting harder and harder. And as technology, you know, it’s ever changing as it continues to go the way it goes. It will be harder and harder to differentiate the good guys from the bad guys sometimes with the modes of communication that we have. So please give us some hope. Please, please give us some guidelines and what we need to be wary of, how we can work around that and how we can be smart as business people protecting our companies, protecting our individual name, because you’re not only talking about the company as a whole, but I mean, an employee. And I’ve known this to be true just with people that I have been connected with where somebody can easily lose their job if they do not follow the right protocol to fish out the scammers and they’re the one is going to be to blame because they didn’t follow something that could have put the entire company in jeopardy. So what’s the good news? What can we learn from this here and what can we do to be more proactive?

(Guest: Mike Saylor)
Well, there’s a lot. And, you know, these are semester and two and four year programs that you’ve got to go through and some of its therapy. But the idea is not to scare you. It’s to develop an awareness that creates pause. So when you see something, you’re not just like, oh, I’ve got to get it done, I’ve got to click this link, I’ve got to send that thing, I’ve got to pay this bill, it’s stop, take a minute and and rationally think, would my CEO really send me an email in the middle of a important business meeting? And then say, you know, this is critical, but I’m not available for a phone call. Does that make sense? Does it make sense that someone in your family is in trouble and the solution to that is buying them Apple gift cards? Does it make sense? That someone you’ve never talked to within your organization is now relying on you to do something that’s very important. You’re the new hire or you’ve been in this job for a long time either way. Does that make sense? Does it give you pause and it should? I’ve personally been breaking into buildings and educating companies for 20 years on different types of things they should take pause with. And I can tell you in 20 years, I’m 100 percent successful getting into banks, power plants, collection companies, manufacturing, whatever it is.

(Host: Michelle Dawn Mooney)
And we do want to stop and say that you’re doing this for the good of helping people. You’re not the bad guy, because people can hear that one line and say, wait, what is he doing breaking into banks?

(Guest: Mike Saylor)
So they engage us to test their security. There’s a great movie called Sneakers, with Robert Redford, back in the day, where that’s what they did: a team of people who understood security and human nature, and companies would hire them to test their security. And that’s the case here. So, no, I didn’t walk out with pockets full of cash or anything like that. But we also recorded video and audio of all of these exercises, and you can see it on employees’ faces when, say, I show up and pretend to be a fire extinguisher inspector, and they’re like: huh, that’s odd. But then they let us do what we pretend to do. And then in the after action, the debrief, we show them these videos and we point at the screen and go: look, right there, you had a pause, but you didn’t do anything about it. You didn’t go verify, you didn’t even ask for my identification or the phone number of my company, or even escalate that to your boss. And if I can tell you one thing in cyber, if I can give you one piece of advice, it’s deflect. If you’re asked to do something critical and you’re thinking, that just doesn’t seem right, and I don’t really want to be the one on the hook for making a half-million-dollar transaction to a bank in Mexico when the company we’ve dealt with for years is in Colorado: escalate that. Talk to your supervisor, talk to your manager, and say, it just doesn’t seem right, and get some other people involved. So the hope is that there are tons of free resources out there today that help educate people on the risks of cyber, the evolution of bad-guy attacks, and new trends. It’s all out there, those resources are free, and some of them have been around for a long time. There was a campaign aimed specifically at kids, but I think it’s relevant to anybody. It’s called Stop, Think, Connect. And the idea there was to stop.
Think about the email you got, or the attachment you’re about to open, or the phone call you’re getting, or the situation. Stop and think about it. Rationally apply context to it. Would that happen? Is this real? You know, my daughter sent me a note and said she’s in trouble. Well, why don’t I call her on her phone, call the number, and not rely on the email, which just doesn’t make sense. And then connect physically: talk to somebody, go see somebody, get away from your phone, get away from your desk, and walk down the hall. This goes back to a comment I made earlier: we’ve become a society so reliant on technology. We rely on email instead of phone calls. We rely on Zoom and teleconferencing instead of getting together in a conference room. And there’s value in that physical presence, and not just from a social perspective. Creativity is different. Communication is different. Interaction is different. Productivity is different. And there’s good and bad in both. But I think, absolutely, stop relying on technology as much, and don’t trust technology as much. Take pause before doing anything significant. And verify: call the number. When you get a text message or an email from your bank, don’t rely on the information in those communications. Look at the back of your debit card and call that number. That came from the bank; we know that came from the bank, because you use it every day in your bank transactions. So there’s a lot we can do; it’s really just got to pull people away from trusting technology as much as they do.

(Host: Michelle Dawn Mooney)
I know we’re kind of running out of time here because of such a great conversation, but I do quickly want to touch on when it comes to the positive side of using AI and technology. And, you know, it’s been amazing to hold on to so much information and to utilize that for good purposes within companies. But let’s, if you can, briefly talk about the ethics with, you know, taking a lot of personal information and how that’s housed and how that is used. Can you speak to that?

(Guest: Mike Saylor)
For sure. And I think I mentioned earlier, too, that we need to put boundaries around AI. We don’t want it to think just yet, because we don’t know how to control AI thought. And I’m sure there are instances where AI is thinking on its own. Google even had that: their AI came up with its own language, and they didn’t know what it was talking about, so they had to pull the plug. But when we put boundaries around AI, a lot of times those boundaries are just in how information is presented to us as consumers. They have yet, I think, to verifiably put boundaries around AI’s learning. What I mean by that is, AI is out there connected to as much stuff as it can be so that it can learn. Well, where are the boundaries, the ethical boundaries, around that? Do we really want AI to learn how to build the next nuclear weapon, or someone’s health records or criminal record? So as far as ethics go, I think there need to be boundaries around what we’re teaching AI and what it has access to. And there are actually guides out there on how to hack AI, get around those safeguards, and actually get information you shouldn’t. But then also, ethically, there are different deployments of AI. There’s public AI, which has access to public information, and then there’s private AI, developed and used only for internal purposes. So as a company, using AI could be very important and very valuable, but you’ve got to make sure that the data the AI is consuming aligns with the objectives of the company and any regulatory constraints you may have. In other words, if you’re a healthcare organization and you’re using AI to help with diagnosis and to help design the next cancer-fighting drug, and you’re feeding it information from case studies and patients and doctor notes, you don’t want that information out in the public. You want it confined to your research and your use.
Well, if we’re not doing that, then the questions that we’re asking a public AI get put into the body of knowledge of that AI that other people may now have access to. So if I’m saying, hey, Mike’s got these problems, these conditions, and we need some analysis of that. Someone across the world can Google what symptoms did Mike have? And if those constraints and the AI aren’t designed well and they know how to ask the question, then they could have access to my health information. And so just things to think about there. And it’s no different than really any other technology, putting thought into how we should understand the objective and what we’re trying to build, but also thinking and engineers have a problem with this. How do we keep this from being used for the wrong reason? Engineers just want to build something that works. It’s usually after the fact and it’s usually a huge investment to modify technology to address the risks of it being used for the wrong purposes.

(Host: Michelle Dawn Mooney)
So much information given, Mike, and so many questions I know that I still have. I’m sure people listening still have. So let’s kind of wrap up with this. Where can people go if they want to learn more about what you’re talking about today? Any resources or can they reach out to you? Where can they go if they want to find out more about everything about AI and then more importantly, how we can be more cautious and have that pause to be smarter about using it efficiently?

(Guest: Mike Saylor)
The Stop Think Connect campaign is still out there, and there are actually a lot of good learning resources, especially for kids. And I think that’s important, because a lot of kids use technology more than we do these days. My kids are teaching me how to use my phone better. So Stop Think Connect is the one that comes to mind, but there are a ton. CISA.gov, the U.S. Cybersecurity and Infrastructure Security Agency, has some good programs out there. I’m very reluctant to say “Google it,” but that is a term we’re all familiar with. Just start looking for resources; there are a ton out there that are free. But I’m also available to you if you want to ping me on LinkedIn. That’s really the only platform I use. I do have a Professor Saylor Twitter account, but I don’t interact much there; I’m really just using that to follow the Cybertruck evolution with Tesla. Or feel free to call in to Black Swan Cybersecurity. We’ve got a ton of people, not as many as we used to, because AI has helped us reduce and focus our workforce. I’m joking there. 855-BLK-SWAN is our phone number, BlackSwan-Cybersecurity.com, or, again, hit me up on LinkedIn. Happy to collaborate and continue the conversation.

(Host: Michelle Dawn Mooney)
Mike Saylor, CEO of Black Swan Cybersecurity and Professor of Cybersecurity at the University of Texas at San Antonio. Mike, as I knew it would be, this was a really engaging conversation, exciting to hear about the technology that exists, but, once again, a little scary. And that’s why we have this conversation: to make people aware. So I really appreciate your time in bringing these things to light, hopefully providing that awareness for people and giving them an avenue to learn more. Thank you for being here today.

(Guest: Mike Saylor)
Thank you, Michelle. And don’t be scared, be diligent.

(Host: Michelle Dawn Mooney)
Thank you for listening to the Hitchhiker’s Guide to IT podcast series, brought to you by Device42. Today we talked about AI: the good, the bad, and the ugly in business. As I said before, we covered a lot of information here, and Mike gave some great resources that you can follow up with. Be sure to subscribe to this podcast in your podcast player if you liked what you heard and would like to hear more engaging conversations like the one you heard today. I’m Michelle Dawn Mooney. Thanks again for joining us. We hope to connect with you on another podcast soon.