Does ChatGPT Know Your Secrets?

Threats & Benefits of AI in Your Environment

By Chris Clements, VP of Solutions Consulting, CISO Global, Inc.

ChatGPT has been taking the world by storm, but it brings with it issues around cybersecurity, data protection, and data privacy. IT and business leaders are looking to create policies that will protect their people and corporate assets, but few people understand the technology well enough to weigh the benefits, the concerns, and the implications for the cybersecurity industry in general. With a foundational understanding of AI research, algorithms, and development, I want to help us first get a bird's-eye view. We need to see the landscape and the context in order to ask the big questions, like, "What does it mean to me, to my children, to my business?"

The truth is that ChatGPT is only the latest incarnation of natural language processing, a subset of capabilities within the field of AI. Its history goes back a long way, and it has evolved quite a bit. Early on, in the 1950s, it was largely symbolic and confined mostly to Western countries. It was the height of the Cold War, so data scientists were trying to use natural language processing to quickly and accurately translate Russian into English. Then, in the 1990s, it became more statistical. Now it's neural network based, meaning it processes data in a way that is inspired by, or seeks to mimic, the human brain. That doesn't mean it isn't probabilistic, because it is, but it rests on neural networks rather than purely statistical models. ChatGPT is a particular type of neural network called a generative pre-trained transformer (GPT), a standard type of large language model.

This is not the first time a group of elite scientists figured out how to take what was only being used in labs and deliver it to everyday users. There's a book called The Man Who Solved the Market, about the history of quantitative trading and how Jim Simons applied it to the stock market in the late 1990s and 2000s, giving him the ability to predict trading. Essentially, he hired away a handful of IBM's natural language processing experts to help him take the next leap, providing enough information about irrational buying and selling to let analysts take advantage of the conditions it created. This opened the door to monetizing AI capabilities, but what are the implications of those possibilities? From my perspective, the milestone as it relates to this conversation is that AI is not a moral thing in and of itself. It's just an algorithm, and nobody anticipated this use case until he did it. He and his scientists figured out how to leverage technology that was already there. They didn't invent it.

Similarly, you have the Bitcoin and blockchain craze, which people have all kinds of opinions about. When it hit the market, however, cryptography was nothing new; it had been around for a long time. The Bitcoin explosion is really just the result of some folks who found a way to put existing cryptographic technology into a new use case: digital currency. What's fascinating to me as a technologist is not just understanding the basis of neural networks and data training, but asking: What's the application? What's the use case? What are the societal and business implications? I don't think we yet fully understand the extent of possible use cases. We have to think like all the people who could potentially use these technologies, understand them, get ahead of them, and then make policy decisions.

When people say we don't really understand the implications of AI, and of ChatGPT specifically, they really mean something else. Data experts get that it's data in, data out. What they mean is that we haven't yet defined what the data out means in this case. We have to figure out how people are going to integrate and use it before we can answer that question. That's the real challenge facing businesses today.

As a cybersecurity practitioner, my peer and CISO Global's Chief Technology Officer, Jerald Dawkins, Ph.D., asked on our recent live show, "How will the good guys use it to further cybersecurity practices and technology to thwart bad actors?" To answer that question, though, we need teams of ethical hackers like ours at CISO Global's Hades Labs, who can experiment with methodology, think like attackers, and find ways to defend against them. Up-and-coming thought leaders will, no doubt, be experimenting with this very thing over the summer at the two biggest global hacker conferences, Black Hat and DEF CON.

In short, we have to figure out how the "bad guys" are going to use it to make it easier to steal our assets, our data, and our money, and that's the part we have to understand and protect against with corporate policies and security controls. ChatGPT is an eager learner, and it can learn just about anything you want it to. One way to uncover that "anything" is simply to start using it yourself. Play around with it and learn from the results, but do it carefully. If you put sensitive information into the tool, it retains that information, meaning it could be incorporated into the tool's training data later and surface for other users. If you don't want to give away information, don't put it into the tool. Now, think about the kinds of information in your corporate environment that could be accidentally or maliciously put into ChatGPT as people try to find faster, better ways to accomplish their work. If you maintain a solid, up-to-date data inventory, your security and compliance teams will have no problem implementing data loss prevention (DLP) controls and creating policies to protect what is most sensitive.
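To make the DLP idea concrete, here is a minimal sketch of an outbound-prompt filter that screens text for sensitive patterns before it leaves the organization. The pattern names and regular expressions are illustrative assumptions, not rules from any real DLP product; a production deployment would use a dedicated DLP platform with far more robust detection.

```python
import re

# Illustrative patterns only; real DLP rules would be far more thorough.
SENSITIVE_PATTERNS = {
    # U.S. Social Security number in the common 123-45-6789 format
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Hypothetical internal classification markers
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b",
                                  re.IGNORECASE),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def allow_prompt(text: str) -> bool:
    """Allow the prompt only if no sensitive pattern matches."""
    return not scan_prompt(text)
```

For example, `allow_prompt("Summarize this meeting agenda")` returns `True`, while `allow_prompt("Customer SSN is 123-45-6789")` returns `False` because the SSN pattern matches. The same check could sit in a browser extension, proxy, or API gateway, which is where enterprise DLP controls typically intercept traffic.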

One thing that many thought leaders are emphasizing right now is not to become so scared of new technology that you don’t support positive use cases in the workplace. How could it transform your business if people could create work products faster and more efficiently? Do you really want to limit profitability for fear’s sake? Or, is it better to begin your own research to understand what is coming your way and challenge your teams to get ahead of it?

If you would like outside support from cybersecurity experts who are doing that research and maintain the highest degree of expertise in managing Cybersecurity Strategy and Risk, our teams are here for you. Simply request a consultation, and we will get you in touch with someone who can help you create policies to support your business, security, and compliance goals.