Heard on the Street

Published: June 29, 2023

Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

AI Regulation. Commentary by Frederik Mennes, Director of Product Management & Business Strategy at OneSpan

“The regulation of generative AI is necessary to prevent potential harm stemming from malicious applications, such as hate speech, targeted harassment, and disinformation. Although these challenges are not new, generative AI has significantly facilitated and accelerated their execution. 

Companies should actively oversee the input data used for training generative AI models. Human reviewers, for instance, can eliminate images containing graphic violence. Tech companies should also offer generative AI as an online service, such as an API, to allow for the incorporation of safeguards, such as verifying input data prior to feeding it into the engine or reviewing the output before presenting it to users. 
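
As a rough illustration of the API-level safeguards described above, here is a minimal Python sketch. The blocked-terms list, the violates_policy check, and the model_call hook are illustrative assumptions, not any vendor’s actual moderation API:

```python
# Minimal sketch: wrap a generative model call with input and output checks.
# The policy list and checks are toy placeholders; a real service would use
# trained moderation classifiers, not a term list.

BLOCKED_TERMS = {"example-slur", "example-threat"}  # hypothetical policy list

def violates_policy(text: str) -> bool:
    """Toy content check shared by the input and output safeguards."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def safe_generate(prompt: str, model_call) -> str:
    # Safeguard 1: verify input data prior to feeding it into the engine.
    if violates_policy(prompt):
        raise ValueError("prompt rejected by input policy check")
    output = model_call(prompt)
    # Safeguard 2: review the output before presenting it to the user.
    if violates_policy(output):
        return "[output withheld: failed content review]"
    return output
```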

Additionally, companies must consistently monitor and control user behavior. One way to do this is by establishing limitations on user conduct through clear Terms of Service. For instance, OpenAI explicitly states that its tools should not be employed to generate specific categories of images and text. Furthermore, generative AI companies should employ algorithmic tools that identify potential malicious or prohibited usage. Repeat offenders can then be suspended accordingly.
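
The “identify and suspend repeat offenders” workflow can likewise be sketched in a few lines. The strike threshold and per-user counter below are assumptions for illustration, not how any specific provider enforces its Terms of Service:

```python
# Toy sketch: count policy violations per user and suspend repeat offenders.
from collections import defaultdict

SUSPENSION_THRESHOLD = 3  # assumed number of strikes before suspension

class AbuseMonitor:
    def __init__(self):
        self.strikes = defaultdict(int)
        self.suspended = set()

    def record_violation(self, user_id: str) -> None:
        """Called when an algorithmic detector flags prohibited usage."""
        self.strikes[user_id] += 1
        if self.strikes[user_id] >= SUSPENSION_THRESHOLD:
            self.suspended.add(user_id)

    def is_allowed(self, user_id: str) -> bool:
        return user_id not in self.suspended
```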

While these steps can help manage risks, it is crucial to acknowledge that regulation and technical controls have inherent limitations. Motivated malicious actors are likely to seek ways to circumvent these measures, so upholding the integrity and safety of generative AI will be a constant effort in 2023 and beyond.”

What’s next for AI regulation? Commentary by Dr. Srinivas Mukkamala, Chief Product Officer, Ivanti 

“Properly designed federal regulation acts as an enabler—not an inhibitor—to unlocking the magnificent power of AI to benefit us all. However, the power of the technology is not without its potential drawbacks. Generative AI like ChatGPT is coming to the public square, and it is gaining significant momentum, which introduces the possibility of misinformation being created and spread at machine speed. Furthermore, the wider the use of AI spreads, the more prominent the risk of perpetuating data, human, and algorithmic bias. We need to evangelize the importance of responsible AI to practitioners and work collaboratively with policymakers to construct proper guardrails for the industry.”

Navigate your data landscape with data mapping. Commentary by Rachael Ormiston, Head of Privacy at Osano

“From proprietary company and customer information to financial numbers, most organizations are drowning in data. To successfully manage and secure all that data, privacy professionals are turning to data mapping. This process of connecting one source’s data field to another source’s data field allows you to understand and contextualize your entire data landscape by identifying what data you have, why you have it, where it’s coming from and who has access to it.

A comprehensive overview of your data landscape facilitates data management and analysis, allowing you to glean insights and help with decision-making. Data mapping also makes it easier to ensure you’re complying with data privacy regulations, laws and security requirements by giving you better visibility to assess the risks associated with the data you have. As privacy professionals continue improving the consistency of how they operationalize their data privacy programs, data mapping will be invaluable for managing data across its entire lifecycle.”
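
To make the what/why/where/who framing concrete, here is one way a single data-map entry might be modeled in Python. The field names and schema are illustrative, not any particular data-mapping product’s format:

```python
# Illustrative sketch: one record in a data map, capturing what data exists,
# why it is held, where it comes from, where it maps to, and who can access it.
from dataclasses import dataclass, field

@dataclass
class DataMapEntry:
    data_element: str                 # what data you have
    purpose: str                      # why you have it
    source_field: str                 # where it's coming from
    destination_field: str            # the field it maps to downstream
    accessors: list[str] = field(default_factory=list)  # who has access

inventory = [
    DataMapEntry(
        data_element="customer email",
        purpose="order confirmations",
        source_field="checkout_form.email",
        destination_field="crm.contacts.email",
        accessors=["support", "marketing"],
    ),
]

# Example compliance query: list everything the marketing team can access.
marketing_view = [e.data_element for e in inventory if "marketing" in e.accessors]
```

Even a lightweight inventory like this makes questions such as “who can see customer emails?” answerable in a single query.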

ChatGPT owner nears record 1bn unique users per month. Commentary by Stefan Katanic, CEO of Veza Digital 

“The ChatGPT phenomenon spread like wildfire at the end of 2022, and we expect it to soon break all records as the fastest website ever to reach 1 billion monthly active users. This is indicative of a clear public interest in AI-powered solutions, which legislators are rushing to regulate before the technology spirals into uncharted territory, such as artwork copyright and ethical challenges. Debates about AI are divisive, but one thing we can probably all agree on is that AI is no longer the future – it is the present.

We believe that AI will play a big role in over 50% of businesses in the next five years; as such, we are looking to embrace these technological advancements in our daily operations as well as in the strategic geo-positioning of our company.”

Addressing the Security Implications and Concerns of ChatGPT. Commentary by Jerald Dawkins, Ph.D., Chief Technology Officer, CISO Global

“It’s true, ChatGPT comes with risks – just like all new technology does. Do we embrace the fear and shut down workplace innovation? If so, we also lose the ability to help our teams work better, faster. If we want to enable people to leverage technology to work smarter, what we need to do is understand how these tools work, think through their use cases, define risks, and put some protections in place that allow them to be used wisely. Start with the fact that ChatGPT is designed to process vast amounts of data quickly, and that it uses all the data you give it as part of its cache. Now, think about problems people might want to solve with quick, accurate search functionality. DevOps teams might want suggestions for their code (see Samsung). Your IT team might want help creating a software rollout plan that doesn’t miss steps. You get the idea. Ask yourself – is the information my teams would want to feed into this tool something that can be shared publicly? Is the information coming out trustworthy? How can I ensure we allow for the cases where the answer is “yes,” and how do we mitigate the ones where the answer is “no”?

Now let’s think about the risks of using an openly available large language model tool. A cyber attacker halfway around the world could use chat AI to write better phishing emails. Executives give public speeches and publish articles online regularly, leaving transcripts and records of their typical wording style, tone, and more. I asked ChatGPT to write me an email request for an invoice in the tone of JFK, Sr., and the results were shockingly accurate. So, without any social engineering or language lessons, a bad actor could create a pretty convincing email that sounds like your executive, requesting that teams take an action or click a malicious link. In another use case, disinformation could be fed into the tool to train it on biased or malicious data, increasing the risk of untrustworthy outputs. My recommendation for companies evaluating the tool is not to make blanket policies that disallow ChatGPT, but to proactively review and understand both the tool and your users, build security and privacy controls around sensitive corporate data, and make sure people know how to validate the answers they are getting. Then you have the benefits of AI in the workplace, but you’ve mitigated the risk.”
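
One concrete way to act on the advice to build controls around sensitive corporate data is to screen prompts before they leave the company. The patterns and decision rule in this sketch are illustrative assumptions, not a vetted data-loss-prevention ruleset:

```python
# Rough sketch: answer "can this prompt be shared publicly?" before it is
# sent to an external AI tool. Patterns below are examples, not exhaustive.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # SSN-like numbers
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),    # private key blocks
    re.compile(r"\bapi[_-]?key\b", re.IGNORECASE),        # credential mentions
]

def may_share_externally(prompt: str) -> bool:
    """Return True only if no sensitive pattern appears in the prompt."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

if __name__ == "__main__":
    print(may_share_externally("Help me draft a software rollout plan"))  # True
    print(may_share_externally("Debug this: api_key = 'abc123'"))         # False
```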
