Since the launch of sophisticated AI-driven tools such as ChatGPT and Google’s Bard, reports have emerged that indicate these tools could help hackers steal passwords and phish sensitive information even more effectively than before.
In order to learn how much of a threat this poses to the average American, in April, PasswordManager.com surveyed 1,000 cybersecurity professionals.
Key findings:
When survey respondents were asked to rate their level of concern about people using AI tools to hack passwords, 56% said they were ‘somewhat’ (26%) or ‘very’ (30%) concerned about this possibility.
Similarly, 58% of respondents said they were ‘somewhat’ (26%) or ‘very’ (32%) concerned about people using AI-powered tools to create phishing attacks.
“ChatGPT is a tool with many excellent capabilities, and there is no discussion about that. But many people don’t know it is also a powerful tool that hackers or scammers can use,” comments Marcin Gwizdala, Chief Technology Officer at Tidio. “One of the threats that appeared by using AI, in general, is phishing scams. ChatGPT can be easily mistaken for an actual human being because it can converse seamlessly with users without spelling, grammatical, and verb tense mistakes. That’s precisely what makes it an excellent tool for phishing scams,” he explains.
“Of course, as we know, those attacks require immediate attention and actionable solutions,” Gwizdala continues. “The best way to do that is to equip your IT team with tools that can determine what’s ChatGPT-generated vs. what’s human-generated, explicitly geared toward incoming ‘cold’ emails.”
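Gwizdala's suggestion of screening incoming "cold" emails can be illustrated with a minimal heuristic sketch. To be clear, this is not a real AI-text detector — reliable detection requires a trained classifier — and the function name, word lists, and scoring threshold below are all illustrative assumptions, not part of any product mentioned in this article.

```python
# Hypothetical first-pass screen for incoming "cold" emails.
# Scores a message on simple phishing signals; a production tool
# would combine this with an ML classifier for AI-generated text.

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "expire"}
CREDENTIAL_WORDS = {"password", "login", "ssn", "account number"}

def screen_cold_email(sender_domain: str, body: str,
                      trusted_domains: set) -> dict:
    """Score an email 0-3; flag it for review at 2 or higher."""
    text = body.lower()
    score = 0
    reasons = []
    if sender_domain not in trusted_domains:
        score += 1
        reasons.append("unknown sender domain")
    if any(word in text for word in URGENCY_WORDS):
        score += 1
        reasons.append("urgency language")
    if any(word in text for word in CREDENTIAL_WORDS):
        score += 1
        reasons.append("requests credentials")
    return {"score": score, "flag": score >= 2, "reasons": reasons}
```

A message from an unknown domain that says "Urgent: verify your password immediately" would trip all three checks and be flagged, while routine mail from a trusted domain would pass untouched.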
Survey respondents were then asked whether ChatGPT and similar tools have made it easier overall for hackers to steal passwords and other sensitive information.
Fifty-two percent of the cybersecurity professionals surveyed said AI tools have made it ‘somewhat’ (27%) or ‘much easier’ (25%) for people to steal sensitive information, while 51% said they have made it ‘somewhat’ (28%) or ‘much easier’ (23%) for people to hack passwords.
“The threat of AI as a tool for cybercriminals is dire,” says Steven J.J. Weisman, Esq. Weisman is a lawyer, author, professor specializing in white collar crime, and one of the country’s leading authorities on scams, identity theft, and cybersecurity.
“Phishing and spear phishing emails are a large part of how cybercrimes, data breaches and scams begin and now these phishing and spear phishing emails and text messages will be able to be made more believable,” Weisman explains. “In particular, many scams originate in foreign countries where English is not the primary language and this is often reflected in the poor grammar and spelling found in many phishing and spear phishing emails and text messages coming from those countries.”
“Now, however,” he continues, “through the use of AI, those phishing and spear phishing emails and text messages will appear more legitimate. In addition, the ability to use AI to clone voices with only a small sample of the voice of the person to be impersonated is another danger. Calls may appear to come from trusted sources within a company, but are made by a scam artist.”
When asked how much of a threat hackers using these tools to steal passwords pose, 36% said it is a ‘medium-level’ (22%) or ‘high-level’ (14%) threat to the average American individual, while a plurality (36%) said it poses a ‘medium-’ (20%) or ‘high-level’ (16%) threat to the average American company.
Similarly, 39% of respondents said AI tools used to create phishing scams pose a ‘medium-’ (21%) or ‘high-level’ (18%) threat to individuals, and 36% said they pose a ‘medium-’ (19%) or ‘high-level’ (18%) threat to companies.
“Most hacks are caused by human error. People need to educate themselves on best practices for keeping their information safe online,” explains Zo DiGiovanni, president of Remi IT Solutions. “They should also employ security tools like password managers and next generation antivirus software. Everyone can be proactive and invest in an identity theft solution that will help safeguard their identity and warn them should a breach occur,” he says.
“Businesses need to create a security-minded culture where every employee plays a role in keeping the business safe. Businesses should develop a cybersecurity plan and conduct regular training and awareness programs,” DiGiovanni continues. “It’s up to businesses to protect themselves by conducting regular vulnerability assessments and employing the latest defense technologies available to their industry.”
“It will not be uncommon to see businesses using AI against these dark agents,” he adds. “Investing in AI-powered defense solutions will help ensure compliance and can detect and respond to threats in real-time. AI can be used to automate routine cybersecurity tasks, such as network monitoring and vulnerability assessments, freeing up employees to focus on more complex tasks. All businesses need to get serious about cybersecurity thanks to AI.”
When we asked survey respondents to give examples of AI-generated scams they had seen circulating, responses included:
PasswordManager.com’s Subject Matter Expert, Daniel Farber Huang, offers the following tips for individuals and businesses to keep themselves safe from AI-generated scams.
“With the speed at which AI applications are being developed and released, an overabundance of caution is prudent for those concerned with digital security and potential exploitation by bad actors. Here are 5 healthy habits to keep in mind,” he writes.
This survey was commissioned by PasswordManager.com and conducted online by the survey platform Pollfish on April 27, 2023. In total, 1,000 participants in the U.S. completed the full survey. All participants had to meet demographic criteria ensuring they were age 25 or older, currently self-employed or employed for wages, had a household income of $50,000/year or more, and have a career in security, software, information, or scientific/technical services.
Additionally, respondents were screened to include only those who specifically identified their job role as cybersecurity, worked full-time in this job role, and were somewhat or very familiar with AI tools such as ChatGPT and Bard.
The survey used a convenience sampling method; to mitigate the bias this can introduce, Pollfish employs Random Device Engagement (RDE) to ensure random, organic surveying. Learn more about Pollfish’s survey methodology or contact [email protected] for more information.