Cybersecurity ETFs Set to Gain from AI's Usage in Scams
Artificial Intelligence is a double-edged sword for cybersecurity. For example, a key talking point at the RSA Conference 2023, as cited on techtarget.com, was the multifaceted impact of OpenAI's GPT-4 on cybersecurity. The conference's speakers explored the potential duality of ChatGPT's use in cybersecurity, forecasting a surge in code reuse and attacks.
While some industry experts noted AI's positive applications in enhancing cybersecurity measures, it is undeniable that the language model has introduced novel cybersecurity threats, including disinformation generation and the design of social engineering attacks.
Weighing the Potential of ChatGPT in Cybersecurity
Notably, generative AIs like ChatGPT introduce a unique cybersecurity threat, especially through AI-powered phishing scams. A report from Harvard Business Review suggests that IT teams need tools to detect AI-created emails, training for employees on cybersecurity prevention skills, and government oversight of AI usage in cybersecurity.
There is also potential for hackers to exploit ChatGPT to generate hacking code, another novel cybersecurity threat. As such, the need for cybersecurity professionals to acquire AI skills and stay updated with the technology is more vital than ever.
Scammers are using AI to engage in deceptive practices, posing as prospective partners, friends, or even government agents, making fraud detection more challenging. Experts warn of an escalating issue as fraudulent schemes become nearly indistinguishable from genuine interactions, and they urge individuals to stay vigilant.
Navigating Cybercrime 3.0
Haywood Talcove, CEO of LexisNexis Risk Solutions, labels this new age of fraud as "crime 3.0," in which advanced technologies have the potential to bypass conventional security measures, as quoted on Yahoo Finance. In fact, recent data from the Federal Trade Commission suggests that fraud increased by 19% in 2022, totaling roughly $8.8 billion in losses, according to the same source.
However, Kathy Stokes, AARP's director of Fraud Watch Network, points out that the real scope of fraud is likely much larger, as many online scams go unreported. She emphasizes that while generative AI has increased the sophistication of scams, it's not just older adults who are targeted; younger individuals are also falling prey to fraud.
What the Future Holds
Software developers are urged to create generative AI specifically for human-operated Security Operations Centers (SOCs), and there are calls for stricter regulations regarding AI usage. The Biden administration's recently released "Blueprint for an AI Bill of Rights" becomes even more crucial in the wake of ChatGPT's launch.
ETFs to Gain
Whether positively or negatively, the emergence of advanced generative AI could catalyze a boost in cybersecurity stocks. Investors are likely to keep an eye on ETFs such as the ETFMG Prime Cyber Security Fund (HACK - Free Report), First Trust NASDAQ Cybersecurity ETF (CIBR - Free Report), iShares Cybersecurity & Tech ETF (IHAK - Free Report), and WisdomTree Cybersecurity Fund (WCBR - Free Report).
(Disclaimer: This article has been written with the assistance of Generative AI. However, the author has reviewed, revised, supplemented, and rewritten parts of this content to ensure its originality and the precision of the incorporated information.)