ChatGPT could create polymorphic malware, cybersecurity researchers warn
Updated: Jan 25
In a new report, the CyberArk Labs team discusses the various loopholes that can be used to cause harm through exploiting the popular program
It’s hard to believe that ChatGPT has only been with us a couple of months. Everyone seems to be using it for a variety of tasks, from negotiating deals to creating artwork from pop music lyrics. But this thrilling new technology also holds great potential for danger. Israeli cybersecurity company CyberArk has issued a warning about the program’s advanced ability to write sophisticated malware.
“ChatGPT could easily be used to create polymorphic malware. This malware’s advanced capabilities can easily evade security products and make mitigation cumbersome with very little effort or investment by the adversary,” say the company’s researchers, Eran Shimony and Omer Tsarfati.
The researchers discuss how ChatGPT can be harnessed for good – and for not so good: how its content filters can be bypassed, how the model can be prompted to mutate code, and how threat actors can exploit the system to avoid detection.
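To see why mutation makes polymorphic malware hard to detect, consider a minimal, benign sketch (not from the CyberArk report): two hypothetical code variants that behave identically but have different source text, so a signature scanner matching file hashes would treat them as unrelated.

```python
import hashlib

# Hypothetical stand-ins for mutated variants an LLM could generate:
# same behavior, different source form.
variant_a = "def add(a, b):\n    return a + b\n"
variant_b = "def add(x, y):\n    result = x + y\n    return result\n"

# Both variants behave identically...
ns_a, ns_b = {}, {}
exec(variant_a, ns_a)
exec(variant_b, ns_b)
assert ns_a["add"](2, 3) == ns_b["add"](2, 3) == 5

# ...but their hashes -- what a signature-based scanner matches -- differ.
hash_a = hashlib.sha256(variant_a.encode()).hexdigest()
hash_b = hashlib.sha256(variant_b.encode()).hexdigest()
print(hash_a != hash_b)
```

This is the core of the researchers’ concern: if each copy of the payload is regenerated with trivial variations, hash- or pattern-based defenses see a stream of previously unseen files.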
Read the full post here.
CyberArk will participate at Cybertech Global Tel Aviv 2023. For additional information and registration, please visit the official conference website at https://www.cybertechisrael.com/