FraudGPT makes life easier for cybercriminals

Concerns about generative AI have materialized into a new threat: "phishing as a service". A model named FraudGPT, recently discovered by Netenrich security researcher Rakesh Krishnan, is circulating in darknet forums and Telegram channels and is available for a subscription fee.

Key Points:

  • FraudGPT can generate malicious code to exploit vulnerabilities in computer systems, applications, and websites. It is also advertised as capable of producing malware that evades traditional security measures.
  • Another capability of FraudGPT is identifying non-Verified by Visa (non-VBV) BINs, allowing attackers to carry out unauthorized card transactions without triggering additional security checks.
  • FraudGPT can automatically generate convincing phishing pages, mimicking legitimate websites, thereby increasing the success rate of phishing attacks.
  • FraudGPT can also generate learning material on coding and hacking techniques, giving cybercriminals resources to improve their skills.
  • The introduction of generative AI models has drastically changed the threat landscape. These new tools are available to all and could serve as a launchpad for inexperienced attackers.

For more information, visit the full article on the Analytics India Magazine website.
