In today's digital age, OpenAI's ChatGPT has revolutionized content creation: work that used to take hours is now done in minutes. From answering questions to providing creative solutions, this AI tool has become an invaluable resource for millions of people. But every convenience brings its own dilemmas. If recent reports are to be believed, cybercriminals are exploiting ChatGPT's advanced features, especially its GPT-4 real-time voice API, to commit financial scams and other cybercrime. Here are the full details.
Security flaws in ChatGPT:
There have been serious concerns about the lack of adequate safeguards in tools like ChatGPT. Cybercriminals are reportedly using such AI tools to carry out scams including fraudulent bank transfers, cryptocurrency theft, and identity theft.
One of the most worrying aspects of this abuse is how convincingly ChatGPT-based AI agents can deceive victims. The AI can impersonate real individuals and navigate legitimate websites (such as Bank of America's), increasing the likelihood of a successful scam.
How easy is it to steal personal information?
So how easily can cybercriminals steal personal credentials using ChatGPT's capabilities? In the reported tests, scams involving bank transfers were less successful because of the difficulty of navigating banking websites, while stealing credentials from Gmail and Instagram proved more effective. The cost of executing these scams was remarkably low, averaging $0.75 (roughly ₹63) per attempt for credential theft and $2.51 (roughly ₹211) per attempt for a bank transfer scam.
This combination of low cost and high success rates makes AI-powered cybercrime a potentially lucrative business for malicious actors.