The world’s most powerful people are still unsure how to view AI as it rapidly emerges as one of the most important new technologies in human history.

Criminals, of course, are not.

A new breed of cybercriminal has been among the first to profit in the six months since OpenAI released ChatGPT to the public and sparked an artificial intelligence arms race that could rewrite history.

These thieves use cutting-edge tools and methods to steal hundreds of thousands of dollars from regular people like you and me.

According to Haywood Talcove, CEO of LexisNexis Risk Solutions’ Government Group, a global information and analytics company based in Atlanta, “I am seeing a highly concerning rise in criminals using advanced technology, such as AI-generated deepfakes and cloned voices, to perpetrate very cunning schemes that are almost impossible to detect.”

AI-generated visuals already fool humans, and scientists predict they will only get more difficult to spot.

If your child or grandchild calls you in the middle of the night and says, “Help, I was in a car accident, the police found drugs in the car, and I need money to post bail (or for a lawyer’s retainer),” it’s a scam, according to Talcove.

Earlier this year, according to Canadian law enforcement officials, one individual conned at least eight seniors out of $200,000 in just three days using AI-generated voices, likely cloned from audio found on social media profiles.

Similar frauds targeting parents and grandparents are emerging in practically every state in America. This month, several school districts in Oregon alerted parents to a rash of phoney kidnapping calls.

The calls come from an unknown caller ID, although even familiar cell phone numbers can now be easily spoofed. A voice that sounds exactly like your loved one’s says they are in trouble. Then the call is abruptly cut off, a scream is heard, and a different voice comes on the line demanding a ransom, or else.

Similar frauds targeting parents and grandparents are being reported around the country, according to the FBI, FTC, and even the NIH. Recent cases have surfaced in Arizona, Illinois, New York, New Jersey, California, Washington, Florida, Texas, Ohio, and Virginia.

According to a Chicago-based FBI special agent, the average fake-kidnapping scam costs an American family $11,000.

What to do if you get the call
Talcove advises creating a family password known only to you and your inner circle. Make it difficult to find online by avoiding the names of pets, favourite bands, and the like. Even better, agree on and memorise two or three words. As soon as you receive a call from someone who sounds like a loved one in trouble, ask for the code word or phrase.

If the caller claims to be from law enforcement, tell them you have a bad connection and will call back. Ask which organisation they are calling from (campus security, the local jail, the FBI), then hang up, even though con artists will say almost anything to keep you on the line. If you are unable to reach your loved one, call local law enforcement or look up the facility’s phone number and let them know what is happening.

Keep in mind that these crooks use fear, panic, and other tried-and-true techniques to convince you to divulge personal information or pay money. The caller usually asks you to wire money, send it instantly via Zelle or Venmo, pay in bitcoin, or buy gift cards and read off the card numbers and PINs. These are all major warning signs.

Be more cautious than ever about the information you release into the public domain.

An FTC warning also advises calling the person who supposedly contacted you to confirm the claim, using “a phone number you know is theirs.” If you are unable to reach your loved one directly, try to contact them through a friend or another family member, the agency advises on its website.

Watching it all happen
Talcove cautions that all it takes for a criminal to “clone” your voice is three seconds of audio. “Use social media with extreme caution. Think about setting your accounts to private. Keep the names of your family and even your pets private. A thief using deepfake technology could use all of this information to trick you, or the people you care about, into falling for a scam.”

Talcove shared a dozen “how-to” videos that he claims to have obtained from the dark web and that demonstrate these scams in operation. He explained that criminals frequently offer other con artists instructions on how to create these sophisticated fakes.

“I keep an eye out for new tactics and criminal networks. Infiltrating criminal organisations really comes down to monitoring social media and the dark web,” he continued. “It’s becoming ominous. For instance, you can layer filters on top of Zoom to alter a person’s voice and appearance. A thief can duplicate your voice and tone from just a few seconds of audio taken from your [social media feeds].”

Fooling my relatives with a clone of my husband’s voice

Skipping all the organised-crime material, I simply Googled “AI voice clone.” I won’t reveal the specific tool I used, but it took me only a little over 10 minutes to upload 30 seconds of my husband’s voice to a free AI audio generator online. After saving the results to my laptop, I texted our relatives a couple of amusing things I had typed in for “him” to say. The hardest part (though it’s also simple) was converting the original clip from a .mov to a .wav file. It fooled our kids, my parents, and his mother.
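For the curious, that conversion step can even be scripted in a few lines. Below is a minimal sketch in Python that shells out to the free ffmpeg command-line tool (which must be installed separately); the file names are hypothetical examples, not the actual clips I used.

# Minimal sketch: extract the audio from a .mov video clip as a .wav file
# by calling the ffmpeg command-line tool (assumed to be installed).
import subprocess

def mov_to_wav(src, dst):
    subprocess.run(
        ["ffmpeg",
         "-i", src,               # input video clip
         "-vn",                   # drop the video stream, keep audio only
         "-acodec", "pcm_s16le",  # standard 16-bit PCM WAV encoding
         "-ar", "44100",          # 44.1 kHz sample rate
         dst],
        check=True,               # raise an error if ffmpeg fails
    )

mov_to_wav("voice_clip.mov", "voice_clip.wav")  # hypothetical file names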

We are all susceptible, but our parents and grandparents are the most vulnerable, according to Talcove. “Ninety-nine out of 100 people were unable to recognise a deepfake video or voice clone. This technology is even harder for our parents and grandparents to detect. They would never think to question whether the voice on the phone, which sounds exactly like their kidnapped child pleading for help, is entirely manufactured.”
