iTSTL Blog

iTSTL has been serving the Missouri area since 2015, providing IT Support such as technical helpdesk support, computer support and consulting to small and medium-sized businesses.

In the Wrong Hands, AI is Dangerous

Artificial intelligence, or AI, is a technology that many industries have benefited from greatly, especially in the domains of cybersecurity and automation. Unfortunately, for every beneficial use of a technology, hackers seem to find two malicious ones. AI has dramatically changed the landscape of cybersecurity and, more concerningly, cybercrime. Let’s take a look at why these threats are so concerning.

Deepfakes

The word “deepfake” is a blend of “deep learning” and “fake.” A deepfake uses fabricated imagery or audio to create media that appears authentic on the surface but is entirely synthetic underneath. Deepfakes can be extremely dangerous and harmful in the wrong circumstances, such as a news story built around a fake video or image. AI-generated deepfakes have even been used in extortion schemes and misinformation campaigns.

AI-powered deepfakes can generate realistic videos, particularly when there is a lot of source material to draw upon, as with celebrities or other high-profile individuals with a large web presence. These videos can be convincing enough to show a celebrity or even a government official saying or doing just about anything, spreading misinformation and sowing distrust.

AI-Supported Hacking Attacks

AI also helps cybercriminals with everyday hacking attacks, like cracking passwords or finding their way into a system. Hackers can use machine learning to analyze and parse leaked password sets, then use what the model learns to predict likely passwords with shocking accuracy. These systems can even account for how people tend to adjust their passwords over time.

There are also cases where hackers use machine learning to inform and automate their hacking processes. These systems can scan an infrastructure for weak points and penetrate it through its weakest links, then autonomously refine their approach over time, becoming more effective with each attempt.

Human Impersonation and Social Engineering

AI can also impersonate human beings by imitating their online behaviors. Automated bots can be used to create fake accounts capable of most of the everyday activities a real user would carry out (liking posts on Instagram, sharing status updates, and so on). Hackers can even use these bots to make money off of their activity.

Suffice it to say that AI in the hands of threat actors represents a dangerous future. These threats need to be monitored both now and going forward.

To ensure that hackers don’t get the better of your organization, iTSTL can help. To learn more, reach out to us at (314)828-1234.



