
ChatGPT: AI for good or AI for bad?

Science, technology and all their components have strongly benefited the human race over generations. By definition, they are the search for new knowledge; how can that be bad? Yet everything has the potential to be good or bad, depending on the people behind it.

Our relentless quest to mimic and decipher the human mind has today ushered in an era of artificial intelligence. ChatGPT, a text-based artificial intelligence (AI) bot, has become the latest tool making headlines for its viral use of advanced AI. From accurately fixing a coding bug, generating cooking recipes and creating 3D animations to composing entire songs, ChatGPT has showcased the mind-blowing power of AI to unlock a world of incredible new abilities.

On the flip side, AI has always been considered a double-edged sword. For years, there has been worldwide speculation about artificial intelligence and its looming takeover of the world. Today, users have AI-powered security tools and products that tackle large volumes of cybersecurity incidents with minimal human interference. However, the same technology can also allow amateur hackers to develop intelligent malware programs and execute stealth attacks.

Is there a problem with the new chatbot?

Since the launch of ChatGPT at the end of November, tech experts and commentators worldwide have been concerned about the impact AI-generated content tools will have, particularly for cybersecurity. Can AI software democratise cybercrime?

A team representing Singapore's Government Technology Agency demonstrated at the Black Hat and Defcon security conferences in Las Vegas that AI could craft better phishing emails, and devilishly effective spear-phishing messages, than humans could.

Using OpenAI's GPT-3 platform in combination with other AI-as-a-service products focused on personality analysis, the researchers generated phishing emails customised to their colleagues' backgrounds and characters. Eventually, they developed a pipeline that groomed and refined the emails before they hit their targets. To their surprise, the platform also automatically supplied specifics, such as mentioning a Singaporean law when instructed to generate content for people in Singapore.

The makers of ChatGPT have clearly stated that the AI-driven tool has a built-in ability to challenge incorrect premises and reject inappropriate requests. Yet while the system apparently has guardrails designed to prevent criminal activity, with a few tweaks it generated a near-flawless phishing email that sounded 'weirdly human'.


How to tackle the challenges?

The average ransom demand in cases worked by Unit 42 incident responders rose 144% in 2021 to $2.2 million, while the average payment climbed 78% to $541,010, according to the 2022 Unit 42 Ransomware Threat Report. Thailand ranks 6th in the JAPAC region in the number of ransomware attacks. This trend is only expected to rise, as the availability of attack tools on the dark web for less than $10, the emergence of ransomware-as-a-service models and AI-based tools such as ChatGPT lower the barrier to entry for cybercriminals.

Considering the looming threats of an ever-smarter and technologically advanced hacking landscape, the cybersecurity industry must be equally resourced to fight such AI-powered exploits. In the long run, the industry's vision cannot be a swarm of human threat hunters sporadically trying to fix this with guesswork.

The need of the hour is to take intelligent action to neutralise these evolving threats. On the positive side, autonomous response technology is already addressing significant threats without human intervention. As AI-powered attacks become a part of everyday life, businesses, governments and individuals impacted by such automated malware must increasingly rely on emerging technologies such as AI and machine learning to mount their own automated response.


Using AI tools more responsibly and ethically

Businesses face a number of challenges in navigating the AI cybersecurity landscape, from technical complexities to human factors. In particular, considerable attention must be paid to the balance between machines, humans and ethical considerations.

Establishing corporate policies is critical to doing business ethically while improving cybersecurity. We need effective governance and legal frameworks that build trust that the AI technologies being implemented around us are safe, reliable and contribute to a just and sustainable world. The delicate balance between AI and humans will therefore emerge as a key factor in successful cybersecurity, in which trust, transparency and accountability supplement the benefits of machines.

Article By Sean Duca, Vice President and Regional Chief Security Officer for Asia Pacific & Japan at Palo Alto Networks
