More AI guidelines on the way

The Office of the National Cyber Security Agency (NCSA) is developing artificial intelligence (AI) security guidelines, expected to be completed by August, to shape AI tech adoption in the country. The move complements the existing Generative AI Governance Guideline for Organisations and AI Governance Guideline for Executives, which were developed by a team of experts from the AI Governance Center. AVM Amorn Chomchoey, secretary-general of NCSA, said the draft of the AI Security Guideline focuses on security alignment with the Cybersecurity Act and Personal Data Protection Act (PDPA).

The guideline addresses the rapidly growing use of generative AI, or GenAI, amid risks to privacy and data security and potential impacts on employees and society. "As GenAI adoption rises in the digital age, clearer guidelines for the most proper way to adopt it have to be available," he said. AVM Amorn said the AI governance guidelines cover a broader range of content than the draft AI Security Guideline, spanning an understanding of GenAI, its benefits and limitations, its risks, and guidance on applying good governance.

He said several types of inefficient AI use could create threats and other problems, and these are critical concerns for AI adoption in 2025. Threats can emerge when enterprises or individuals increasingly use AI without a framework governing how personal data or information is fed into AI services, which could lead to data leaks.

"Therefore, even though using Gen AI can increase efficiency, there must be a proper approach to use," said AVM Amorn. Another concern is the use of AI for development without a mechanism to evaluate the use. Success with using AI differs from other IT tools because the latter can be measured, showing return on investment.

Using AI without achieving good efficiency may lead to incorrect answers, and users may be unable to promptly recognise the mistakes, he said. Many organisations outsource their IT work and lack proper backup systems and maintenance. In many cases, outsiders know more about the system than internal staff, leading to data leaks, said AVM Amorn.

Many small businesses operate without protocols adhering to the PDPA, and several do not have a data protection officer, he said. According to the ThaiCERT report for 2024, there were 1,827 cases of cyber-attacks, of which 124 were in the private sector. The top five types of cyber-attack included fake websites or URLs, data theft, and disruption of services through distributed denial-of-service attacks exploiting vulnerabilities in systems.

The top five attacked sectors were commerce, finance and banking, foreign commerce, retail, and IT and telecom.

Digital Economy and Society (DES) Minister Prasert Jantararuangtong said a recent survey found AI tech adoption in the country had risen significantly. A total of 17.8% of respondents said they had applied AI technology, up from 15.2% in 2023. Some 73.3% of respondents said they were considering applying AI tech, compared with 56.6% in last year's survey. Meanwhile, only 8.9% of respondents said they were not interested in AI adoption, down from 28.2% in 2023.

Mr Prasert said the DES Ministry determines policies and directions for digital development, including AI governance under the national AI action plan (2022-2027).