One AI educator explains why we need to revamp tech and moral education systems quickly before advances in AI run amok

With all the recent global attention focused on the latest versions of ChatGPT and similar competing generative-AI chatbots, professionals in every industry are now more likely than ever to see AI as a decisive factor in their career futures.

Like it or not, workers and entrepreneurs alike will have to embrace AI even as they fear its ominous potential. In Singapore, the co-founder of one AI training firm even looks forward to some of the unsavory disruptions that generative AI could bring overnight.

DigiconAsia.net tapped the brain of Kong Yu Ning, Co-Founder of Heicoders Academy, for the rationale behind his sassy enthusiasm.

DigiconAsia: Could you share some insights on the rise of AI to ChatGPT’s level of sophistication, and on some of the current controversies impacting how it is used? 

Kong Yu Ning (KYN): ChatGPT is a natural language processing (NLP) model that achieves human-level performance on wide-ranging tasks, built on two key advancements:

    • Transfer learning: The technique of ‘pre-training’ a model on a large dataset before fine-tuning it for a specific task. This approach allows ChatGPT to leverage high-quality pre-trained models that need only a smaller task-specific dataset to achieve good performance.
    • Transformer architecture: This is a neural network architecture, introduced in 2017, that has since underpinned state-of-the-art NLP models. Transformers are particularly well suited to processing sequential data, such as sentences or paragraphs of text.
Kong Yu Ning, Product Manager, Cake DeFi, Co-Founder, Heicoders Academy

Collectively, these breakthroughs have contributed to GPT-4’s human-level performance on academic and professional tasks, such as taking exams and passing Google’s entry-level software engineering assessment.
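At the heart of the transformer architecture mentioned above is scaled dot-product attention, in which each token's output is a weighted mix of all tokens' values. The following is a minimal NumPy sketch for illustration only; it is not ChatGPT's actual implementation, and the toy matrices are invented:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted average of the rows of V,
    weighted by how strongly the query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq_q, seq_k) similarity scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

# Toy example: a "sentence" of 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one mixed representation per token
```

Because every token attends to every other token in one matrix multiplication, the mechanism handles whole sentences or paragraphs at once, which is what makes transformers well suited to sequential text.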

The underlying issue behind controversies is that our systems and institutions are lagging behind the accelerated advancement of AI applications that will soon displace conventional rote learning and mechanical skills.

Just as it was inconceivable for students in the 1970s to use calculators during math exams, today’s cohorts use such aids routinely. It is therefore imperative for institutions to forge a new education paradigm that complements AI technology, to train creative, non-mechanical problem solvers for the real world.

DigiconAsia: How will the evolving technological implications and cyber trends arising from such advanced AI affect how AI education curricula are developed quickly enough to produce ethical programmers and technology managers?

KYN: Broadly, there are three areas in which advancements in AI will and should affect education curricula:

    • Grounded in ethics: AI has the potential to both improve and harm society, and it is essential that trainee programmers and future tech managers understand the ethical implications of their work. Therefore, AI curricula must place a strong emphasis on ethical considerations and provide students with a deep understanding of the potential societal impacts of AI technology.
    • Coverage of latest advancements: As AI is a rapidly evolving field, it is critical that AI education curricula keep pace with the latest trends and advancements in the field. This requires ongoing fine-tuning and curation to ensure students are receiving the most up-to-date and relevant education.
    • Emphasis on interdisciplinary education: AI is a highly interdisciplinary field that draws on computer science, psychology, mathematics, and statistics. Therefore, AI education curricula must be designed to provide students with a broad range of knowledge and skills, ensuring that programmers and technology managers develop a deep understanding of the complex issues involved in AI development and management.

DigiconAsia: What global AI training resources can educators rely on to achieve the three areas you listed?

KYN: One of them is Kaggle, the largest online data science community and platform, which hosts machine learning competitions for data scientists and machine learning enthusiasts. Participants can submit their models to compete for prizes, recognition, and job opportunities.

Here at my academy, we encourage trainees to make full use of Kaggle by hosting competitions for them, based on real-world datasets and problems such as credit default prediction.

Teams submit AI-generated predictions to a live leaderboard, where they are ranked in real time against other teams on model performance. We find that this modality drives students to learn beyond what is taught in the course, and provides a meaningful way to measure students’ learning outcomes.
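The leaderboard mechanic described above can be sketched in a few lines of Python. The team names, predictions, and hidden labels here are all invented, and the scoring metric (simple accuracy against a hidden test set, for a credit-default-style yes/no problem) stands in for whatever metric a real competition would use:

```python
# Hidden test-set labels (1 = borrower defaulted, 0 = did not)
hidden_labels = [1, 0, 0, 1, 1, 0, 1, 0]

# Hypothetical submissions: one list of predictions per team
submissions = {
    "team_alpha": [1, 0, 0, 1, 0, 0, 1, 0],
    "team_beta":  [1, 1, 0, 1, 1, 0, 0, 0],
    "team_gamma": [1, 0, 0, 1, 1, 0, 1, 1],
}

def accuracy(preds, labels):
    # Fraction of predictions that match the hidden labels
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Score every submission and rank teams from best to worst
leaderboard = sorted(
    ((name, accuracy(preds, hidden_labels)) for name, preds in submissions.items()),
    key=lambda entry: entry[1],
    reverse=True,
)

for rank, (name, score) in enumerate(leaderboard, start=1):
    print(f"{rank}. {name}: {score:.3f}")
```

Re-running the scoring loop on each new submission is what keeps the leaderboard "live": every upload immediately re-ranks the teams.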

DigiconAsia: Without regard for the law or ethics, fraudsters, cybercriminals and state-sponsored actors are likely to outpace the mainstream AI community in the war of Good AI vs Malicious AI. What are your perspectives on this trend?

KYN: The abuse of AI and ML technologies by state-sponsored actors, fraudsters, and cybercriminals is a significant concern. These actors can leverage AI and other advanced technologies to amplify their attacks, automate their operations, and evade detection. They can also use AI to create deepfakes, generate phishing emails, and craft sophisticated social engineering attacks.

However, given that AI was developed to assist humans and the field has been progressing at breakneck speed, it can be challenging — even futile — to stifle AI development.

A potential approach to regulate AI development is to establish international standards and best practices for the development and deployment of AI technologies. This requires close collaboration between governments, the commercial tech industry, and research communities to develop a shared understanding of the ethical and legal implications of AI technologies.

It is also worthwhile to explore how AI technology itself can be employed to detect cybercrime and the consequences of AI abuse. As AI continues to advance, regulatory frameworks alone will not be enough to address AI abuse. So, what better way to prevent AI abuse than with AI itself?
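As a toy illustration of this AI-versus-AI idea, here is a minimal naive Bayes text classifier that flags phishing-style emails. The training messages and labels are entirely invented for the example; production systems use far larger datasets and richer models:

```python
import math
from collections import Counter

# Invented training data: (email text, label) pairs
train = [
    ("verify your account password urgently", "phish"),
    ("click this link to claim your prize", "phish"),
    ("your account has been suspended click here", "phish"),
    ("meeting notes attached for review", "ham"),
    ("lunch tomorrow with the project team", "ham"),
    ("quarterly report draft for your review", "ham"),
]

def train_nb(data):
    # Count word occurrences per class, and how many emails per class
    word_counts = {"phish": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in data:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    # Pick the class with the highest log prior + log likelihood,
    # using add-one (Laplace) smoothing for unseen words
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_logp = None, -math.inf
    for label, counts in word_counts.items():
        logp = math.log(label_counts[label] / total)
        denom = sum(counts.values()) + len(vocab)
        for word in text.split():
            logp += math.log((counts[word] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

wc, lc = train_nb(train)
print(predict("urgently verify your password", wc, lc))      # -> phish
print(predict("draft notes for the team meeting", wc, lc))   # -> ham
```

The same pattern, scaled up, underlies many spam and phishing filters: the defending model learns the statistical fingerprints of malicious content, including content that was itself machine-generated.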

DigiconAsia thanks Yu Ning for sharing his insights about generative AI disruptions.