With the introduction of ChatGPT at the end of 2022, artificial intelligence (AI), once considered a distant concept, became a technology accessible to everyone and integrated into daily life. Its rapid adoption, reaching 100 million users within its first two months, was no coincidence[1]. Tech giants such as Google, Amazon, Microsoft, and Apple soon followed with generative AI platforms of their own. Today, AI assistants that speak, write code, and create images are part of our lives: generating visuals with Midjourney, writing code with Cursor, drafting text with ChatGPT, or following YouTube's recommendations has become commonplace. But how prepared are we for this rapid transformation?
According to OpenAI CEO Sam Altman, AI is the most powerful technology humanity has ever developed[2]. A study by Noy and Zhang at MIT found that white-collar workers using AI completed tasks 40% faster and reported greater job satisfaction[3]. However, another MIT study warns that despite these productivity gains, long-term AI use may reduce cognitive activity and creativity[4]. It is perhaps no coincidence that Oxford University Press named 'brain rot' its Word of the Year for 2024[4]. In short, while AI makes our work easier, it may also overshadow original thought. This raises an important question: is AI supporting us, guiding us, or limiting us?
This growing interest, and at times unease, is understandable. A KPMG study found that only 38% of Canadians reported at least a moderate understanding of AI, compared with a global average of 52%, and that only 24% of people worldwide had received any AI training[5]. Another study, involving 3,002 participants from the U.S., Germany, and China, found that the majority could not tell when the content presented to them had been generated by AI[6]. Among students specifically, a global survey of about 3,800 students from 16 countries found that 86% used AI tools like ChatGPT in their studies; yet 58% said they lacked sufficient AI knowledge and skills, 48% felt unprepared for an AI-driven work environment, and the vast majority considered AI skills important[7]. A UK-based survey similarly reported that 92% of students used AI tools in their academic work, up from 53% in 2024[8]. Meanwhile, the younger generation, often called digital natives, use these tools quite naturally but frequently do not understand how they work, what data underlies them, or the bias and privacy risks they carry. This points to a gap in education: AI literacy.
Even more concerning is the fact that these systems do not always produce accurate information. This phenomenon, known as "hallucination", occurs when an AI generates fabricated or misleading content. For example, when asked for historical information or academic references, models like ChatGPT may present non-existent sources, fictional authors, or distorted facts. A Stanford RegLab/HAI study reported that some legal AI systems produced incorrect or fabricated responses in 1 out of every 6 cases[9]. UNESCO emphasizes that AI's unique ethical and societal challenges require a type of literacy that goes beyond traditional ICT skills[10]. In this context, digital literacy alone is not sufficient; we also need AI literacy. This form of literacy entails not only knowing how to use the tools, but also questioning them, understanding their decision-making processes, and weighing productivity against ethical boundaries. The gap between those who use AI actively and those who use it passively appears to be one of awareness as much as productivity.
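As a small illustration of this questioning habit, the sketch below checks whether a DOI cited by a chatbot actually resolves in the CrossRef registry. This is a minimal sketch, not a method from the studies cited above: the DOI shown is hypothetical, and CrossRef is only one of several databases one could consult.

```python
# Minimal sketch: verify that a reference suggested by an AI assistant
# actually exists, by querying the public CrossRef REST API.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if CrossRef has a record for the given DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical DOI, e.g. one produced by a chatbot when asked for sources:
candidate = "10.1000/fake-doi-from-a-chatbot"
print("verifiable" if doi_exists(candidate) else "possibly hallucinated")
```

A check like this does not prove a citation is relevant or correctly summarized, but it catches the most basic failure mode: a source that simply does not exist.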
Therefore, AI literacy is not merely about learning to code. It is the ability to question how these systems work, what data they are trained on, who makes the decisions, and what risks they entail. A good starting point is defining artificial intelligence and explaining core concepts such as machine learning and deep learning, then exploring the historical development of AI and its applications in daily life (e.g., maps, voice assistants, recommendation systems). Technical knowledge should follow: how data-driven systems function, the roles of algorithms and models, methods such as supervised and unsupervised learning, and the processes of model training and inference (a minimal example follows below). Practical applications can be explored through domains such as natural language processing, large language models, image recognition, and recommendation systems. In terms of social impact, topics such as algorithmic bias, data privacy, the effect of automation on labor, and controversial applications (e.g., facial recognition) should be analyzed in depth. Ethical use of AI (e.g., 18% of students report submitting AI-generated content directly in their assignments[8]), misinformation through hallucination, human-centered design, and responsible use of generative AI (e.g., avoiding plagiarism and deepfakes) are other critical issues. Finally, essential 21st-century skills such as critical thinking, evaluating the reliability of tools, working with multimodal systems, and lifelong learning should be included in both educational and research agendas. Addressing these topics will prepare individuals to use technology consciously, ethically, and effectively.
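To make the training and inference steps mentioned above concrete, here is a minimal supervised-learning sketch in Python. The choice of scikit-learn, the iris dataset, and a decision tree is purely illustrative, not something the text prescribes.

```python
# Minimal sketch of the supervised-learning workflow: a model is trained
# (fit) on labeled examples, then used for inference (predict/score) on
# data it has never seen. Uses scikit-learn's bundled iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # features and labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = DecisionTreeClassifier().fit(X_train, y_train)  # training
accuracy = model.score(X_test, y_test)                  # inference + evaluation
print(f"held-out accuracy: {accuracy:.2f}")
```

Broadly speaking, the same fit-then-predict pattern underlies much larger systems, including the language models discussed earlier; what changes is the data, the model architecture, and the scale.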
Yunus Can Bilge, June 2025