On 27 November 2025, I attended a meetup at The Hague University of Applied Sciences, titled Pie and AI. I’ve biked past this place many times, but this was the first time I went in, and I must say it was quite nice.
It was an interesting meetup, and my intention in writing this post is to record what I learned there. The topic of the meetup this time around was Re-imagining Education in the Age of AI. Now is perhaps a good time to discuss this. The field of AI is not new — John McCarthy coined the term Artificial Intelligence in 1955, and Turing proposed the Turing test even earlier, in 1950. But with the advent of Large Language Models, and ever-easier access to them as companies rush to release the “best” models first, perhaps the way we learn is changing. The meetup brought three computer science educators together to discuss education in the age of AI.
The panel was moderated by Fiona Passantino. Joining the panel were Sonia de Jager, a researcher working on the philosophy of (natural) language modulation; Felienne Hermans, Professor of Computer Science Education at VU Amsterdam; and Steffen Steinert, Assistant Professor in the Ethics and Philosophy of Technology at TU Delft. It was a good turnout, with more than fifty people in the audience.
The host, Vikram Radhakrishnan, gave an introduction to the meetup — it has been running since 2017, covering a diverse set of topics presented by invited speakers. It also gives the institute opportunities to nurture industry relationships. Following this, Fiona, the moderator, introduced the day’s discussion. Fiona is a trainer working in the field of AI education. She made the argument that the calculator was once thought to be a device that dulled one’s arithmetic abilities and was banned from exams — but that is no longer the case, as calculators have become extremely common, even in exams. Similarly, when the internet was not yet widely accessible, people believed it might hinder learning because one could look up information so easily.
She posed the question: are we at a similar juncture of skepticism with AI and its effect on education and learning? She argued that industry has already begun moving toward heavy use of AI, and perhaps students coming out of university are not fully prepared, as they do not get the chance to develop these skills during their education. She mentioned that the EU AI Act has mandated that employers are responsible for ensuring AI literacy among the workforce — but what about students? She added that China has committed to achieving 100% AI literacy among its citizens.
This was followed by introductory talks from the panelists. Dr. Felienne Hermans started by describing how radical change has been a common theme across history — from radio, TV, video, the internet, and MOOCs to now GenAI. The argument that GenAI provides a personalised learning experience is also not new, as Teaching Machines were proposed as early as the 1920s.
Following this, Dr. Sonia de Jager began by posing a question: What and whom is education for? She argued that we want to provide students with reliable information, and that information produced by LLMs is not necessarily fact-checked. She concluded with another question: given that GenAI is here to stay, what about the commodification of learning, the gatekeeping of learning, and scientific integrity?
The last talk before the panel discussion, by Dr. Steffen Steinert, covered Values and AI in Education. He began by highlighting general ethical issues with AI — bias, massive energy use, intellectual property concerns, and the clickwork needed to enable it. But the bigger question is whether AI aligns with educational values: teacher autonomy, personal development, inclusion, socialization, etc. He noted that the relevance of these values is changing and that new values, such as efficiency, are emerging, possibly leading to value conflicts.
There was a panel discussion next, with Fiona and later the audience posing questions to the panelists. The first question was about their opinions on students using AI — a fitting question to start with, given that all of them are educators in CS. They responded that they neither forbid it nor actively encourage it, and that at the beginning of a course they give students a disclaimer to use AI at their own discretion. Dr. Hermans also highlighted that one cannot become a creator by remaining only a consumer. A parallel was drawn between social media and LLMs — though one thing to note here is that there is scientific evidence of social media’s harms, whereas there are not yet many studies in that direction for LLMs. Overall, none of the panelists was particularly pro-AI, and they did not have an especially charitable opinion of LLMs.
Casual conversations with attendees and speakers after the session were also interesting, with plenty of thought-provoking discussion about how AI is affecting how we learn and how we work. A fellow attendee shared that her daughter, who is studying for an undergraduate degree, went through multiple “AI interviews” for internships without ever talking to a human. Later, there were conversations about the sustainability of large AI companies and their plans to consume ever more energy and build data centres at an accelerating pace. All this made me think about the topic not only from a learning point of view but also from an environmental-impact point of view. It remains to be seen how LLMs evolve, with large tech companies rushing to release the best models.
Overall I really enjoyed being there and look forward to attending the event again.