AI and all that Jazz

Meeting report – 23 September 2025

"AI" presented by Justin Flitter

Justin founded AI New Zealand in 2017, years before the generative AI boom. His background includes tech marketing and early work in natural language processing (NLP) — applications such as predictive text and sentiment analysis.
He described how large language models (LLMs) like ChatGPT work: they are trained on huge datasets to predict the next word in a sequence. He also noted that past "AI winters" were caused by a lack of data and computing power, both of which the modern era has remedied.
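The "predict the next word" idea can be illustrated with a toy sketch. This is a hypothetical bigram counter over a tiny made-up corpus, not how ChatGPT is actually built: real LLMs learn token probabilities with large neural networks trained on vastly more data.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: a crude stand-in for what
# an LLM learns statistically at enormous scale.
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follow[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

The same principle — always producing the statistically likeliest continuation — also underlies the hallucination problem discussed later: the model is built to give an answer, plausible or not.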

[Photo: a person standing in front of a projector screen]
AI is disrupting traditional workflows: research, consulting, reporting, presentation creation, and more. Tasks that once required large teams or hours of work can now often be completed quickly with AI tools.
Preserving institutional and organisational knowledge is a major use case: transcribing meetings, capturing founders' tacit knowledge, and helping companies retain what makes them unique. Other uses include chatbots for compliance and on-site worker safety.


Emerging AI modalities: text-to-image, text-to-video, digital avatars. Interfaces are shifting toward voice and vision rather than keyboard and mouse.
Caution & ethics: Justin referred to warnings from AI researchers (e.g. Geoffrey Hinton) about short-term profit motivations, misuse of AI, and the risk of machines becoming too controlling.
Practical examples: automating mundane tasks, such as placing an online order from a shopping list, generating architectural renders with AI, or asking questions about past data rather than digging through documents manually.
Interactive content and voice/vision-enabled tools: asking an AI questions mid-podcast, photo/vision apps that analyse landmarks, changing accents and voices in AI-generated content, and more.

An audience member asked about risks and concerns related to AI:
Hallucinations: Flitter explained that AI "hallucinations"—generating factually incorrect or nonsensical information—occur because the models are "incentivized to give an answer" even when there are gaps in their training data. This is a core aspect of how they're designed.
Job Displacement: AI is already replacing entry-level roles and some complex, senior-level work, creating a "punching pressure" in the workforce and making it harder for graduates to find jobs.
Threat to Business Models: The business model of consultancies is under threat because AI tools can now perform the research and report generation that previously required a team of people.
Misinformation and Disinformation: With AI-generated video and news, the lines between reality and fabrication are "going to become very, very gray very quickly." Flitter noted that this contributes to a bombardment of "quite convincing narratives" that manipulate what people think.
Safety and Privacy: Flitter addressed the concern that information typed into AI tools could become publicly available. While he stated that models typically don't work that way, he acknowledged this is a worry for many people.
Long-Term Existential Risk: An audience member raised the concerns of AI pioneer Geoffrey Hinton, who has cautioned that companies are developing AI for "short-term gain" and that it could one day start "controlling us." Flitter responded by saying "we're a while away from that" and that current models are not advanced enough to become an "artificial general intelligence" (AGI).

For a video of the full meeting, use the link below to view it on YouTube:

https://youtu.be/lMB55HyFcVE?si=N2qCdtrUqwSQ1I4N

Website of AI New Zealand:

https://newzealand.ai/