An Overview of Generative AI & LLM - Data Sorcerer Event 2025

Ekky Armandi

On July 3rd, 2025, I had the privilege of speaking at the Data Sorcerer Event about Generative AI and Large Language Models (LLM). This blog post summarizes the key insights from my presentation for those who couldn’t attend.

What is Generative AI?

Generative AI represents a paradigm shift in artificial intelligence. Unlike traditional AI systems that analyze and classify data, Generative AI creates entirely new content - text, images, code, audio, and even video - based on patterns learned from training data.

Traditional AI vs Generative AI

The fundamental difference lies in their purpose:

  • Traditional AI: Focuses on classification, prediction, and pattern recognition. It uses rule-based systems, decision trees, and neural networks to output labels, scores, and recommendations.

  • Generative AI: Creates original content from learned patterns using models like GPTs, VAEs, GANs, and diffusion models. It outputs new text, images, music, and code.

Key Drivers of Generative AI Adoption

  1. Creative Automation: Generates content at scale that previously required human expertise
  2. Natural Interaction: Conversational interfaces make AI accessible to non-technical users
  3. Cost Reduction: Automates knowledge work, reducing time from hours to seconds
  4. Rapid Prototyping: Enables quick iteration on ideas without specialized skills
  5. Cross-Industry Applicability: Transforms everything from customer service to drug discovery

What is a Large Language Model (LLM)?

LLMs are a specific type of Generative AI focused on text generation. These AI systems are trained on massive text datasets to understand and generate human-like text. Frontier models like Claude or GPT-4 are trained on more than 10 TB of text.

How LLMs Work

The process can be simplified as:

Text: "The cat sat"

[Tokenize] → ["The", "cat", "sat"]

[Embed] → [1.2, 0.8, 0.5] (convert to numbers)

[Attention] → "Which words matter most?"

[Transform] → Context-aware representations

[Predict] → Next word: "on" (70% probability)
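
As a hands-on illustration, here is a minimal sketch of that pipeline using the Hugging Face transformers library with the small GPT-2 checkpoint (my choice for the example; any causal LLM behaves similarly, and the exact probabilities will differ from the illustrative 70% above):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The cat sat"
inputs = tokenizer(text, return_tensors="pt")   # [Tokenize] -> token ids
with torch.no_grad():
    outputs = model(**inputs)                   # [Embed] + [Attention] + [Transform]

logits = outputs.logits[0, -1]                  # scores for the next token
probs = torch.softmax(logits, dim=-1)           # [Predict] -> probability distribution
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.1%}")

Running this prints the five most likely next tokens (continuations such as " on" typically rank near the top), which is exactly the [Predict] step from the diagram.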

Key Terminology

  • Tokenization: Breaking text into smaller units that the model can process
  • Prompt: The input text/instructions you give to the LLM
  • Inference: The process of running the model to generate outputs
  • Zero-shot Learning: The model performs a task without any examples in the prompt
  • Few-shot Learning: The model learns the task from a few examples included in the prompt (see the prompt sketch below)
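
To make the last two terms concrete, here is a hypothetical pair of prompts for the same sentiment-classification task (the reviews and labels are invented for illustration):

# Zero-shot: the task is described, but no solved examples are given.
zero_shot_prompt = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: 'The battery died after two days.'\n"
    "Sentiment:"
)

# Few-shot: a few solved examples are included before the real question.
few_shot_prompt = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: 'Arrived quickly and works great.'\nSentiment: positive\n"
    "Review: 'The screen cracked within a week.'\nSentiment: negative\n"
    "Review: 'The battery died after two days.'\nSentiment:"
)

The few-shot version gives the model a pattern to imitate, which usually makes the output format more consistent.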

AI Adoption Across Industries

AI adoption has grown rapidly across all regions:

  • North America leads with 82% adoption in 2024
  • Europe follows closely at 80%
  • Developing markets show the highest growth rate (49% to 77%, a 57% increase)

In Indonesia specifically, AI adoption has seen explosive growth, with 8 in 10 Indonesians already acquainted with AI tools. This growth is driven by:

  • Mobile affordability
  • Local-language interfaces
  • Platform integration in e-commerce, banking, and ride-hailing apps

Why Should You Learn AI?

1. Career Resilience & Opportunities

  • AI literacy is becoming as essential as computer skills
  • New roles emerging: AI prompt engineers, AI ethicists, AI trainers
  • Ability to augment your current role, making you 2-10x more productive

2. Personal Productivity Multiplier

  • Automate repetitive tasks (writing, data analysis, research)
  • Create content, code, and solutions faster with AI assistance
  • Make better decisions using AI-powered insights

3. Economic Advantage

  • Freelancers and entrepreneurs can compete with larger organizations
  • Early adopters can monetize AI skills through consulting or building AI products
  • Understanding AI helps identify opportunities and avoid scams

Ethics & Challenges

As AI becomes more powerful, ethical considerations become crucial:

For Creators/Developers:

  • Design systems to minimize bias in training data
  • Implement privacy safeguards and data protection
  • Build transparency and explainability features
  • Respect intellectual property in training datasets
  • Consider environmental impact of model training

For Users:

  • Verify AI-generated content for accuracy
  • Disclose when using AI-generated content
  • Avoid using AI for deceptive purposes (deepfakes, misinformation)
  • Respect copyright when prompting with others’ work
  • Maintain critical thinking rather than blind trust

Shared Responsibilities:

  • Consider impact on jobs and society
  • Advocate for responsible AI policies
  • Understand AI limitations
  • Prioritize human agency and decision-making

Audience Q&A Session

The presentation sparked engaging discussions with thoughtful questions from the audience:

“How to avoid getting addicted to using AI as a student?” - Nur Laila Sari

This is an excellent question about a challenge many students face today. Here's my advice:

  1. Use AI as a learning tool, not a crutch: Think of AI as a tutor that helps you understand concepts, not someone who does your homework
  2. Set boundaries: Allocate specific times for AI assistance and times for independent work
  3. Practice the fundamentals: Ensure you can solve problems without AI first, then use AI to explore advanced solutions
  4. Document your learning: Keep notes on what you learned from AI interactions, not just the answers
  5. Challenge yourself: Regularly attempt tasks without AI to maintain and build your skills

Remember: AI should enhance your learning journey, not replace it. The goal is to become more capable, not more dependent.

“How can vectors be turned into images so quickly?” - Sobirin Nur Imam

Great technical question! The speed of modern image generation comes from several innovations:

  1. Efficient Architecture: Modern models use optimized neural network architectures that process information in parallel rather than sequentially

  2. Latent Space Operations: Instead of working with raw pixels, models work in compressed “latent spaces” where vectors represent complex features. This is like working with a blueprint instead of building brick by brick

  3. Hardware Acceleration: GPUs and TPUs are designed for parallel matrix operations, processing thousands of calculations simultaneously

  4. Pre-training: Models are pre-trained on massive datasets, so generation is just “steering” the learned knowledge rather than creating from scratch

  5. Clever Algorithms: Techniques like diffusion models work by gradually refining noise into images through learned denoising steps, which can be optimized for speed (a simplified sketch of this loop follows the diagram below)

The transformation process simplified:

Text → Embeddings (milliseconds) → Latent Vectors (milliseconds) → Denoising Steps (1-2 seconds) → Final Image
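
To make step 5 more concrete, below is a heavily simplified sketch of a diffusion-style denoising loop. The denoiser and decode functions are hypothetical stand-ins for a trained noise-prediction network and a latent-to-image decoder; real systems such as latent diffusion models use more sophisticated update rules (schedulers) than the plain subtraction shown here:

import numpy as np

def generate_image(text_embedding, denoiser, decode,
                   num_steps=20, latent_shape=(4, 64, 64)):
    # Start from pure random noise in the compressed latent space
    # (tens of thousands of numbers instead of millions of pixels).
    latent = np.random.randn(*latent_shape)
    for step in reversed(range(num_steps)):
        # The trained network (hypothetical here) estimates which part of
        # the current latent is noise, conditioned on the text embedding.
        predicted_noise = denoiser(latent, step, text_embedding)
        # Remove a fraction of that noise; after a few dozen steps the
        # latent converges toward an image that matches the text.
        latent = latent - predicted_noise / num_steps
    # Decode the small latent back into a full-resolution RGB image.
    return decode(latent)

Because every step is just a batch of matrix operations on a small latent, the whole loop maps naturally onto GPUs and TPUs, which is why generation finishes in seconds rather than minutes.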

Live Demo: Using Claude Code

During the event, I demonstrated how to use Claude Code (Claude CLI) to create this blog post from presentation slides. This practical example showed how AI can streamline content creation workflows while maintaining quality and accuracy.

Conclusion

The key takeaways from this presentation:

  • Generative AI creates new content from learned patterns
  • LLMs are specialized Gen AI models for human-like text
  • Practical applications span across all industries
  • Ethical use requires transparency, accuracy, and human agency

The Future is Collaborative

AI is a tool, not a replacement for human creativity and judgment. Understanding AI capabilities helps us leverage it effectively in our daily work. The key is finding the right balance between automation and human expertise.

As we move forward, remember to:

  • Stay curious and keep learning
  • Adapt responsibly to new technologies
  • Use AI to augment, not replace, human capabilities

This presentation was delivered at the Data Sorcerer Event on July 3rd, 2025. If you’re interested in learning more about AI implementation or joining Data Sorcerer, feel free to reach out or visit their Instagram account @datasorcerers!

#AI #Technology #Speaking