Tutorial
Large Language Models: A Deeper Dive into Architectures, Applications, Adaptation, and Security Risks
Dr. Antonio Emanuele Cinà
  • DIBRIS, University of Genoa, Italy
Abstract
Generative AI, especially large language models (LLMs), has recently gained significant attention thanks to rapid technical advances and widespread media coverage, driven largely by successful commercial products. These models, trained on vast datasets, have demonstrated remarkable abilities in generating diverse content, often mimicking human-like capabilities. As a result, LLMs are increasingly integrated into industrial applications such as automated customer service, content creation, and advanced data analysis. Despite their impressive performance, LLMs remain vulnerable to adversarial attacks, which can lead to data leakage or misuse. Even the latest models, such as GPT-4, still struggle to consistently follow the guidelines initially set by their developers.
This tutorial provides an introduction to Generative AI with a focus on LLMs. It begins by discussing foundational concepts, such as the difference between Discriminative and Generative AI, key components like tokenizers and transformers, and practical applications of LLMs. The tutorial then addresses security and privacy concerns, including adversarial examples, prompt injection attacks, and data extraction risks. Finally, it explores techniques for enhancing LLM performance, such as few-shot learning, prompt engineering, fine-tuning, and Retrieval-Augmented Generation (RAG).
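As a concrete illustration of the few-shot learning idea mentioned above, the minimal Python sketch below assembles a sentiment-classification prompt from a handful of labeled in-context examples; the task, examples, and template are purely illustrative and are not taken from the tutorial materials.

```python
# A minimal sketch of few-shot prompting: the model is steered by
# in-context examples placed in the prompt, not by weight updates.
# All examples and labels here are hypothetical.
few_shot_examples = [
    ("The food was great!", "positive"),
    ("Terrible service.", "negative"),
]

def build_prompt(query: str) -> str:
    """Assemble a few-shot classification prompt from labeled examples."""
    lines = ["Classify the sentiment of each review."]
    for text, label in few_shot_examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final, unlabeled query invites the model to continue the pattern.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(build_prompt("Average at best."))
```

The same pattern underlies prompt engineering more broadly: behavior is shaped through the input text alone, which is what makes it both cheap to apply and, as the security portion of the tutorial discusses, open to manipulation.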
Biography
Antonio Emanuele Cinà has been an assistant professor (RTDA) at the University of Genoa, Italy, since June 2023. He received his Ph.D. (cum laude) in Computer Science from Ca' Foscari University of Venice in 2023, defending a thesis on the vulnerabilities and emerging risks arising from the malicious use of training data in AI. His research interests span the security of AI systems and the study of their trustworthiness, with primary expertise in training-time (poisoning) and inference-time (evasion) attacks. Recently, he has been investigating the capabilities of Generative AI models (LLMs), exploring the security aspects of these cutting-edge systems and how this technology can be integrated to optimize user applications.