- Publisher: Mercury Learning and Information
- Published: 6th December
- ISBN: 9781501523564
- Language: English
- Pages: 1012 pp.
- Size: 6" x 9"
- Publisher: Mercury Learning and Information
- ISBN: 9781501520938
- Language: English
- Pages: 1012 pp.
- Size: 6" x 9"
E-books are distributed via VitalSource
VitalSource offers a seamless way to access the e-book and adds features such as text-to-speech. You own your e-book for life; it is simply hosted on the vendor's website, working much like Kindle and Nook.
- Publisher: Mercury Learning and Information
- ISBN: 9781501520952
- Language: English
- Pages: 1012 pp.
- Size: 6" x 9"
This book offers a thorough exploration of Large Language Models (LLMs), guiding developers through the evolving landscape of generative AI and equipping them with the skills to use LLMs in practical applications. Designed for developers with a foundational understanding of machine learning, it covers essential topics such as prompt engineering techniques, fine-tuning methods, attention mechanisms, and quantization strategies for optimizing and deploying LLMs.

Beginning with an introduction to generative AI, the book explains the distinctions between conversational AI and generative models such as GPT-4 and BERT, laying the groundwork for prompt engineering (Chapters 2 and 3). Among the LLMs used to generate completions for prompts are Llama 3.1 405B, Llama 3, GPT-4o, Claude 3, Google Gemini, and Meta AI. Readers learn the art of crafting effective prompts, including advanced methods such as Chain of Thought (CoT) and Tree of Thought (ToT) prompting.

As the book progresses, it details fine-tuning techniques (Chapters 5 and 6), demonstrating how to customize LLMs for specific tasks through methods such as LoRA and QLoRA, with Python code samples for hands-on learning. Readers are also introduced to the attention mechanism of the transformer architecture (Chapter 8), with step-by-step guidance on implementing self-attention layers. For developers aiming to optimize LLM performance, the book concludes with quantization techniques (Chapters 9 and 10), exploring strategies such as dynamic quantization and probabilistic quantization that reduce model size without sacrificing performance.
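To give a flavor of the attention material, here is a minimal sketch of scaled dot-product self-attention, the computation Chapter 8 walks through; the use of PyTorch, the function name, and the dimensions are illustrative assumptions, not code taken from the book.

```python
# Minimal self-attention sketch (scaled dot-product attention).
# Names and sizes are illustrative, not taken from the book.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); each w_*: (d_model, d_k) projection matrix."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v        # project inputs to Q, K, V
    scores = q @ k.T / (k.shape[-1] ** 0.5)    # scaled dot-product scores
    weights = F.softmax(scores, dim=-1)        # each row sums to 1
    return weights @ v                         # weighted sum of the values

d_model, d_k, seq_len = 16, 8, 4
x = torch.randn(seq_len, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([4, 8])
```

Each output row is a value vector blended across all tokens according to the attention weights, which is the core idea the transformer builds on.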
FEATURES:
- Covers the full lifecycle of working with LLMs, from model selection to deployment
- Includes practical Python code samples for implementing prompt engineering, fine-tuning, and quantization (see the sketch after this list)
- Teaches readers to enhance model efficiency with advanced optimization techniques
- Includes companion files with code and images, available from the publisher for download (with proof of purchase)
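As a rough sketch of the fine-tuning workflow covered in Chapters 5 and 6, the snippet below attaches LoRA adapters to a small model using the Hugging Face peft library; the library choice, base model, and hyperparameters are assumptions for illustration rather than the book's own code.

```python
# LoRA setup sketch with Hugging Face peft (an assumption; the book's
# fine-tuning chapters may use different libraries or settings).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in model

# LoRA trains low-rank adapter matrices instead of the full weights.
config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=32,              # scaling factor for the adapter output
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

Because only the adapters are trainable, fine-tuning fits on far smaller hardware than full-parameter training of the same model.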
CONTENTS:
1: The Generative AI Landscape
2: Prompt Engineering (1)
3: Prompt Engineering (2)
4: Well-Known LLMs and APIs
5: Fine-Tuning LLMs (1)
6: LLMs and Fine-Tuning (2)
7: What is Tokenization?
8: Attention Mechanism
9: LLMs and Quantization (1)
10: LLMs and Quantization (2)
Index
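Chapters 9 and 10 close the book with quantization. The following is a minimal sketch of dynamic quantization using PyTorch's built-in utility; the tooling and layer sizes are illustrative assumptions, since the book may present other strategies such as probabilistic quantization.

```python
# Dynamic quantization sketch in PyTorch (an illustrative assumption;
# the book's quantization chapters may use different tools).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Dynamic quantization stores Linear weights as int8 and quantizes
# activations on the fly at inference time, shrinking the model with
# little accuracy loss for many workloads.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(qmodel)  # the Linear layers are now DynamicQuantizedLinear
```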
Oswald Campesato
Oswald Campesato specializes in Deep Learning, Python, Data Science, and generative AI. He is the author or co-author of more than forty-five books, including Google Gemini for Python, Large Language Models, and GPT-4 for Developers (all Mercury Learning).