Technology and Innovation Community

Stanford Cheatsheet on Transformers & LLMs

04-09-2025 16:13

Stanford's CME 295 Transformers & Large Language Models cheatsheet includes:

  • Transformers: self-attention, architecture, variants, optimization techniques (sparse attention, low-rank attention, flash attention); a minimal self-attention sketch follows this list
  • LLMs: prompting, fine-tuning (SFT, LoRA), preference tuning, optimization techniques (mixture of experts, distillation, quantization)
  • Applications: LLM-as-a-judge, RAG, agents, reasoning models (train-time and test-time scaling from DeepSeek-R1)
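
As a quick illustration of the first bullet, here is a minimal NumPy sketch of single-head scaled dot-product self-attention. The function name, shapes, and random data are illustrative assumptions for this post, not code from the cheatsheet itself.

```python
# Minimal sketch of single-head scaled dot-product self-attention
# (illustrative only; not taken from the CME 295 cheatsheet).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # scaled dot-product similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax over keys
    return weights @ V                          # attention-weighted mix of values

# Hypothetical usage with random data:
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                     # 4 tokens, model dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)             # -> shape (4, 8)
```

The sqrt(d_k) scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into near one-hot regions; the cheatsheet's optimization variants (sparse, low-rank, flash attention) all target the quadratic cost of the `scores` matrix above.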

Attachment(s)
cheatsheet-transformers-large-language-models.pdf (1.41 MB, 1 version, uploaded 04-09-2025)
