<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<oembed><version>1.0</version><type>rich</type><width>560</width><height>140</height><title>Ep. 30 – LLM Capabilities: Scale, Precision, and Performance Trade-offs - This academic paper explores the impact of model scale and quantization on the performance of Large Language Models (LLMs), specifically the Llama 2-Chat and Mistral families, ranging from 7 billion to 70 billion parameters.</title><url>https://hockeymikey.mn/2025/07/11/ep-30-llm-capabilities-scale-precision-and-performance-trade-offs/</url><author_name>Mikey's research podcast - A really nice podcast generated using AI and love</author_name><author_url>https://hockeymikey.mn</author_url><thumbnail_url>http://hockeymikey.mn/wp-content/uploads/2025/06/M_Logo_beta-scaled.png</thumbnail_url><html>&lt;iframe width="560" height="140" src="https://hockeymikey.mn/2025/07/11/ep-30-llm-capabilities-scale-precision-and-performance-trade-offs/?standalonePlayer"&gt;&lt;/iframe&gt;</html></oembed>
