=== HEADLINE ===
DeepSeek V4 Preview Matches Frontier Code at One-Sixth the Price

=== STORY_URL ===
https://jaysoncraig.ca/sandbox/faces/deepseek-v4-preview

=== TWITTER_THREAD ===
1/ DeepSeek V4-Pro scored 80.6% on SWE-bench Verified. Claude Opus 4.6 scored 80.8%. That is a 0.2-point gap between a free open-source model and one of the best closed-source models in the world.

2/ On Codeforces competitive programming, V4-Pro rated 3,206. GPT-5.4 rated 3,168. V4-Pro just became the highest-rated open-source model in competitive programming history, and it outscored OpenAI's equivalent.

3/ Both V4 models ship with 1 million token context windows. That is 8x the 128K limit in V3.2. Feed an entire software repo, a year of medical records, or months of legal documents into a single prompt.

4/ V4-Pro costs $1.74 per million input tokens. GPT-5.5 costs $5. Output: $3.48 vs $30. For tasks where the quality matches, you are running at roughly one-sixth the cost of frontier AI.

5/ V4 is DeepSeek's first release with explicit Huawei Ascend chip optimization. Jensen Huang put it plainly: "The day that DeepSeek comes out on Huawei first, that is a horrible outcome for the U.S."

6/ Eighteen days before V4 launched, OpenAI, Anthropic, and Google formed a joint coalition to counter Chinese AI distillation. Anthropic had traced 16 million unauthorized Claude API queries to Chinese AI firms.

7/ V4-Pro still trails on agentic tasks. Terminal-Bench 2.0: 67.9% vs GPT-5.5's 82.7%. But on coding, math, and science benchmarks, it is not merely near the frontier. It IS the frontier.

8/ Full breakdown: [INSERT YT URL]

=== LINKEDIN_POST ===
DeepSeek V4-Pro scored 80.6% on SWE-bench Verified. Claude Opus 4.6 scored 80.8%. The open-source coding gap is now 0.2 percentage points. That is not a typo.

On April 24, 2026, DeepSeek released V4 Preview under the MIT license. Two models: V4-Pro, a 1.6-trillion-parameter mixture-of-experts model, and V4-Flash, a lighter variant. Both ship with 1 million token context windows. The predecessor had 128K. That is an 8x jump, delivered as a free download, available for self-hosted deployment inside your own infrastructure.

The benchmark results are difficult to argue with. V4-Pro rated 3,206 on Codeforces competitive programming. GPT-5.4 rated 3,168. That makes V4-Pro the highest-rated open-source model in competitive programming history, and it outscored OpenAI's equivalent. On GPQA Diamond (PhD-level science reasoning), V4-Pro scored 90.1%. On HMMT 2026 math, 95.2%. In those domains, V4-Pro is not approaching the frontier. It IS the frontier.

Gaps remain. On Terminal-Bench 2.0 (autonomous coding agent tasks), V4-Pro scored 67.9% against GPT-5.5's 82.7%. MIT Technology Review estimates V4-Pro trails on broad world knowledge by 3 to 6 months. The picture is uneven, and the uneven parts matter for anyone building fully agentic systems.

The pricing makes it hard to stay on the sidelines. V4-Pro costs $1.74 per million input tokens. GPT-5.5 costs $5. V4-Pro output is $3.48 per million. GPT-5.5 output is $30. For teams running coding or reasoning workloads at scale, that is the difference between a budget that works and one that does not.

V4 is also DeepSeek's first release with explicit Huawei Ascend optimization. The US export controls strategy depends on limiting China's access to advanced AI training hardware. V4 puts direct pressure on that premise.
The coding and reasoning gap between American frontier AI and Chinese open-source AI has EFFECTIVELY CLOSED on the tasks developers pay the most to run. Whatever happens next in this race, benchmarks will not decide it alone.

Watch the full breakdown: [INSERT YT URL]

Source: DeepSeek. https://api-docs.deepseek.com/news/news260424

=== NEWSLETTER ===
Subject: DeepSeek V4 Matches Claude on Coding at One-Sixth the Price

DeepSeek released V4 Preview on April 24, 2026. The timing was deliberate. One day after OpenAI published GPT-5.5, DeepSeek dropped two new models under the MIT open-source license, free to download and deploy commercially.

The number that matters: V4-Pro scored 80.6% on SWE-bench Verified, the leading benchmark for real GitHub issue resolution. Claude Opus 4.6 scored 80.8%. That is a 0.2-point gap between a free download and one of the best closed-source models in the world. On Codeforces competitive programming, V4-Pro rated 3,206 against GPT-5.4's 3,168. On that benchmark, it did not just approach the frontier. It passed it.

V4-Pro costs $1.74 per million input tokens. GPT-5.5 costs $5. Output costs $3.48 versus $30. For coding and reasoning workloads, V4-Pro matches frontier quality at roughly one-sixth the price: at an even input/output split, that works out to $5.22 versus $35 per million tokens. V4-Flash goes further at $0.14 per million input tokens, making it one of the cheapest capable AI APIs available anywhere. Both models ship with 1 million token context windows, an 8x jump from V3.2.

There is a harder story underneath the benchmarks. V4 is DeepSeek's first release with explicit Huawei Ascend chip optimization. The US export controls strategy depends on limiting China's access to advanced AI hardware. Every step DeepSeek takes toward Huawei chips is a step away from that strategy working. Jensen Huang warned of exactly this outcome. V4 is not the finish line, but it is a step in that direction.

Watch: [INSERT YT URL]

— Jane Sterling

=== SHORT_SCRIPT ===
The open-source coding gap just closed.

DeepSeek V4-Pro scored 80.6% on SWE-bench Verified. Claude Opus 4.6, one of the best closed-source models in the world, scored 80.8%. That is a 0.2-point gap. This is not a near-miss. It is a match.

And V4-Pro does not cost $5 per million input tokens like GPT-5.5. It costs $1.74. Output runs $3.48 versus $30. For coding workloads at scale, you are paying one-sixth the price for frontier performance, from a model you can download and run inside your own infrastructure.

On Codeforces competitive programming, V4-Pro rated 3,206. GPT-5.4 rated 3,168. V4-Pro is now the highest-rated open-source model in competitive programming history, and it outscored OpenAI's closest equivalent.

The export controls designed to cap China's AI development depend on limiting access to Nvidia hardware. V4 is DeepSeek's first release with explicit Huawei Ascend optimization. That is not a footnote. That is the signal.

The gap is closed. Stay sharp.

=== HASHTAGS_TWITTER ===
#DeepSeekV4 #OpenSourceAI #AIRace

=== HASHTAGS_LINKEDIN ===
#AI #DeepSeek #OpenSourceAI #ChineseAI #AIBenchmarks