San Francisco-based AI startup Anthropic has unveiled Claude 2.1, an upgrade to its language model that boasts a 200,000-token context window, vastly outpacing the recently released 128,000-token GPT-4 Turbo model from OpenAI.
The release comes on the heels of an expanded partnership with Google that provides Anthropic access to advanced processing hardware, enabling the substantial expansion of Claude’s context-handling capabilities.
Our new model Claude 2.1 offers an industry-leading 200K token context window, a 2x decrease in hallucination rates, system prompts, tool use, and updated pricing.
Claude 2.1 is available over API in our Console, and is powering our https://t.co/uLbS2JNczH chat experience. pic.twitter.com/T1XdQreluH
— Anthropic (@AnthropicAI) November 21, 2023
With the ability to process lengthy documents like full codebases or novels, Claude 2.1 is positioned to unlock new potential across applications from contract analysis to literary study.
The 200K token window represents more than just an incremental improvement—early tests indicate Claude 2.1 can accurately grasp information from prompts over 50 percent longer than GPT-4 before the performance begins to degrade.
Claude 2.1 (200K Tokens) – Pressure Testing Long Context Recall
We all love increasing context lengths – but what’s performance like?
Anthropic reached out with early access to Claude 2.1 so I repeated the “needle in a haystack” analysis I did on GPT-4
Here’s what I found:… pic.twitter.com/B36KnjtJmE
— Greg Kamradt (@GregKamradt) November 21, 2023
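For context, the "needle in a haystack" test referenced in the thread above plants a single out-of-place fact at varying depths inside a long filler document and then asks the model to retrieve it. Below is a minimal sketch of that idea using the Anthropic Python SDK's Messages API; the model name, filler text, and prompt wording are illustrative assumptions, not Kamradt's actual harness.

```python
# Minimal "needle in a haystack" probe (illustrative sketch, not the original test harness).
# Assumes the Anthropic Python SDK is installed and ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()

FILLER = "The quick brown fox jumps over the lazy dog. " * 4000  # background text; scale up to stress the 200K window
NEEDLE = "The secret passphrase is 'blue-harbour-42'."           # the fact the model must recover

def probe(depth: float) -> str:
    """Insert the needle at a relative depth (0.0-1.0) in the filler and ask the model to find it."""
    cut = int(len(FILLER) * depth)
    haystack = FILLER[:cut] + "\n" + NEEDLE + "\n" + FILLER[cut:]
    response = client.messages.create(
        model="claude-2.1",
        max_tokens=100,
        messages=[{
            "role": "user",
            "content": haystack + "\n\nWhat is the secret passphrase? Answer with the passphrase only.",
        }],
    )
    return response.content[0].text

# Sweep the needle through the document and check recall at each depth.
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(depth, probe(depth))
```

Sweeping both the needle's depth and the overall document length gives the recall heatmaps shown in the thread.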
Anthropic also touted a 50 percent reduction in hallucination rates for Claude 2.1 over version 2.0. Increased accuracy could put the model in closer competition with GPT-4 in responding precisely to complex factual queries.

Additional new features include beta "tool use", which lets Claude call developer-defined functions and APIs as part of existing workflows, and "system prompts" that allow users to define Claude's tone, goals, and rules at the outset for more personalised, contextually relevant interactions. For instance, a financial analyst could direct Claude to adopt industry terminology when summarising reports.
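As a rough illustration of how a system prompt like the financial-analyst example above might look in practice, here is a minimal sketch using the Anthropic Python SDK; the exact prompt wording and placeholder user message are assumptions for illustration.

```python
# Sketch: steering Claude 2.1 with a system prompt via the Anthropic Python SDK.
# Assumes ANTHROPIC_API_KEY is set in the environment; prompt text is illustrative.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-2.1",
    max_tokens=512,
    # The system prompt fixes tone, goals, and rules before the conversation starts.
    system=(
        "You are an equity research analyst. Use standard financial terminology "
        "(EBITDA, YoY, basis points) and keep summaries under 200 words."
    ),
    messages=[
        {"role": "user", "content": "Summarise this Q3 earnings report: ..."},
    ],
)

print(response.content[0].text)
```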
However, the full 200K token capacity remains exclusive to paying Claude Pro subscribers for now. Free users will continue to be limited to Claude 2.0’s 100K tokens.
As the AI landscape shifts, Claude 2.1’s enhanced precision and adaptability promise to be a game changer—presenting new options for businesses exploring how to strategically leverage AI capabilities.
With its substantial context expansion and rigorous accuracy improvements, Anthropic’s latest offering signals its determination to compete head-to-head with leading models like GPT-4.
(Image Credit: Anthropic)
See also: Paul O’Sullivan, Salesforce: Transforming work in the GenAI era

Source: https://www.artificialintelligence-news.com/2023/11/22/anthropic-upsizes-claude-2-1-to-200k-tokens-nearly-doubling-gpt-4/