Anthropic rolls out interactive data visualisations inside Claude's chat interface
Anthropic has launched a significant upgrade to Claude, its AI assistant, enabling it to generate interactive charts, diagrams, and visualisations directly inside chat conversations. The feature, now available in beta, marks a meaningful shift in how conversational AI communicates complex information, moving beyond walls of text towards dynamic, exploratory visuals that evolve as the discussion develops.
The capability builds on "Imagine with Claude", a concept Anthropic first previewed in late 2025 as a way to generate visuals without requiring users to write a single line of code. That preview has now matured into a live, in-conversation feature available across Claude's chat products.
By The Numbers
- Beta launch date: 12 March 2026, rolling out to Claude chat users globally
- Zero code required: Users trigger visualisations using natural language, such as "draw this as a diagram" or "visualise how this might change over time"
- App integrations: Claude now supports direct interaction with Figma, Canva, and Slack inside conversations
- Feature is on by default for Claude chat users, with Claude autonomously deciding when a visual aids understanding
- Recent format updates include structured recipe cards, weather visuals, and purpose-designed layouts for specific query types
What Claude's visualisation feature actually does
The new capability is deliberately distinct from Claude's existing Artifacts feature. Artifacts are polished, permanent outputs (full documents or standalone tools) saved to a side panel and designed for sharing or downloading. The new visualisations behave differently: they appear inline within the conversation itself, serve an explanatory purpose in the moment, and are intentionally temporary. As the conversation evolves, they change or disappear accordingly.
This is an important design distinction. Anthropic is not trying to replace Artifacts. It is adding a conversational layer of visual reasoning, one that adapts fluidly rather than sitting as a fixed output.
"Claude can create custom charts, diagrams and other visualizations in-line in its responses, and then tweak and modify its creations as the conversation develops." (Anthropic, Product Announcement, March 2026)
In practice, the use cases are intuitive. Ask Claude to explain compound interest and it generates an interactive curve users can manipulate directly. Ask about the periodic table and it builds a clickable visualisation where each element reveals further detail. The AI decides autonomously when a visual will aid comprehension, though users can explicitly request one using plain English.
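To make the compound interest example concrete: the curve Claude renders is ultimately plotting the standard future-value formula A = P(1 + r/n)^(nt). The sketch below is not Anthropic's implementation, just a minimal illustration of the values such an interactive curve would expose as a user drags the rate or time sliders (the function name and parameters are illustrative assumptions).

```python
def compound_interest(principal, annual_rate, years, compounds_per_year=12):
    """Future value under compound interest: A = P * (1 + r/n)^(n*t)."""
    return principal * (1 + annual_rate / compounds_per_year) ** (compounds_per_year * years)

# Points along the curve a user might explore interactively:
# £1,000 at 5% annual interest, compounded monthly, over 10 years.
curve = [round(compound_interest(1000, 0.05, t), 2) for t in range(0, 11)]
print(curve)
```

An interactive visualisation simply recomputes and redraws this list of points whenever the user changes an input, which is exactly the manipulation the feature promises without any of this code being written by the user.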
Inline versus Artifact: a quick comparison
| Feature | Inline Visualisations (New) | Artifacts (Existing) |
|---|---|---|
| Purpose | Aid understanding during conversation | Polished, shareable outputs |
| Placement | Inline within chat | Side panel |
| Persistence | Temporary, evolves with conversation | Permanent, can be downloaded |
| Code required | None | None (but often code-based output) |
| Interactivity | Yes, user can manipulate values | Varies by Artifact type |
Part of a broader Claude response overhaul
The visualisation launch is not a standalone announcement. It is the latest in a series of intentional upgrades to how Claude structures and presents responses. Earlier in 2026, Anthropic introduced purpose-designed formats for specific query types. Recipes now display with ingredients and steps formatted clearly. Weather queries produce a visual layout rather than a prose paragraph. These changes reflect a deliberate move away from the generic text block as Claude's default output mode.
Equally notable is the expansion of in-conversation app integrations. Users can now interact directly with Figma, Canva, and Slack from within Claude conversations, enabling workflows that previously required switching between multiple tools. For teams using Claude for design feedback, content production, or communication, this represents a meaningful reduction in friction.

This trajectory reflects a broader industry shift towards what researchers and practitioners increasingly call agentic AI, where models do not merely answer questions but actively construct, iterate, and integrate. If you want to understand what that shift looks like in practice, our explainer on what agentic AI actually means and why it matters is a solid starting point.
The Asia-Pacific Picture
For Asia-Pacific, the timing of this feature is particularly relevant. The region's AI adoption curve is steep and accelerating, with enterprises, educators, and governments across markets from Japan to India rapidly integrating AI assistants into daily workflows. The ability to generate interactive visualisations without technical expertise directly addresses a persistent barrier: the gap between AI fluency and data literacy.
In education-heavy markets such as South Korea, Singapore, and Taiwan, interactive visual explanations of complex subjects, from chemistry to financial modelling, have clear and immediate applications. Singapore's AI governance framework, one of the most developed in the region, has already begun addressing how AI tools should communicate with end users in transparent and interpretable ways. The move towards inline, contextual visualisations aligns with that interpretability agenda. You can read more about Singapore's first agentic AI governance framework and how it sets the tone for the region.
For Southeast Asia more broadly, where AI ambitions have sometimes outpaced the underlying data infrastructure, tools that make complex outputs legible to non-technical users are significant. As we have reported previously, Southeast Asia's AI ambitions have hit a data wall, and accessible, visual AI outputs could help bridge the comprehension gap even where raw data pipelines remain immature.
"The Asia-Pacific AI market is projected to account for a significant share of global AI investment by 2027, driven by demand across education, finance, and enterprise productivity sectors." (Gartner, Regional AI Forecast 2025)
Japanese enterprises, known for rigorous adoption timelines and emphasis on usability, have shown strong interest in AI tools that do not require developer intervention. Claude's no-code visualisation approach fits squarely within this preference. Similarly, India's fast-growing startup ecosystem, particularly in fintech and edtech, stands to benefit from AI tools that can generate financial charts, growth curves, and interactive product diagrams without dedicated engineering resources.
Implications for how we learn, work, and build
The deeper implication of Claude's new capability is about the changing role of the AI assistant itself. When an AI can generate a manipulable compound interest curve mid-conversation, it stops being a retrieval tool and starts functioning as a dynamic thinking partner. Users are no longer reading about a concept; they are exploring it.
This has real consequences for knowledge work. Analysts who previously needed dedicated data visualisation tools, or at minimum a data team, can now generate exploratory visuals in seconds. Educators can build interactive explanations on the fly. Strategists can visualise scenario modelling within the same conversational thread in which they are thinking through assumptions.
For developers and creators, the integration with Figma and Canva inside Claude conversations opens up genuinely new workflow possibilities. Design iteration, content mockups, and collaborative brainstorming can now happen without context-switching out of the AI interface. This connects directly to broader trends we have tracked in how vibe coding is reshaping software development, where the gap between idea and execution continues to narrow.
It is also worth noting what this is not. Claude's inline visualisations are not a replacement for dedicated enterprise data platforms or specialised analytics software. They are conversational aids, designed for rapid comprehension rather than publication-grade reporting. The distinction matters for organisations thinking about where Claude fits in their tooling stack. Early applications are likely to span several sectors:
- Education: Interactive science and maths visualisations without requiring specialised software
- Finance: On-the-fly charts for compound growth, scenario analysis, and portfolio comparisons
- Product and design: Rapid diagram iteration integrated with Figma and Canva workflows
- Strategy: Visual mapping of complex ideas within the same thread used to develop them
- Research: Exploratory data visualisation without requiring a data science background
What to expect as the beta develops
The feature is in beta, which means Anthropic is actively gathering feedback on how users interact with inline visualisations. Historically, Claude betas have moved relatively quickly towards general availability once core user experience signals are positive. The company has not specified a timeline for full rollout, but the decision to enable the feature by default suggests confidence in its stability.
Watch for expansion in the types of visualisations Claude can generate autonomously. Currently, the strongest demonstrated use cases involve mathematical and scientific concepts, charts tied to data inputs the user provides, and interactive reference materials like the periodic table example. Over time, expect richer support for geospatial data, real-time feeds, and more sophisticated multi-variable modelling.
The Figma, Canva, and Slack integrations are also worth monitoring. If Anthropic extends direct app integration to additional platforms, Claude's positioning shifts further from AI chatbot towards AI operating system layer for knowledge workers. That is a significant strategic ambition, and one that will attract close attention from competitors across the model landscape. For a broader look at how transformation initiatives can stall, our analysis of why AI transformation keeps failing remains essential reading for enterprise teams planning adoption.
Frequently Asked Questions
How do I get Claude to generate a visualisation in my chat?
The feature is enabled by default. Claude will autonomously decide when a visual aids comprehension, or you can request one directly using plain English, for example: "draw this as a diagram" or "show me how this changes over time". No coding or technical knowledge is required.
What is the difference between Claude's inline visualisations and Artifacts?
Inline visualisations appear within the conversation itself, are temporary, and evolve as the discussion progresses. Artifacts are polished, permanent outputs saved in a side panel, designed for sharing or downloading. The two features serve different purposes and operate simultaneously.
Is Claude's visualisation feature available in Asia-Pacific?
The beta launch on 12 March 2026 is described as a global rollout across Claude's chat products. Users in Asia-Pacific with access to Claude's chat interface should see the feature enabled by default, subject to Anthropic's standard regional availability.
The AIinASIA View: Inline, no-code visualisations are exactly the kind of friction-removing capability that will accelerate AI adoption among non-technical users across Asia-Pacific, particularly in education and SME finance. The real test is whether Anthropic can maintain quality as it scales the feature beyond its current use cases.
If you have already tried Claude's new visualisation beta, how is it holding up against the data and diagram tools you currently rely on? Drop your take in the comments below.
This is a developing story
We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


