Attention ISN’T all you need?! New Qwen3 variant Brumby-14B-Base leverages Power Retention technique
When the transformer architecture was introduced in 2017 in the now-seminal Google paper “Attention Is All You Need,” it became an instant cornerstone of modern artificial intelligence. Every major large language model (LLM) — from OpenAI’s GPT series to Anthropic’s Claude, Google’s Gemini, and Meta’s Llama — has been built on some variation of […]
98% of market researchers use AI daily, but 4 in 10 say it makes errors — revealing a major trust problem
Market researchers have embraced artificial intelligence at a staggering pace, with 98% of professionals now incorporating AI tools into their work and 72% using them daily or more frequently, according to a new industry survey that reveals both the technology’s transformative promise and its persistent reliability problems. The findings, based on responses from 219 U.S. […]
Forget Fine-Tuning: SAP’s RPT-1 Brings Ready-to-Use AI for Business Tasks
SAP aims to displace more general large language models with the release of its own foundational “tabular” model, which the company claims will reduce training requirements for enterprises. The model, called SAP RPT-1, is a pre-trained model with business and enterprise knowledge out of the box. SAP calls it a Relational Foundation Model, meaning it […]
Inside Zendesk’s dual AI leap: From reliable agents to real-time intelligence with GPT-5 and HyperArc
Presented by Zendesk Agentic AI is currently transforming three key areas of work — creative, coding, and support — says Shashi Upadhyay, president of engineering, AI, and product at Zendesk. But he notes that support presents a distinct challenge. “Support is special because you’re putting an autonomous AI agent right in front of your customer,” […]
Snowflake builds new intelligence that goes beyond RAG to query and aggregate thousands of documents at once
Enterprise AI has a data problem. Despite billions in investment and increasingly capable language models, most organizations still can’t answer basic analytical questions about their document repositories. The culprit isn’t model quality but architecture: Traditional retrieval-augmented generation (RAG) systems were designed to retrieve and summarize, not analyze and aggregate across large document sets. Snowflake […]
Strengthening Our Core: Welcoming Karyne Levy as VentureBeat’s New Managing Editor
I’m thrilled to announce a fantastic new addition to our leadership team: Karyne Levy is joining VentureBeat as our new Managing Editor. Today is her first day. Many of you may know Karyne from her most recent role as Deputy Managing Editor at TechCrunch, but her career is a highlight reel of veteran tech journalism. […]
AI coding transforms data engineering: How dltHub’s open-source Python library helps developers create data pipelines for AI in minutes
A quiet revolution is reshaping enterprise data engineering. Python developers are building production data pipelines in minutes using tools that would have required entire specialized teams just months ago. The catalyst is dlt, an open-source Python library that automates complex data engineering tasks. The tool has reached 3 million monthly downloads and powers data workflows […]
The beginning of the end of the transformer era? Neuro-symbolic AI startup AUI announces new funding at $750M valuation
The buzzed-about but still stealthy New York City startup Augmented Intelligence Inc (AUI), which seeks to go beyond the popular “transformer” architecture used by most of today’s LLMs such as ChatGPT and Gemini, has raised $20 million in a bridge SAFE round at a $750 million valuation cap, bringing its total funding to nearly $60 […]
Moving past speculation: How deterministic CPUs deliver predictable AI performance
For more than three decades, modern CPUs have relied on speculative execution to keep pipelines full. When it emerged in the 1990s, speculation was hailed as a breakthrough — just as pipelining and superscalar execution had been in earlier decades. Each marked a generational leap in microarchitecture. By predicting the outcomes of branches and memory […]
Large reasoning models almost certainly can think
Recently, there has been a lot of hullabaloo about the idea that large reasoning models (LRMs) are unable to think. This is mostly due to a research article published by Apple, “The Illusion of Thinking.” Apple argues that LRMs must not be able to think; instead, they just perform pattern-matching. The evidence they provided is […]
