How KPMG is redefining the future of SAP consulting on a global scale
Presented by SAP
SAP consulting projects today involve a vast amount of documentation, multiple stakeholders, and compressed timelines, which often require manual knowledge retrieval from online SAP documentation. At the same time, cloud ERP programs now demand faster design cycles, continuous enhancements rather than big-bang rollouts, and near-real-time decision-making. Joule for Consultants, SAP’s conversational AI […]
Claude Code 2.1.0 arrives with smoother workflows and smarter agents
Anthropic has released Claude Code v2.1.0, a notable update to its “vibe coding” development environment for autonomously building software, spinning up AI agents, and completing a wide range of computer tasks, according to Head of Claude Code Boris Cherny in a post on X last night. The release introduces improvements across agent lifecycle control, skill […]
Databricks’ Instructed Retriever beats traditional RAG data retrieval by 70% — enterprise metadata was the missing link
A core element of any data retrieval operation is the use of a component known as a retriever. Its job is to retrieve the relevant content for a given query. In the AI era, retrievers have been used as part of RAG pipelines. The approach is straightforward: retrieve relevant documents, feed them to an LLM, […]
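The classic RAG flow described above can be sketched in a few lines. This is a toy illustration only: the keyword-overlap scoring, the sample corpus, and the prompt format are all assumptions for demonstration, not Databricks' Instructed Retriever (which, per the headline, adds enterprise metadata on top of this baseline).

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


# Illustrative corpus, not real enterprise data.
corpus = [
    "The retriever fetches relevant content for a query.",
    "LLMs generate answers conditioned on retrieved context.",
    "Unrelated note about office snacks.",
]

context = retrieve("how does a retriever find relevant content", corpus)

# In a full RAG pipeline, the retrieved context is prepended to the
# user question and the combined prompt is sent to an LLM.
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: ..."
```

Production retrievers replace the keyword overlap with dense vector similarity, but the pipeline shape — score, take top-k, stuff into the prompt — is the same.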
MiroMind’s MiroThinker 1.5 delivers trillion-parameter performance from a 30B model — at 1/20th the cost
Joining the ranks of a growing number of smaller, powerful reasoning models is MiroThinker 1.5 from MiroMind, with just 30 billion parameters, compared to the hundreds of billions or trillions used by leading foundation large language models (LLMs). But MiroThinker 1.5 stands out among these smaller reasoners for one major reason: it offers agentic research […]
Why AI feels generic: Replit CEO on slop, toys, and the missing ingredient of taste
Right now in the AI world, there are a lot of percolating ideas and experimentation. But as far as Replit CEO Amjad Masad is concerned, they’re just “toys”: unreliable, marginally effective, and generic. “There’s a lot of sameness out there,” Masad explains in a new VB Beyond the Pilot podcast. “Everything kind of looks the […]
Nous Research’s NousCoder-14B is an open-source coding model landing right in the Claude Code moment
Nous Research, the open-source artificial intelligence startup backed by crypto venture firm Paradigm, released a new competitive programming model on Monday that it says matches or exceeds several larger proprietary systems — trained in just four days using 48 of Nvidia’s latest B200 graphics processors. The model, called NousCoder-14B, is another entry in a crowded […]
Artificial Analysis overhauls its AI Intelligence Index, replacing popular benchmarks with ‘real-world’ tests
The arms race to build smarter AI models has a measurement problem: the tests used to rank them are becoming obsolete almost as quickly as the models improve. On Monday, Artificial Analysis, an independent AI benchmarking organization whose rankings are closely watched by developers and enterprise buyers, released a major overhaul to its Intelligence Index […]
New ‘Test-Time Training’ method lets AI keep learning without exploding inference costs
A new study from researchers at Stanford University and Nvidia proposes a way for AI models to keep learning after deployment — without increasing inference costs. For enterprise agents that have to digest long docs, tickets, and logs, this is a bid to get “long memory” without paying attention costs that grow with context length. […]
How Ralph Wiggum went from ‘The Simpsons’ to the biggest name in AI right now
In the fast-moving world of AI development, it is rare for a tool to be described as both “a meme” and AGI, artificial general intelligence, the “holy grail” of a model or system that can reliably outperform humans on economically valuable work. Yet, that is exactly where the Ralph Wiggum plugin for Claude Code now […]
TII’s Falcon H1R 7B can out-reason models up to 7x its size — and it’s (mostly) open
For the last two years, the prevailing logic in generative AI has been one of brute force: if you want better reasoning, you need a bigger model. While “small” models (under 10 billion parameters) have become capable conversationalists, they have historically crumbled when asked to perform multi-step logical deduction or complex mathematical proofs. Today, the […]
