Forget Fine-Tuning: SAP’s RPT-1 Brings Ready-to-Use AI for Business Tasks

SAP aims to displace more general large language models with the release of its own foundational “tabular” model, which the company claims will reduce training requirements for enterprises. The model, called SAP RPT-1, comes pre-trained with business and enterprise knowledge out of the box. SAP calls it a Relational Foundation Model, meaning it […]

Snowflake builds new intelligence that goes beyond RAG to query and aggregate thousands of documents at once

Enterprise AI has a data problem. Despite billions in investment and increasingly capable language models, most organizations still can’t answer basic analytical questions about their document repositories. The culprit isn’t model quality but architecture: Traditional retrieval augmented generation (RAG) systems were designed to retrieve and summarize, not analyze and aggregate across large document sets. Snowflake […]
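The architectural point is worth making concrete. A toy sketch (not Snowflake's system, and using a trivial stand-in for vector search) shows why a top-k retriever handles summarization-style questions but silently gives wrong answers to aggregate questions, which require scanning the entire document set:

```python
# Toy illustration: top-k RAG retrieval vs. corpus-wide aggregation.
# The corpus and retriever here are hypothetical stand-ins.

docs = [
    {"id": i, "text": f"Contract {i}", "region": "EMEA" if i % 3 == 0 else "AMER"}
    for i in range(1000)
]

def retrieve_top_k(query, corpus, k=5):
    """Stand-in for vector search: returns only k documents."""
    return corpus[:k]

# A summarization question only needs the retrieved slice...
context = retrieve_top_k("summarize contract 2", docs)

# ...but an aggregate question ("how many EMEA contracts?") computed over
# the top-k slice undercounts; the true answer needs all 1,000 documents.
partial = sum(d["region"] == "EMEA" for d in retrieve_top_k("EMEA count", docs))
full = sum(d["region"] == "EMEA" for d in docs)
print(partial, full)  # the top-k count and the true corpus-wide count differ
```

The gap between `partial` and `full` is the problem described above: a retrieve-and-summarize pipeline never sees most of the corpus, so any analytical answer it produces is bounded by k.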

AI coding transforms data engineering: How dltHub’s open-source Python library helps developers create data pipelines for AI in minutes

A quiet revolution is reshaping enterprise data engineering. Python developers are building production data pipelines in minutes using tools that would have required entire specialized teams just months ago. The catalyst is dlt, an open-source Python library that automates complex data engineering tasks. The tool has reached 3 million monthly downloads and powers data workflows […]

Moving past speculation: How deterministic CPUs deliver predictable AI performance

For more than three decades, modern CPUs have relied on speculative execution to keep pipelines full. When it emerged in the 1990s, speculation was hailed as a breakthrough — just as pipelining and superscalar execution had been in earlier decades. Each marked a generational leap in microarchitecture. By predicting the outcomes of branches and memory […]

Large reasoning models almost certainly can think

Recently, there has been a lot of hullabaloo about the idea that large reasoning models (LRMs) are unable to think. This is mostly due to a research article published by Apple, “The Illusion of Thinking.” Apple argues that LRMs must not be able to think; instead, they merely perform pattern matching. The evidence they provided is […]