Insider Brief PRESS RELEASE — Origin Lab, the technology platform turning licensed game worlds into structured training data for world models and multimodal AI, announced an $8M seed round led by Lightspeed Venture Partners. The financing will accelerate Origin Lab’s software, capture, enrichment, QA, search, and delivery systems, while expanding its applied research work in […]
The tyranny of software is almost over. Since the first computer programmers wrote the first computer programs, we, the users of that software, have been forced to live in the worlds those programs create. The features are the features. The design is the design. Want something else, something better? Learn to code, I guess.
Until now, the people making a given piece of software - mostly well-paid professional developers - have rarely been the same as the ones using it: lawyers, doctors, churches, schools, me. (Where they overlap most directly is with developer tools, which are often the best and most passionately designed software you'll …
Read the full story at The Verge.
A new paper on arXiv this week describes an AI system that builds, improves, and deploys its own specialist agents. Here is what that actually means for engineers and technical teams.
If you haven’t heard of Arm, you haven’t been paying attention to how ubiquitous the chipmaker has become. Arm’s processor designs power Macs, iPhones, and every other major smartphone line. Queries made through ChatGPT, Gemini, or Claude pass through an Arm-based chip at some point.
For more than 40 years, Arm’s focus was on chip design. Major device and AI chip makers then licensed those designs and turned them into hardware.
But the company’s focus is changing: Arm is now making hardware built around its own CPU, which OpenAI and Meta will use, and which puts the chipmaker in competition with the likes of Apple, Intel, Nvidia, Amazon, and Google.
Arm envisions its new Performix software suite using “recipes” and AI insights to help engineers identify suspect code and CPU hotspots.
Alex Spinelli, who leads Arm’s software initiatives as senior vice president for AI and developer platforms, is as AI-native an engineer as you’ll find; he played a central role in the TensorFlow st
GRASP is a new gradient-based planner for learned dynamics (a “world model”) that makes long-horizon planning practical by (1) lifting the trajectory into virtual states so optimization is parallel across time, (2) adding stochasticity directly to the state iterates for exploration, and (3) reshaping gradients so that actions receive clean signals while avoiding brittle “state-input” gradients through high-dimensional vision models.
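The three ideas can be sketched in miniature. The snippet below is a hedged illustration, not the paper’s algorithm: it stands in a toy linear model f(s, a) = As + Ba for the learned world model, treats the whole state trajectory as free “virtual state” variables optimized in parallel with the actions, injects decaying noise into the state iterates for exploration, and gives actions only local gradients through each step’s consistency residual rather than gradients chained through a long rollout. All names and constants (A, B, lam, T) are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a learned world model: f(s, a) = A s + B a (row-vector convention).
rng = np.random.default_rng(0)
n, m, T = 2, 2, 10
A, B = np.eye(n), 0.5 * np.eye(n)
s0 = np.zeros(n)
goal = np.array([1.0, 1.0])
lam, lr = 1.0, 0.1          # consistency weight, step size (illustrative values)

# Decision variables: virtual states s_1..s_T (S[t] holds s_{t+1}) and actions a_0..a_{T-1}.
S = np.zeros((T, n))
U = np.zeros((T, m))

for k in range(2000):
    prev = np.vstack([s0, S[:-1]])           # s_0 .. s_{T-1}
    r = S - (prev @ A.T + U @ B.T)           # per-step dynamics residuals, all t in parallel
    gS = 2 * lam * r.copy()                  # gradient of lam * ||r_t||^2 w.r.t. s_{t+1}
    gS[:-1] -= 2 * lam * r[1:] @ A           # coupling from the next step's residual
    gS[-1] += 2 * (S[-1] - goal)             # terminal goal cost on the last virtual state
    gU = -2 * lam * r @ B                    # actions see only the local residual, not a
                                             # gradient chained through the whole rollout
    noise = 0.05 * 0.99**k                   # decaying exploration noise on state iterates
    S += -lr * gS + noise * rng.standard_normal(S.shape)
    U += -lr * gU
```

Because every residual depends only on adjacent variables, the updates are computed for all timesteps at once, and the decaying noise on `S` plays the role of the stochastic state iterates described above.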
Large, learned world models are becoming increasingly capable. They can predict long sequences of future observations in high-dimensional visual spaces and generalize across tasks in ways that were difficult to imagine a few years ago. As these models scale, they start to look less like task-specific predictors and more like general-purpose simulators.
But having a powerful predictive model is not the same as being able to use it effectively for control, learning, and planning. In practice, long-horizon planning with modern world models remains fragile: optimi
The system’s power is comparable to others – but it still has frightening implications for the future of hacking
Last month, Anthropic made a remarkable announcement about its new model, Claude Mythos Preview: it was so good at finding security vulnerabilities in software that the company would not release it to the general public. Instead, it would only be available to a select group of companies to scan and fix their own software.
The announcement requires context – but it contained an essential truth.