SoftBank profits surge on $25bn gain for OpenAI stake
Japanese group books net income of $11.6bn in fourth quarter, vastly ahead of analyst expectations
Computerworld

It feels like the world’s longest and most public divorce: in late April, Microsoft and OpenAI once again renegotiated the slow-motion breakup that has been playing out between the two companies over the last several years.

At first glance, it looks like a win-win. In the broadest terms, OpenAI gets more freedom to set its own course (it can sell its models to Microsoft competitors such as Amazon and Google, for example), while Microsoft gets a better revenue deal and first rights to the newest OpenAI technologies into the next decade. But in truth, one company got a better deal than the other. Who came out ahead? To figure that out, we first need to look at the most important details of the new agreement.

A new deal after a lot of rancor

Keep in mind that this new agreement didn’t arise from thin air. It’s a direct result of Microsoft’s threats in March to sue OpenAI when it inked a $50 billion deal with Amazon that makes the latter company the only third-party cloud provider for OpenAI’s ent
OpenAI employees sell up to $30M in shares amid AI boom (Crypto Briefing): OpenAI's secondary share sales highlight the growing trend of private companies leveraging employee equity to retain talent and drive valuations.
Family of Florida mass shooting victim sues OpenAI in US court (Crypto Briefing): The lawsuit raises critical questions about AI liability, potentially influencing future regulations and ethical standards in AI deployment.
EU confirms OpenAI offers access to cybersecurity model, Anthropic lags behind (Crypto Briefing): OpenAI's proactive collaboration with the EU may set new cybersecurity standards, influencing AI's role in regulated sectors and investor sentiment.
Google and its AI lab DeepMind are bearing down on OpenAI and Anthropic.
OpenAI launches Daybreak, a cybersecurity platform built for defense teams (Crypto Briefing): OpenAI's Daybreak could redefine cybersecurity strategies, enhancing defense capabilities and potentially shifting industry standards.
Tests of how well 19 large language models (LLMs) complete complicated multi-step tasks have shown that they are both error-prone and, in many cases, unreliable.

The findings are contained in a preprint paper, LLMs Corrupt Your Documents When You Delegate, written by Microsoft researchers Philippe Laban, Tobias Schnabel, and Jennifer Neville, based on a benchmark they created called DELEGATE-52 that allowed them to simulate workflows that might be part of a knowledge worker’s tasks. The paper is currently under review.

They said the benchmark contains 310 work environments across 52 professional domains, including coding, crystallography, genealogy, and music sheet notation. Each environment consists of real documents totaling around 15K tokens in length, and five to 10 complex editing tasks that a user might ask an LLM to perform. As they stated in the paper’s abstract: “Our analysis shows that current LLMs are unreliable delegates: they introduce sparse but severe errors
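To make the benchmark's structure concrete, here is a minimal sketch of what a DELEGATE-52-style evaluation could look like. The paper's actual data format and scoring method are not described here, so the names below (WorkEnvironment, score_edit) and the diff-based scoring are illustrative assumptions, not the researchers' implementation.

```python
# Hypothetical sketch of a DELEGATE-52-style evaluation harness.
# Assumption: each environment bundles real documents with editing tasks,
# and a model's edited output is compared against a known-good reference.
from dataclasses import dataclass, field
import difflib


@dataclass
class WorkEnvironment:
    domain: str                       # e.g. "coding", "genealogy"
    documents: dict[str, str]         # filename -> document text (~15K tokens in the paper)
    tasks: list[str] = field(default_factory=list)  # 5-10 editing instructions


def score_edit(original: str, reference: str, model_output: str) -> float:
    """Crude stand-in for the paper's error analysis: line-level similarity
    between the model's edited document and a reference edit (1.0 = identical)."""
    matcher = difflib.SequenceMatcher(
        None, reference.splitlines(), model_output.splitlines()
    )
    return matcher.ratio()


# Toy usage: a faithful edit scores 1.0; an edit that silently garbles an
# untouched line scores lower -- the sparse-but-severe failure mode the
# paper describes.
env = WorkEnvironment(
    domain="coding",
    documents={"notes.md": "alpha\nbeta\ngamma"},
    tasks=["Rename 'beta' to 'delta' everywhere."],
)
reference = "alpha\ndelta\ngamma"
good = "alpha\ndelta\ngamma"
corrupted = "alpha\ndelta\ngamna"  # severe: corrupts a line the task never mentioned

assert score_edit(env.documents["notes.md"], reference, good) == 1.0
assert score_edit(env.documents["notes.md"], reference, corrupted) < 1.0
```

The key point the sketch captures is that errors like the corrupted line above are easy to miss by eye in a 15K-token document, which is why the paper characterizes current LLMs as unreliable delegates.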