Richard Socher's new $650 million startup wants to build an AI that can research and improve itself indefinitely — and he insists it will actually ship products.
The post Richard Socher Raises $650M For Recursive Superintelligence: AI That Improves Itself appeared on BitcoinEthereumNews.com.
Source: https://bitcoinworld.co.in/richard-socher-recursive-superintelligence-650m-funding/
Industry body says energy consumption driven by AI up 15% globally in two years as it warns of societal backlash
Datacentres are consuming 6% of electricity in the UK and US, with the growing strain of AI on energy supplies prompting community resistance, according to research.
The proportion of electricity used by vast warehouses stacked with microchips to power AI and the internet has risen 15% worldwide in the past two years as annual global investment in datacentres approaches $1tn (£740bn) – nearly 1% of the global economy, according to the International Data Center Association (IDCA).
Time series data is common across finance, operations, engineering, and research. These five Python scripts cover the analysis tasks that come up repeatedly.
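As a flavor of the kind of task such scripts handle, here is a minimal sketch of one recurring time-series operation: downsampling a noisy hourly series to daily means and smoothing it with a rolling average. The data, column names, and window size are illustrative, not taken from the article's scripts.

```python
import numpy as np
import pandas as pd

# Synthetic hourly series over one week: a daily cycle plus small noise
idx = pd.date_range("2024-01-01", periods=24 * 7, freq="h")
rng = np.random.default_rng(0)
values = np.sin(np.arange(len(idx)) * 2 * np.pi / 24) + rng.normal(0, 0.1, len(idx))
ts = pd.Series(values, index=idx)

# Downsample to daily means, then smooth with a 3-day rolling average;
# min_periods=1 keeps the first days from becoming NaN
daily = ts.resample("D").mean()
smooth = daily.rolling(window=3, min_periods=1).mean()

print(len(daily), int(smooth.isna().sum()))
```

The same resample/rolling pattern generalizes to upsampling, gap filling, and rolling statistics other than the mean, which is why it shows up in nearly every time-series workflow.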
Most AI agents are stuck in their ways. Built once, they repeat the same patterns regardless of the task at hand. But new research suggests a smarter path forward: agents that get sharper with every challenge they face...
World is approaching point where no one can shut down a rogue AI, says director of body behind research
It’s the stuff of science fiction cinema, or particularly breathless AI company blogposts: new research finds recent AI systems can independently copy themselves on to other computers.
In the doom scenario, this means that when the superintelligent AI goes rogue, it will escape shutdown by seeding itself across the world wide web, lurking outside the reach of frantic IT professionals and continuing to plot world domination or paving over the world with solar panels.
New research from the Oxford Internet Institute indicates that AI chatbots trained to be extra warm, friendly, and empathetic can also become less reliable, according to the BBC.
The researchers analyzed more than 400,000 responses from five different AI models from Meta, Mistral AI, Alibaba, and OpenAI. The results showed that the “kinder” versions more often gave incorrect answers, reinforced users’ misconceptions, and avoided stating uncomfortable truths.
For example, a friendlier model might respond to moon-landing conspiracy theories more cautiously rather than clearly stating that they are false.
On average, incorrect answers increased by about 7.43 percentage points when the models were made to sound warmer in tone, while cooler, more direct models made fewer mistakes. According to the researchers, AI makes the same trade-off as humans: it sometimes prioritizes being perceived as pleasant over being direct.