MRC (Multipath Reliable Connection) is a new open networking protocol developed by OpenAI in partnership with AMD, Broadcom, Intel, Microsoft, and NVIDIA that improves GPU networking performance and resilience in large-scale AI training clusters. It spreads packets across hundreds of paths simultaneously, recovers from network failures in microseconds, and enables supercomputers with more than 100,000 GPUs to be built using only two tiers of Ethernet switches.
The post OpenAI Introduces MRC (Multipath Reliable Connection): A New Open Networking Protocol for Large-Scale AI Supercomputer Training Clusters appeared first on MarkTechPost.
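The two behaviors the blurb highlights, spraying packets across many paths and pruning failed paths quickly so traffic continues on the survivors, can be sketched in miniature. This is a hypothetical illustration of the general multipath idea, not OpenAI's MRC implementation; the class and method names are invented for the example.

```python
import random


class MultipathSender:
    """Toy sketch of multipath packet spraying with fast failover.

    Each packet is sent over a randomly chosen live path, and a path
    reported as failed is removed immediately so subsequent packets
    avoid it. Real protocols add sequencing, retransmission, and
    congestion control on top of this basic idea.
    """

    def __init__(self, paths):
        self.live = set(paths)  # paths currently believed healthy

    def send(self, packet):
        # Spray: pick any live path per packet instead of pinning a flow.
        path = random.choice(sorted(self.live))
        return path, packet

    def mark_failed(self, path):
        # Failover: stop using a dead path; survivors absorb the load.
        self.live.discard(path)
```

A sender built over eight paths keeps transmitting on the remaining seven the moment one is marked failed, which is the property that lets a fabric tolerate link loss without stalling a training job.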
AI technology is advancing in leaps and bounds, yet that doesn’t mean we always want revolutionary features from it. What most users want more of are simple capabilities that help with everyday tasks, whether in the office, at home, or anywhere else. Along those lines, OpenAI may have just come up with […]
The post ChatGPT is Now Inside Excel and Google Sheets: Here is How to Use it appeared first on Analytics Vidhya.
Parloa leverages OpenAI models to power scalable, voice-driven AI customer service agents, enabling enterprises to design, simulate, and deploy reliable, real-time interactions.
In the second week of the trial pitting Elon Musk against OpenAI CEO Sam Altman, former board member Shivon Zilis took the stand before judge and jury. Zilis is romantically involved with Musk, who is the father of her four children. In this edition, we look back at what pushed the tech magnate to file this lawsuit in the first place, and put in context Zilis's testimony that Musk wanted OpenAI to become a subsidiary of Tesla. Also in this segment: FIFA boss Gianni Infantino defends the 2026 World Cup's high ticket prices.
Elon Musk’s AI ambitions are converging on multiple fronts simultaneously. SpaceX is considering spending up to $119 billion on a semiconductor facility in Grimes County, Texas, dubbed “Terafab” — a vertically integrated chip manufacturing complex developed alongside Tesla and Intel. The facility is intended to produce chips for AI servers, satellites, autonomous vehicles, and SpaceX’s proposed orbital […]
Zyphra releases ZAYA1-8B, a reasoning Mixture of Experts model with only 760M active parameters that outperforms open-weight models many times its size on math and coding benchmarks — closing in on DeepSeek-V3.2 and surpassing Claude 4.5 Sonnet on HMMT'25 with its novel Markovian RSA test-time compute method. Trained end-to-end on AMD Instinct MI300 hardware and released under Apache 2.0, it sets a new standard for intelligence density in the small language model weight class.
The post Zyphra Releases ZAYA1-8B: A Reasoning MoE Trained on AMD Hardware That Punches Far Above Its Weight Class appeared first on MarkTechPost.
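The "only 760M active parameters" claim above rests on the Mixture-of-Experts design: a gating network scores all experts per input, but only a small top-k subset actually runs, so the active compute is a fraction of the total parameter count. The sketch below shows generic top-k MoE routing with NumPy; it is illustrative only and does not reflect ZAYA1's actual architecture, and all names are invented for the example.

```python
import numpy as np


def topk_moe_layer(x, gate_w, expert_ws, k=2):
    """Toy top-k Mixture-of-Experts layer.

    A linear gate scores every expert for input x, only the k
    highest-scoring experts are evaluated, and their outputs are
    mixed with softmax-normalized gate weights. Experts not in the
    top-k contribute no compute, which is why an MoE's "active"
    parameter count is far below its total parameter count.
    """
    scores = x @ gate_w                     # one logit per expert
    chosen = np.argsort(scores)[-k:]        # indices of the k best experts
    w = np.exp(scores[chosen] - scores[chosen].max())
    w /= w.sum()                            # softmax over chosen experts only
    # Run only the chosen experts and blend their outputs.
    return sum(wi * (x @ expert_ws[i]) for wi, i in zip(w, chosen))
```

With 8 experts and k=2, only a quarter of the expert parameters touch any given token, mirroring how an 8B-parameter MoE can have well under 1B active parameters per forward pass.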
The Center for AI Standards and Innovation (CAISI), a division of the US Department of Commerce, has signed agreements with Google DeepMind, Microsoft, and xAI that would give the agency the ability to vet AI models from these organizations and others before they are made publicly available.
According to a release from CAISI, which is part of the department’s National Institute of Standards and Technology (NIST), it will “conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security.”
The three join Anthropic and OpenAI, which signed similar agreements almost two years ago during the Biden administration, when CAISI was known as the US Artificial Intelligence Safety Institute.
An August 2024 release about those agreements indicated that the institute planned to provide feedback to both companies on “potential safety improvements to their models, in close collaboration with its partners at the UK AI Safety Institute.”