Parloa builds service agents customers want to talk to
Parloa leverages OpenAI models to power scalable, voice-driven AI customer service agents, enabling enterprises to design, simulate, and deploy reliable, real-time interactions.
ComputerWorld AI

The ongoing shift from generative AI (genAI) to agentic AI gives enterprises an opportunity to move to more nimble and less expensive forms of computing, according to analysts. Early AI models were largely built on expensive GPUs from Nvidia and AMD that offered raw processing power. But newer agentic AI tools, rooted in business process and workflow management, can run on more efficient, cost-effective hardware.

As a result, IT decision-makers who still think they require GPUs for anything AI-related need to reconsider their hardware options in terms of both cost and capabilities, analysts said.

“A better way of thinking about this is the cost of AI compute and now agentic AI platform services or systems,” said Leonard Lee, principal analyst at Next Curve. “‘AI computing’ or ‘accelerated computing’ has clearly transcended the GPU as an inference accelerator.”

The new hardware options include CPUs and specialized AI chips, also known as ASICs in semiconductor parlance.
MRC (Multipath Reliable Connection) is a new open networking protocol developed by OpenAI in partnership with AMD, Broadcom, Intel, Microsoft, and NVIDIA. It improves GPU networking performance and resilience in large-scale AI training clusters by spreading packets across hundreds of paths simultaneously, recovering from network failures in microseconds, and enabling supercomputers with more than 100,000 GPUs to be built using only two tiers of Ethernet switches. The post OpenAI Introduces MRC (Multipath Reliable Connection): A New Open Networking Protocol for Large-Scale AI Supercomputer Training Clusters appeared first on MarkTechPost.
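To make the packet-spraying idea concrete, here is a toy sketch of a sender that sprays packets round-robin across many paths and reroutes when a path fails. The class name, the round-robin policy, and the failure handling are illustrative assumptions; the actual MRC wire protocol and its recovery mechanism are not modeled here.

```python
class MultipathSender:
    """Toy model of multipath packet spraying with failover.

    Assumption: real MRC-style transports detect failures and reroute
    in microseconds; here a failed path is simply dropped from rotation.
    """

    def __init__(self, num_paths):
        self.healthy = set(range(num_paths))
        self.next_idx = 0

    def mark_failed(self, path):
        # Remove a dead link from the spray rotation.
        self.healthy.discard(path)

    def send(self, num_packets):
        """Assign packet sequence numbers round-robin over healthy paths."""
        assignments = {}
        live = sorted(self.healthy)
        for seq in range(num_packets):
            path = live[self.next_idx % len(live)]
            self.next_idx += 1
            assignments.setdefault(path, []).append(seq)
        return assignments

sender = MultipathSender(num_paths=4)
sender.mark_failed(2)          # simulate a link failure
out = sender.send(9)           # 9 packets over the 3 surviving paths
assert 2 not in out            # failed path carries no traffic
assert sum(len(v) for v in out.values()) == 9
```

Spraying at packet (rather than flow) granularity is what lets traffic spread across hundreds of paths at once, so a single link failure costs only the packets in flight on that link.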
Zyphra releases ZAYA1-8B, a reasoning Mixture of Experts model with only 760M active parameters that outperforms open-weight models many times its size on math and coding benchmarks — closing in on DeepSeek-V3.2 and surpassing Claude 4.5 Sonnet on HMMT'25 with its novel Markovian RSA test-time compute method. Trained end-to-end on AMD Instinct MI300 hardware and released under Apache 2.0, it sets a new standard for intelligence density in the small language model weight class.
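The reason an 8B-parameter MoE can have only 760M active parameters is that each token is routed to a small top-k subset of experts, so only that subset's weights participate in the forward pass. The following minimal sketch shows generic top-k gating with single-layer experts; the sizes and gating details are illustrative assumptions, not ZAYA1's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 16, 2   # illustrative sizes, not ZAYA1's

# Gate and expert weights; each "expert" here is one linear layer.
W_gate = rng.normal(size=(d_model, n_experts))
W_experts = rng.normal(size=(n_experts, d_model, d_model))

def moe_forward(x):
    """Route a token to its top-k experts and mix their outputs.

    Only top_k of n_experts run per token, which is why a model's
    active parameters can be a small fraction of its total parameters.
    """
    logits = x @ W_gate
    chosen = np.argsort(logits)[-top_k:]      # indices of top-k experts
    gates = np.exp(logits[chosen])
    gates /= gates.sum()                      # softmax over chosen gates
    return sum(g * (x @ W_experts[e]) for g, e in zip(gates, chosen))

y = moe_forward(rng.normal(size=d_model))
assert y.shape == (d_model,)
```

Per token, only `top_k * d_model * d_model` expert weights are touched here (2/16 of the expert parameters), which is the same accounting that makes "active parameters" much smaller than total parameters.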
A familiar pattern has emerged in robotics and autonomous systems: a flagship demo runs beautifully on stage, the same system stumbles in a live warehouse two weeks later, and the post-mortem blames “reality” for being messier than the test environment. Some voices in the field argue the missing layer is hardware — better grippers, force-torque […]
The protocol is designed to improve GPU performance as AI compute ramps up.
IHS's top cyber official says AI will help security teams be more efficient, allowing analysts to work on things “we really don't want agentic AI to do.”
The companies will build optical fiber manufacturing plants to meet growing industry demand.
Akhil Docca, head of robotics product marketing at Nvidia, on how the vendor is looking to accelerate physical AI adoption.