Groq is a real-time AI inference company and the creator of the LPU Inference Engine, the fastest language processing accelerator on the market. Architected from the ground up, it delivers low-latency, energy-efficient, and repeatable inference performance at scale. Customers rely on the LPU Inference Engine as an end-to-end solution for running Large Language Models and other GenAI applications at 10x the speed.