A new player has entered the machine learning arena. While heavyweights such as Google, Intel, and NVIDIA have been slugging it out, Amazon has thrown its hat into the ring with its newly launched Inferentia chip.
Just like Google, which introduced its Tensor Processing Unit (TPU) in 2016, Amazon now has its own chip to power its cloud computing services. It may no longer need to rely on chips from Intel and NVIDIA.
Inferentia is a machine learning inference chip designed to deliver high performance at low cost. It will support the TensorFlow, Apache MXNet, and PyTorch deep learning frameworks, as well as models that use the ONNX format.
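To make that framework support concrete, here is a minimal sketch of the kind of workflow ONNX enables: a model trained in one framework (PyTorch here) is exported to the common interchange format, which a chip like Inferentia could then consume. Amazon has not published Inferentia's deployment API, so the model, file name, and tensor names below are purely illustrative.

```python
# A minimal sketch: exporting a toy PyTorch model to ONNX, the
# interchange format Inferentia is slated to accept alongside
# native TensorFlow, MXNet, and PyTorch models.
import torch
import torch.nn as nn

# A toy classifier standing in for a real trained model.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()  # inference mode: freezes dropout/batch-norm behavior

# ONNX export traces the model with a sample input of the right shape.
dummy_input = torch.randn(1, 784)
torch.onnx.export(
    model,
    dummy_input,
    "classifier.onnx",      # hypothetical output path
    input_names=["pixels"],
    output_names=["logits"],
)
```

The appeal of this arrangement is that the training framework and the inference hardware are decoupled: the same .onnx file can, in principle, be served on whichever runtime or accelerator offers the best price and performance.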
According to Amazon, some inference workloads require an entire graphics processing unit (GPU) or have extremely low latency requirements. Solving this challenge at low cost calls for a dedicated inference chip.
In April, e-commerce giant Alibaba announced plans to design its own artificial intelligence (AI) processors.
These moves, costly as they are for the tech companies, point to the growing role AI will play in the years ahead.