Rapid Packed Math: Fast FP16 Comes to Consumer Cards (& INT16 Too!)

Arguably AMD's marquee feature from a compute standpoint for Vega is Rapid Packed Math, which is AMD's name for packing two FP16 operations inside of a single FP32 operation in a vec2 style. This is similar to what NVIDIA has done with their high-end Pascal GP100 GPU (and Tegra X1 SoC), which allows for potentially massive improvements in FP16 throughput. If a pair of instructions is compatible – and by compatible, vendors usually mean instruction-type identical – then those instructions can be packed together on a single FP32 ALU, increasing the number of lower-precision operations that can be performed in a single clock cycle. This is an extension of AMD's FP16 support in GCN 3 & GCN 4, where the company supported FP16 data types for the memory/register space savings, but FP16 operations themselves were processed no faster than FP32 operations.

The purpose of integrating fast FP16 and INT16 math is all about power efficiency. Processing data at a higher precision than necessary needlessly burns power, as the extra work required for the increased precision accomplishes nothing of value. In this respect, fast FP16 math is another step in GPU designs becoming increasingly min-maxed: the ceiling for GPU performance is power consumption, so the more energy efficient a GPU can be, the more performant it can be.

Taking advantage of this feature, in turn, requires several things. It requires API support and it requires compiler support, but above all it requires code that explicitly asks for FP16 data types. That matters for two reasons: virtually no existing programs use FP16s, and not everything that is FP32 is suitable for FP16. In the compute world especially, precisions are picked for a reason, and compute users can be quite fussy on the matter, which is why fast FP64-capable GPUs are a whole market unto themselves. That said, there are whole categories of compute tasks where the high precision isn't necessary; deep learning is the poster child right now, and for Vega Instinct AMD is practically banking on it.

As for gaming, the situation is more complex still. While FP16 operations can be used for games (and in fact are somewhat common in the mobile space), in the PC space they are virtually never used.
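To make the vec2-style packing concrete, here is a minimal CUDA sketch using NVIDIA's __half2 type (the GP100 comparison point above); Vega exposes the equivalent capability through its own compilers and APIs rather than CUDA, so treat this purely as an illustration of the concept. The kernel name, sizes, and launch parameters are invented for the example. The key point is that the code explicitly asks for FP16 data: each 32-bit register then carries two half-precision values, and one packed instruction performs two operations per clock. Running it requires a GPU with native FP16 arithmetic (compute capability 5.3+; GP100 is 6.0) and a reasonably recent CUDA toolkit.

// Minimal packed-FP16 sketch. Build with: nvcc -arch=sm_60 packed_fp16.cu
#include <cuda_fp16.h>
#include <cstdio>

// y = a*x + y over pairs of FP16 values. Each __half2 holds two FP16
// numbers in one 32-bit register, so each __hfma2 issues two
// multiply-adds down a single FP32-width ALU path.
__global__ void axpy_fp16x2(int n2, __half2 a, const __half2 *x, __half2 *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2)
        y[i] = __hfma2(a, x[i], y[i]);   // one packed FMA = two FP16 FMAs
}

int main()
{
    const int n2 = 1 << 20;              // 1M __half2 elements = 2M FP16 values
    __half2 *x, *y;
    cudaMallocManaged(&x, n2 * sizeof(__half2));
    cudaMallocManaged(&y, n2 * sizeof(__half2));
    for (int i = 0; i < n2; ++i) {
        x[i] = __floats2half2_rn(1.0f, 2.0f);   // pack two floats into FP16x2
        y[i] = __floats2half2_rn(3.0f, 4.0f);
    }
    __half2 a = __floats2half2_rn(0.5f, 0.5f);  // "scalar" broadcast to both lanes

    axpy_fp16x2<<<(n2 + 255) / 256, 256>>>(n2, a, x, y);
    cudaDeviceSynchronize();

    // Unpack and check the first pair: expect 0.5*1+3 = 3.5 and 0.5*2+4 = 5.0
    printf("%f %f\n", __low2float(y[0]), __high2float(y[0]));
    cudaFree(x);
    cudaFree(y);
    return 0;
}

Note how nothing here is automatic: the programmer had to choose __half2 over float throughout, which is exactly the "code that explicitly asks for FP16" requirement described above.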
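The INT16 half of the feature has the same flavor. As a sketch of what packed 16-bit integer math looks like in code, CUDA exposes per-halfword SIMD intrinsics such as __vadd2, which adds the two 16-bit integers held in each 32-bit word; the caveat is that on most NVIDIA hardware these intrinsics are emulated with multiple instructions, whereas Vega's Rapid Packed Math is meant to execute packed INT16 natively. The kernel below is an invented illustration, not AMD's or NVIDIA's reference code.

// Two 16-bit integers packed into each 32-bit word; __vadd2 adds the
// corresponding halfwords lane-wise.
__global__ void add_int16x2(int n, const unsigned int *x,
                            const unsigned int *y, unsigned int *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __vadd2(x[i], y[i]);   // lane 0 + lane 0, lane 1 + lane 1
}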