Image: Nvidia chief executive Jen-Hsun Huang holds up one of the first graphics cards based on the new Fermi chipset.
While the new ATI Radeon HD5850 is winning accolades all over the Internet for its great performance, Nvidia isn’t one to be left behind. At its GPU Technology Conference in San Jose, California, the company announced yesterday that it has created a next-generation graphics chipset, code-named ‘Fermi’.
Fermi packs more than 3 billion transistors and 512 parallel processors, more than twice the number in last year’s chip, said Jen-Hsun Huang, chief executive of Nvidia. Those 3 billion transistors are a significant leap over the Radeon HD5850’s 2.15 billion!
Here are some of the salient features of the new technology:
- C++, complementing existing support for C, Fortran, Java, Python, OpenCL and DirectCompute
- ECC, a critical requirement for datacenters and supercomputing centers deploying GPUs on a large scale
- 512 CUDA Cores featuring the new IEEE 754-2008 floating-point standard, surpassing even the most advanced CPUs
- 8x the peak double precision arithmetic performance over NVIDIA’s last generation GPU. Double precision is critical for high-performance computing (HPC) applications such as linear algebra, numerical simulation, and quantum chemistry
- NVIDIA Parallel DataCache – the world’s first true cache hierarchy in a GPU that speeds up algorithms such as physics solvers, raytracing, and sparse matrix multiplication where data addresses are not known beforehand
- NVIDIA GigaThread Engine with support for concurrent kernel execution, where different kernels of the same application context can execute on the GPU at the same time (e.g. PhysX fluid and rigid-body solvers); a minimal stream-based sketch follows this list
- Nexus – the world’s first fully integrated heterogeneous computing application development environment within Microsoft Visual Studio
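To make the concurrent-kernel feature a little more concrete, here is a minimal sketch in CUDA C. It assumes two independent toy kernels standing in for separate solvers (the kernel names, sizes and data are invented for illustration and are not Nvidia’s code); launching each one into its own CUDA stream is what gives the hardware scheduler the chance to overlap them on chips that support concurrent kernel execution.

```
#include <cuda_runtime.h>

// Two independent toy kernels standing in for separate solvers
// (the real PhysX kernels are not public; these are placeholders).
__global__ void scaleKernel(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

__global__ void offsetKernel(float *data, int n, float offset)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += offset;
}

int main()
{
    const int n = 1 << 20;
    float *a = 0, *b = 0;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    // Separate streams let the GPU scheduler run the two kernels
    // concurrently when the hardware supports it; on older chips they
    // simply execute one after the other.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scaleKernel<<<blocks, threads, 0, s1>>>(a, n, 2.0f);
    offsetKernel<<<blocks, threads, 0, s2>>>(b, n, 1.0f);

    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```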
For the geeks who would like extreme details about the new technology, PC Perspective has a great in-depth article. In layman’s terms, the new Fermi chipset will introduce the ‘GT300’ series – the big brother of the currently reigning GT200 series, with heavy performers like the GTX285 and GTX295.
Fermi also introduces a new architectural base for the graphics chipset, which gives it a massive boost in computing power. While the GT200 series is extremely powerful, it is still loosely based on the old G80 architecture; Fermi, on the other hand, is a ground-up redesign.
The new graphics chip will also be tied heavily into Nvidia’s CUDA technology, using it for general-purpose parallel processing. The chip is big in part because it is built around Nvidia’s CUDA programming environment, which uses a graphics chip’s processors for non-graphics tasks.
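As a rough sketch of what such a ‘non-graphics task’ looks like in CUDA, here is a simple double-precision y = a·x + y kernel of the kind used in linear algebra. The names and sizes are illustrative assumptions rather than code from Nvidia, but this is the style of HPC workload the double-precision improvements listed above are aimed at.

```
#include <cuda_runtime.h>

// Simple double-precision y = a*x + y kernel (DAXPY-style), the kind of
// linear-algebra building block HPC codes run on the GPU instead of
// drawing pixels with it.
__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    double *x = 0, *y = 0;
    cudaMalloc(&x, n * sizeof(double));
    cudaMalloc(&y, n * sizeof(double));
    // (Host-side initialisation and copies omitted to keep the sketch short.)

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    daxpy<<<blocks, threads>>>(n, 2.0, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```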
Huang believes that CUDA is the path for Nvidia’s chips to be used far more pervasively, for purposes such as medical imaging and video encoding. So apart from gaming, Fermi has major implications for academic and scientific pursuits.
As for the big question of availability, it looks like enthusiasts will have to wait at least a few months for graphics cards based on the Fermi chipset to materialise. So, will you wait for the GT300 series, or are you going to snap up a Radeon HD5850? Leave a comment to let us know what you think.
[Source: Digit]