News from the world of silicon: microprocessors and new models, nanoscale manufacturing processes, artificial intelligence, AI tools and new AI software, investments, and special reports.


The GPU (Graphics Processing Unit) works in much the same way as the processor, but it is specialized for highly parallel workloads: rendering very complex geometric figures and performing large-scale mathematical calculations.
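To give a sense of the kind of work a GPU is built for, the sketch below uses NumPy purely as an illustration of data parallelism: the same arithmetic is applied independently to every element of a large array, which is exactly the pattern a GPU can map onto thousands of threads. This is ordinary CPU code, not GPU code.

```python
import numpy as np

# One million values, the same operation applied to each one.
values = np.arange(1_000_000, dtype=np.float32)

# A single vectorized expression: conceptually, each element could be
# processed by its own GPU thread at the same instant.
scaled = values * 2.0 + 1.0

print(scaled[:3])
```

A CPU would walk through such an array a few elements at a time; a GPU dispatches the same instruction across many elements in parallel, which is why it dominates in geometry and numerical workloads.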

A GPU is a printed circuit board built around its heart, the graphics processor. Around it we find the video RAM, which has the same function as the RAM allocated to the PC: it holds the data the video card needs most frequently. We then find the power connectors, where the cables running from the power supply attach to the video card, and finally the PCI Express connector, the part that links the motherboard (MoBo) to the GPU.
DIFFERENCE BETWEEN INTEGRATED AND DEDICATED GPUs

DEDICATED: A dedicated GPU is a separate component installed in the computer and devoted entirely to graphics and geometric computation.
INTEGRATED: Integrated GPUs, such as Intel's, are built directly into the processor and are therefore much less powerful than dedicated ones. Sharing the package with the CPU, they generate more heat, so they are limited at the factory to keep the processor from melting down.

Nvidia: specializes in the production of video cards and also produces graphics accelerators, such as the Nvidia GTX series and the Tesla line.

Intel: specializes in the production of processors and pairs them with integrated graphics that do not reach the level of dedicated cards, the Intel HD Graphics XXX (where XXX stands for the model/series). Beyond HD Graphics, Intel has developed the newer Iris Xe graphics processors, included in the latest-generation laptops such as the LG Gram, the Asus VivoBook and many more.

AMD: produces the famous Radeon RX 6900/6800/6700 XT cards and recently introduced Ryzen-based APUs (e.g. the Ryzen 5 5600G for desktops or the Ryzen 7 5700U for laptops).

While Google used 1,000 PCs to simulate its neural network with built-in kitten recognition, NVIDIA claims it can reduce that number to a handful through GPU-accelerated units (used in this case for GPGPU computation) based on the Kepler architecture and commonly available commercially at a (relatively) low price.

NVIDIA collaborated with researchers at Stanford University to replicate Google's setup with 6.5 times the computational capability; in this case the neural network is powered by the brute force of 16 machines equipped with GeForce GTX 680 graphics cards, in the version with 4 gigabytes of video memory.

Andrew Ng collaborated on the new GPU-supercharged artificial network; he is a researcher previously involved in Google's kitten-recognizing neural network project, unveiled last year and now outclassed by NVIDIA's new GeForce-based design.

Whether they run on CPU-based, GPU-based or hybrid-design machines, artificial neural networks already have a variety of potential applications, and more are being added all the time. Microsoft, for example, used this kind of technology to double the speed of Bing's speech recognition on Windows Phone devices and improve recognition accuracy by 15 percent.
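For a concrete sense of what these networks compute, here is a minimal two-layer feed-forward pass in plain NumPy. The layer sizes and random weights are made up for illustration; real speech-recognition networks are vastly larger and are trained, not hand-initialized, typically on GPUs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward network: 4 inputs -> 8 hidden units -> 3 output classes.
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))

def forward(x):
    """One forward pass: affine layer, ReLU, affine layer, softmax."""
    h = np.maximum(x @ W1, 0.0)        # hidden activations (ReLU)
    logits = h @ W2
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

probs = forward(np.array([0.5, -1.0, 0.25, 2.0]))
print(probs)  # three class probabilities
```

Training such a network means adjusting W1 and W2 so the output probabilities match labeled examples; that adjustment is the matrix-heavy work GPUs accelerate.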

OpenAI presents Triton 1.0, an open-source language for building neural networks on GPUs

What Triton is and what it is for: a language that opens up new scenarios in the field of artificial intelligence and neural networks.

It’s called Triton and it’s a new programming language designed to implement artificial intelligence algorithms on GPUs and build neural networks.

It was presented by the engineers of OpenAI, a startup based in San Francisco and backed by Microsoft, who above all wanted to highlight its advantages in ease of use compared with NVIDIA's CUDA programming toolkit.

OpenAI's goal is to make Triton a viable alternative to CUDA for deep learning. The new development solution is designed for researchers and engineers who work on machine learning and who, despite strong software skills, are not familiar with GPU programming.
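To give a flavor of that model: Triton lets developers write GPU kernels in Python at the level of blocks of data rather than individual threads, with out-of-bounds lanes handled by masks. The sketch below emulates that blocked, masked style in plain NumPy so it can run anywhere; it is not actual Triton syntax (real Triton kernels use the `@triton.jit` decorator and `triton.language` primitives such as `tl.load` and `tl.store`), and the block size is an illustrative choice.

```python
import numpy as np

BLOCK_SIZE = 128  # illustrative block width; real kernels tune this

def add_kernel_emulated(x, y):
    """NumPy stand-in for a Triton-style blocked vector-add kernel."""
    n = x.shape[0]
    out = np.empty_like(x)
    # Size of the launch grid: one "program instance" per block.
    num_programs = (n + BLOCK_SIZE - 1) // BLOCK_SIZE
    for pid in range(num_programs):  # on a GPU, each pid runs in parallel
        offsets = pid * BLOCK_SIZE + np.arange(BLOCK_SIZE)
        mask = offsets < n           # guard against out-of-bounds lanes
        idx = offsets[mask]
        out[idx] = x[idx] + y[idx]   # the load/compute/store of the kernel
    return out

x = np.arange(1000, dtype=np.float32)
y = np.ones(1000, dtype=np.float32)
print(add_kernel_emulated(x, y)[:3])
```

Working at block granularity is what spares Triton users the thread indexing, shared-memory management and synchronization details that CUDA programming normally requires.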

The fact that Triton was conceived by OpenAI, the company behind GPT-3, the well-known natural-language processing software, is already a guarantee of the results that can be achieved and of the impetus the new solution can give to the sector of artificial-intelligence applications.