Intel and Facebook team up to make AI more affordable

What happens when two of the world’s top tech companies team up? The future of AI. Intel and Facebook have joined forces to develop an AI chip that is cheaper and more power-efficient than the general-purpose hardware these workloads run on today.

The device currently being developed will reportedly run pre-trained machine-learning models more efficiently. This means that less hardware and less energy will be required for AI to perform useful functions.
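"Inference" here means running a model that has already been trained: the weights are fixed, and the chip only has to evaluate the function they define. A toy sketch of the idea (the model and its numbers are invented purely for illustration):

```python
# Inference with a "pre-trained" model: the weights are fixed constants
# learned earlier; running the model is just arithmetic, no learning.
# This tiny classifier and its weights are invented for illustration.
WEIGHTS = [0.8, -0.4, 0.2]   # learned during training, now frozen
BIAS = 0.1

def predict(features):
    """Evaluate the fixed model on a new input (the inference step)."""
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return 1 if score > 0 else 0

# Inference on a new example: no training data or weight updates needed.
label = predict([1.0, 2.0, 3.0])
```

Because inference is just a fixed, repetitive computation, it is exactly the kind of workload a dedicated chip can accelerate.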

Inference with Insight

Facebook is a huge company that uses AI in almost every aspect of its service. From tagging people in photos to translating posts between languages, Zuckerberg’s company relies on specialized machine-learning algorithms. Run on generic hardware, those workloads become a significantly expensive endeavor.

Now, Facebook has direct access to Intel, one of the world’s top chipmakers. Intel lags far behind Nvidia, the market leader in AI hardware, and this new project will help keep it relevant in a changing marketplace.

The complex computations required by AI algorithms run inefficiently on general-purpose computer chips. On hardware that splits a computation across many units working in parallel, like the graphics processors Nvidia specializes in (hence its advantage), the same work runs far more smoothly. By developing a specialized core, Intel will give Facebook the ability to run its algorithms more efficiently on less hardware.
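The advantage of splitting up computations is easy to illustrate in software: the matrix arithmetic that dominates neural-network inference can be written as element-by-element steps, or as one bulk operation that parallel hardware can spread across many execution units at once. A minimal sketch (using NumPy purely as an illustration; the Intel chip itself is not programmed this way):

```python
import numpy as np

# A toy "inference" step: one dense layer applied to a batch of inputs.
# Sizes are arbitrary: 256 inputs -> 128 outputs, batch of 64 examples.
rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 128))
batch = rng.standard_normal((64, 256))

def dense_loop(x, w):
    """Element-by-element arithmetic, one step at a time, as a
    general-purpose core would naively schedule it."""
    out = np.zeros((x.shape[0], w.shape[1]))
    for i in range(x.shape[0]):
        for j in range(w.shape[1]):
            for k in range(x.shape[1]):
                out[i, j] += x[i, k] * w[k, j]
    return out

def dense_bulk(x, w):
    """The same computation expressed as a single matrix multiply,
    the form that parallel hardware (GPUs, or a dedicated inference
    chip) can split across many units simultaneously."""
    return x @ w

# Both produce the same numbers; the bulk form is what specialized
# silicon accelerates.
assert np.allclose(dense_loop(batch, weights), dense_bulk(batch, weights))
```

The two functions compute identical results; the difference is purely in how the work is expressed, which is exactly what determines whether parallel hardware can help.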

Beating the Competition

There has been a dramatic rise in the development of, and demand for, AI hardware in the past few years. Small start-ups are chasing riches in the race to build chips for AI, joined by tech giants like Google and Amazon, which are developing chips to power their cloud AI services.

The collaboration between Facebook and Intel has been kept under wraps fairly well so far. However, Intel will make the chip available to other companies later in 2019.

Once this chip hits the market, it could help smaller companies benefit from AI by making it cheaper to run. On a larger scale, lower hardware and energy demands mean the chip could help AI-integrated devices keep shrinking over time.