Microsoft expects to cut the number of servers associated with its Bing search queries in half, thanks to custom-programmed chips working in conjunction with Intel's Xeon processors. The Redmond, Washington, software giant has programmed field-programmable gate arrays, or FPGAs, that it hopes to use to speed up Bing searches, and it plans to put them in production soon, according to a story in Wired.

The story highlights Project Catapult, a network of machines that Microsoft's Research team is building to run the search algorithms that determine which pages Bing lists for a given query. The machines are being tested now, but Microsoft expects them to handle actual search queries next year. From the story:

Using FPGAs, Microsoft engineers are building a kind of super-search machine network they call Catapult. It's comprised of 1,632 servers, each one with an Intel Xeon processor and a daughter card that contains the Altera FPGA chip, linked to the Catapult network. The system takes search queries coming from Bing and offloads a lot of the work to the FPGAs, which are custom-programmed for the heavy computational work needed to figure out which webpage results should be displayed in which order. Because Microsoft's search algorithms require such a mammoth amount of processing, Catapult can bundle the FPGAs into mini-networks of eight chips.
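To make the topology concrete, here is a minimal sketch (not Microsoft's code) of the arrangement the story describes: 1,632 servers, each pairing a Xeon CPU with one FPGA, with the FPGAs bundled into mini-networks of eight for heavy ranking jobs. The function names and the query-offload structure are illustrative assumptions.

```python
# Illustrative model of the Catapult topology described above.
# Numbers come from the Wired story; everything else is an assumption.

TOTAL_SERVERS = 1632   # each server hosts one Xeon CPU + one Altera FPGA
FPGA_GROUP_SIZE = 8    # FPGAs are bundled into mini-networks of eight

def fpga_groups(total_servers: int, group_size: int) -> list[list[int]]:
    """Partition the server-attached FPGAs into fixed-size mini-networks."""
    return [list(range(start, start + group_size))
            for start in range(0, total_servers, group_size)]

def offload_query(query: str, group: list[int]) -> dict:
    """Hand the compute-heavy ranking stage to one FPGA mini-network;
    the host CPU keeps the rest of the pipeline (parsing, serving)."""
    return {"query": query, "ranked_on_fpgas": group}

groups = fpga_groups(TOTAL_SERVERS, FPGA_GROUP_SIZE)
print(len(groups))  # 204 mini-networks of eight FPGAs each
```

The point of the grouping is that a single ranking job too large for one FPGA can be spread across a ring of eight, while the CPU stays free for the rest of the query pipeline.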

Like some of the custom chips designed for mining Bitcoins, the key to FPGAs, which are expensive, is that they can be programmed to do one set of tasks extremely well. And if that task is a large enough consumer of compute power, like mining Bitcoins or crunching search algorithms, then designing a highly specialized processor for it can pay off. That's counterintuitive if we're viewing the world through the old-school enterprise IT lens, where computers had to run many apps well because each enterprise had to support a stable of them. But now there's a twofold shift that changes how big computing customers view their hardware.

The first is that large webscale providers can segment workloads and thus develop specialized hardware for specific apps spread across a variety of users. The second is that these companies provide infrastructure as a service, which means the cost of computing is the primary cost of their business. Thus, capital-intensive investments that lower those costs can pay off in a big way.

For example, it can cut the number of servers, and the associated costs, of running search queries. According to Doug Burger, the Microsoft Research employee quoted in the Wired story, the FPGAs are 40 times faster than a generic Xeon CPU at running Microsoft's algorithms. While that won't translate into an equivalent reduction in the time to deliver results, or in the number of machines needed to process them, he does think it could cut the number of servers needed in half. And every server you cut means a reduction in the cost of powering that server and a similar reduction in the cost of cooling it -- a big deal across millions of servers.
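A quick back-of-the-envelope calculation shows why a 40x speedup on one stage doesn't mean 40x fewer servers: the FPGA only accelerates the ranking portion of each query, and Amdahl's law caps the overall gain. The offloadable fraction used below is an assumption for illustration; neither Microsoft nor Wired published that number.

```python
# Amdahl's-law sketch: if a fraction p of per-query work is accelerated
# by stage_speedup, the overall speedup is bounded well below stage_speedup.

def overall_speedup(p: float, stage_speedup: float) -> float:
    """Overall speedup when fraction p of the work runs stage_speedup faster."""
    return 1.0 / ((1.0 - p) + p / stage_speedup)

# Assumption: roughly half of each query's work is FPGA-accelerated ranking.
s = overall_speedup(p=0.5, stage_speedup=40.0)
print(round(s, 2))  # ~1.95x overall -- consistent with "half the servers"
```

In other words, a ~2x overall throughput gain, and thus roughly half the servers, is exactly what you'd expect from a 40x stage speedup if about half the work is offloadable.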

That explains why Microsoft and other webscale giants, from Amazon to Google, are investigating different chip architectures for their servers. And Microsoft's decision to test FPGAs is doubly interesting because they can actually be reprogrammed when the company's algorithms change, making them a costly but flexible option. And if there's one thing we know about the cloud, it's that flexibility trumps cost.

This may be a problem for Intel, which isn't losing a customer but is losing out if using FPGAs means Microsoft buys half as many servers (and the Xeon chips inside them). But Intel is also doing its best to design custom chips for its webscale clients, such as eBay, so perhaps it will offer a tweaked design that either outperforms the FPGA at a higher margin or eventually supplants it. We can ask Diane Bryant, senior vice president and general manager of Intel's Data Center Group, at our Structure event this week.