
(3dcombinat/Shutterstock)
Early adopters of NeuroBlade’s processing-in-memory (PIM) architecture, called XRAM, have seen throughput improvements of 10x to 60x on large SQL workloads. But with the company keeping deliberately measured growth at the forefront, don’t expect the analytics-accelerating appliances to ship in bulk this year.
Elad Sity and Eliad Hillel co-founded NeuroBlade in 2018 to address the I/O bottleneck between processors and memory in data-intensive workloads. They found that standard RAM could not move data to the CPU fast enough to keep the pipeline full, leaving processor cycles on the table and analysts waiting for queries to complete.
Sity originally sought speedups using Intel’s Optane technology. It worked well for a while, but he eventually found that the same level of performance could be achieved by tweaking a standard disk drive, so he looked elsewhere for bigger gains.
Ultimately, Sity and his co-founders decided to go the custom silicon route. Building a custom-designed RISC processor and mounting it directly in memory (the PIM architecture) allows NeuroBlade to offload work from the host processor, so it runs more efficiently and gets more work done.
In addition to the XRAM modules, NeuroBlade uses NVMe drives in its Hardware Enhanced Query System (HEQS). Each HEQS appliance holds about 100 TB of data, and up to six HEQS units can be chained together for a total capacity of 600 TB, augmenting a data lake that sits next to HEQS on the LAN.

Each HEQS unit in NeuroBlade stores approximately 100 TB of data (Source: NeuroBlade)
Early results are promising, with early adopters seeing 10- to 60-fold reductions in processing time, the company says. NeuroBlade has worked closely with those early adopters (who tend to be large financial services companies running their own equipment) to ensure HEQS delivers the benefits they expect.
The technology and packaging are promising, but NeuroBlade is still in its early stages and Sity wants to ensure each customer gets the full attention of the company and succeeds with the product.
“Today’s focus is on high-end customers,” Sity tells Datanami. “It will take a few years to get to a GA system, [due to the fact] we are still a startup. It can’t support that many different use cases yet. You still learn a lot from every engagement you have, from both a product perspective and a support perspective.”
NeuroBlade is “plug and play” from the customer’s point of view, which basically means putting HEQS next to their existing data lake, but there is still a lot of complexity going on behind the scenes. Sity estimates that it takes about a month and a half of software development to build the integrations needed to support a specific database, file system, or object store with the NeuroBlade API.
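NeuroBlade’s API is not public, so as an illustration only, the per-source integration work Sity describes might resemble implementing a small connector interface per storage backend. Every name below is an assumption, not NeuroBlade’s actual API:

```python
# Hypothetical sketch of a per-backend storage connector, the kind of
# integration Sity estimates takes about six weeks per database, file
# system, or object store. The interface and class names are invented.
from abc import ABC, abstractmethod


class StorageConnector(ABC):
    """One subclass per data lake backend (object store, file system, etc.)."""

    @abstractmethod
    def list_files(self, table: str) -> list[str]:
        """Enumerate the files backing a table."""

    @abstractmethod
    def read_block(self, path: str, offset: int, length: int) -> bytes:
        """Stream a byte range from storage into the appliance."""


class InMemoryConnector(StorageConnector):
    """Toy backend over a dict standing in for a real file system."""

    def __init__(self, files: dict[str, bytes]):
        self.files = files

    def list_files(self, table: str) -> list[str]:
        return sorted(p for p in self.files if p.startswith(table + "/"))

    def read_block(self, path: str, offset: int, length: int) -> bytes:
        return self.files[path][offset:offset + length]


fs = InMemoryConnector({"orders/part-0.parquet": b"abcdef"})
```

The point of the abstraction is that the query-serving layer above it never changes; only a new `StorageConnector` subclass is written for each supported backend.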
“When we started talking to customers, we realized that what we had really built was a new infrastructure for data analytics, bringing together storage, networking, compute, and of course specific accelerators for analytics,” he says. “And the most important part is the multitude of software that coordinates all of the above.”
Because it sits directly in the query path, executing certain queries on XRAM while others are handled by the regular query engine’s processor resources, NeuroBlade needs to be 100% sure it does not change the customer’s SQL. That takes a lot of work, Sity says.

NeuroBlade’s HEQS architecture (Source: NeuroBlade)
“You can think of it as very complex software,” he tells Datanami. “Take a query, analyze it, write code that implements the query, compile it, and download it to the hardware.”
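The flow Sity describes, analyzing a query, offloading the parts the accelerator supports, and leaving the rest to the host engine, can be sketched roughly as follows. This is a minimal illustration under assumed operator names; NeuroBlade’s real compiler is proprietary:

```python
# Illustrative sketch of the analyze/compile/offload flow. The supported
# operator set and the XRAM "program" format are assumptions for the sake
# of the example, not NeuroBlade's actual capabilities.

OFFLOADABLE_OPS = {"scan", "filter", "project", "aggregate"}  # assumed

def analyze(query_plan: list[str]) -> tuple[list[str], list[str]]:
    """Split a logical plan (a list of operator names) into the prefix the
    accelerator can run and the remainder left to the host query engine."""
    offloaded = []
    for op in query_plan:
        if op not in OFFLOADABLE_OPS:
            break  # first unsupported operator ends the offloadable prefix
        offloaded.append(op)
    return offloaded, query_plan[len(offloaded):]

def compile_for_xram(ops: list[str]) -> str:
    """Stand-in for 'write code that implements the query, compile it':
    emit a pseudo-program string to be downloaded to the hardware."""
    return ";".join(f"XRAM_{op.upper()}" for op in ops)

plan = ["scan", "filter", "aggregate", "join", "sort"]
offload, host = analyze(plan)
program = compile_for_xram(offload)
```

Note that the customer’s SQL itself is never rewritten in this scheme; the split happens at the plan level, which matches Sity’s “don’t change the query” constraint.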
NeuroBlade does not support every data lake or data warehouse setup on the market. In fact, it is quite selective about the environments it runs in. So far, it has primarily been used in cloud-native data lake environments running on-premises with the Presto, Trino, Spark, and Dremio query engines. Its architecture is not suited to traditional data warehouse environments, where the compute and storage layers are tightly coupled.
“Being able to connect to the database’s query engine is not a small technical problem, because the engine’s plans change from time to time,” says Sity. “The highest priority: don’t change the query.”
Early adopters are very happy with the results so far, says Sity. They typically migrate their most critical queries to NeuroBlade, which he says represents 10% to 50% of their overall analytical query base. Early adopters can use the efficiency gains either to help analysts get results from their SQL queries faster, or to significantly shrink their existing analytics setups and save costs.
Founded in Israel, the company is growing selectively. It has raised over $110 million across its A and B rounds, and last year opened a US headquarters in Palo Alto to help it tackle the lucrative North American market.
The semiconductor industry’s supply chain problems have not been fully resolved, which does not help companies like NeuroBlade that build custom silicon. Still, Sity says the company is doing well. “Not a lot,” he says. “Sometimes we pay more…but we can manage.”
A bigger priority for the company’s future, Sity says, is ensuring each customer is successful with the product.
“We have some big deals right now, and when the right opportunity presents itself, we’re going to be very opportunistic,” he says. “But I think the world has proven that growing too fast is not very healthy.”
Related products:
NeuroBlade tackles memory and bandwidth bottlenecks with XRAM
Bridging the Constant Gap in the “Big Memory” Era
The Past and Future of In-Memory Computing
Big data, Dremio, Elad Sity, hardware accelerators, PIM, Presto, processing-in-memory, Spark, SQL analytics, Trino, XRAM