Databricks and NVIDIA to optimise data and AI workloads

Databricks is working with NVIDIA to optimise data and AI workloads on Databricks’ Data Intelligence Platform.

Organisations are using the platform to build and customise generative AI (GenAI) solutions trained on their data and tailored to their business and domain.

Databricks Mosaic AI and NVIDIA are collaborating on model training and inference to advance the state of building and deploying GenAI models on Databricks’ end-to-end platform.

For GenAI model training, Databricks Mosaic AI relies on NVIDIA H100 Tensor Core GPUs, which are optimised for developing large language models (LLMs). Mosaic AI harnesses NVIDIA accelerated computing to offer an efficient and scalable platform for customising LLMs.
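For readers who want a concrete picture, a customisation run of this kind is typically launched from a Databricks workspace. The snippet below is a minimal sketch, assuming the Databricks foundation model fine-tuning Python client (`databricks.model_training`) is available and the workspace is authenticated; the model name, data path, registration target and training duration are illustrative assumptions, not values taken from this announcement.

```python
# Minimal sketch, assuming the Databricks foundation model fine-tuning
# Python client is installed and workspace authentication is configured.
# The model name, data path, target and duration are illustrative assumptions.
from databricks.model_training import foundation_model as fm

run = fm.create(
    model="meta-llama/Llama-3.1-8B",                   # base model to customise (assumed)
    train_data_path="catalog.schema.training_table",   # Unity Catalog training data (assumed)
    register_to="catalog.schema",                      # where the tuned model is registered (assumed)
    training_duration="3ep",                           # train for three epochs (assumed)
)
print(run.name)  # GPU provisioning and scaling are handled by the platform
```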

For model deployment, Databricks leverages NVIDIA accelerated computing and software throughout the stack. A key component of Databricks’ Mosaic AI Model Serving is NVIDIA TensorRT-LLM software.
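Once a model is deployed behind Mosaic AI Model Serving, applications reach it over a REST endpoint. The following is a minimal sketch of that call pattern; the workspace URL, endpoint name and prompt are placeholder assumptions rather than real values, and the access token is read from the environment.

```python
# Minimal sketch of querying a Databricks Model Serving endpoint over REST.
# The workspace URL, endpoint name and prompt are placeholder assumptions.
import os
import requests

WORKSPACE_URL = "https://<your-workspace>.cloud.databricks.com"  # placeholder
ENDPOINT_NAME = "my-llm-endpoint"                                # placeholder
TOKEN = os.environ["DATABRICKS_TOKEN"]                           # personal access token

response = requests.post(
    f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "messages": [
            {"role": "user", "content": "Summarise last quarter's sales figures."}
        ],
        "max_tokens": 256,
    },
)
response.raise_for_status()
print(response.json())  # chat-style completion returned by the served model
```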

“From analytics use cases through AI, NVIDIA has already powered our foundational model initiatives, and with our mutual work on query acceleration, we’ll be able to demonstrate value for more enterprises,” said Ali Ghodsi, Co-founder and CEO of Databricks.

“By accelerating data processing, NVIDIA and Databricks can supercharge AI development and deployment for enterprises seeking greater insights and better outcomes with more efficiency,” said Jensen Huang, Founder and CEO of NVIDIA.
