
Google Cloud strives to make data ‘infinite’ with BigLake and new data cloud alliance



Image: Google

Google Cloud has announced a preview of its data lake storage engine, BigLake, as part of its aim to remove all “data limits” and break down the barrier between data lake and warehouse.

As Google Cloud data analytics product manager Sudhir Hasbe explained, BigLake has been designed to provide a unified interface across every storage layer, including data lakes and data warehouses, regardless of the underlying storage format.

“That’s so you don’t have to duplicate data or move it across your object stores, like Google Cloud Storage, S3, or Azure in a multi-cloud environment, and you have a single place to access all this data,” he told media in a pre-event briefing.

Hasbe added that BigLake supports open file formats such as Parquet, open source processing engines such as Apache Spark and Beam, and various table formats, including Delta Lake and Iceberg.
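To illustrate that open-format approach, a BigLake-style external table over Parquet files in object storage can be defined with BigQuery DDL along these lines. This is a hedged sketch, not from the announcement: the project, dataset, connection, and bucket names are placeholders.

```sql
-- Illustrative sketch only: project, dataset, connection, and bucket
-- names are placeholders, not details from the announcement.
CREATE EXTERNAL TABLE my_dataset.orders_biglake
WITH CONNECTION `my-project.us.my-connection`
OPTIONS (
  format = 'PARQUET',
  uris = ['gs://my-bucket/orders/*.parquet']
);
```

Defined this way, the data stays in object storage in an open format while engines that speak BigQuery SQL query it in place, rather than duplicating it into a warehouse.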

“It’s completely open,” he said.

“We’re taking innovation from Google, extending it to the open source world, and making it more open to all of our customers.”

BigLake is set to be the hub for all of Google Cloud's future investments.

“We will make sure all the different tools and components work seamlessly with BigLake going forward,” Hasbe said.

In addition, Google announced the formation of the Data Cloud Alliance, whose founding partners include Confluent, Databricks, Dataiku, Deloitte, Elastic, Fivetran, MongoDB, Neo4j, Redis, and Starburst.

Under the alliance, members will provide infrastructure, APIs, and integration support to ensure data portability and accessibility across multiple platforms, products, and environments. They will also collaborate on new, common industry data models, processes, and platform integrations to improve data portability.

“We’re committed to removing barriers to data lock-in. We’re committed to ensuring data can be accessed and processed across products, and we’re committed to putting our customers at the heart of our shared innovation,” said Gerrit Kazmaier, general manager for Google Databases, Data Analytics, and Looker.

As part of the Data Cloud Summit, the tech giant also introduced Vertex AI Workbench, which brings data and ML systems into a single interface so that teams can share a common toolset across data analytics, data science, and machine learning. It has been designed to integrate directly with Google's full suite of data and AI products, including BigQuery, Serverless Spark, and Dataproc.

“This capability allows teams to build, train, and deploy ML models in a simple, unified environment that can make them up to 5x faster than the other tools they can use,” said Harry Tappen, product manager for Google Cloud AI.

The company also announced the Vertex AI Model Registry. Now in preview, the Model Registry has been designed to be “the central repository for discovering, using, and managing machine learning models, including those stored in BigQuery ML,” Tappen said.
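For a sense of how a BigQuery ML model might surface in the registry, the sketch below trains a model in BigQuery and asks for it to be registered in the Vertex AI Model Registry via a `model_registry` option. This is an assumption-laden illustration, not the article's own example: the dataset, table, and column names are placeholders, and the exact option names should be checked against current BigQuery ML documentation.

```sql
-- Hedged sketch: dataset, table, and column names are placeholders.
-- The model_registry option is assumed here to register the trained
-- model in the Vertex AI Model Registry; verify against current docs.
CREATE OR REPLACE MODEL my_dataset.churn_model
OPTIONS (
  model_type = 'logistic_reg',
  model_registry = 'vertex_ai',
  input_label_cols = ['churned']
) AS
SELECT * FROM my_dataset.training_data;
```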

“Because this functionality makes it easier for data scientists to share models, and for application developers to use them, teams will be more empowered to turn data into real-time decisions,” Tappen added.



