
Realize Opportunities at the Edge with Distributed Cloud Databases



The hype around edge computing is growing, and rightfully so. By bringing computing and storage closer to where data is generated and consumed, such as IoT devices and end-user applications, organizations can deliver a low-latency experience that is reliable and highly available, even for the most data- and bandwidth-intensive applications.

While delivering a fast, reliable, immersive, and seamless customer experience is one of the key drivers of the technology, another often-underestimated reason is that edge computing helps organizations comply with strict data privacy and governance regulations that hold businesses accountable for transferring sensitive information to a central cloud.

Improved network resilience and reduced bandwidth costs also encourage adoption. In short, edge computing can enable applications that are compliant, always on, and always fast – anywhere in the world – without breaking the bank.

SEE: Research: Digital transformation initiatives focus on collaboration (TechRepublic Premium)

Unsurprisingly, market research firm IDC expects edge networks to represent over 60% of all deployed cloud infrastructure by 2023, and global spending on edge computing to reach $274 billion by 2025.

Plus, with the proliferation of IoT devices – the State of IoT Spring 2022 report estimates that around 27 billion devices will be connected to the internet by 2025 – businesses have the opportunity to leverage the technology to innovate and differentiate themselves from competitors.

In this article, I will walk you through the evolution of edge computing deployments and discuss ways to develop future-ready edge strategies.

From on-premises to the cloud

Early edge computing deployments were custom-built hybrid clouds: applications and databases ran on on-premises servers that were deployed and managed by each company, backed by a cloud data center. In many cases, a rudimentary batch file-transfer system moved data between the on-premises servers and the backing data center.

Between capital and operational costs, scaling and managing on-premises data centers can be out of reach for many organizations. Not to mention, there are use cases such as offshore oil rigs and aircraft where setting up on-site servers simply isn’t feasible due to factors like space and power requirements.

To address concerns about the cost and complexity of managing distributed edge infrastructure, it is critical for the next generation of edge computing workloads to leverage managed edge infrastructure offered by the major cloud providers, including AWS Outposts, Google Distributed Cloud, and Azure Private MEC.

Instead of having multiple on-premises servers store and process data, these edge infrastructure services can do the job. Organizations can save money by reducing the costs associated with managing distributed servers, while still benefiting from the low latency that edge computing brings.

Furthermore, services such as AWS Wavelength enable edge deployments to take advantage of the high bandwidth and low latency of 5G access networks.

Leveraging managed cloud edge infrastructure and access to high-bandwidth edge networks solves only part of the problem. A key remaining element of the edge technology stack is the database and data synchronization.

In an edge deployment that relies on a legacy file-based data transfer mechanism, edge applications risk operating on stale data. It is therefore important for organizations to build an edge strategy around a database suited to today’s distributed architectures.

Use edge-ready databases to reinforce edge strategies

Organizations can store and process data at multiple tiers of a distributed architecture: in central cloud data centers, at cloud edge locations, and on end-user devices. Service performance and availability improve with each successive tier.

Embedding the database with the application on the device provides the highest level of reliability and responsiveness, even when the network connection is unreliable or nonexistent.
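The local-first pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: writes land immediately in an embedded SQLite database and are queued in an "outbox" table for later synchronization, so the application keeps working while offline. The class and method names are hypothetical.

```python
import json
import sqlite3

class LocalFirstStore:
    """Illustrative local-first store: writes go to an embedded SQLite
    database immediately and are queued for sync once a connection to
    the edge or cloud tier becomes available."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS docs (id TEXT PRIMARY KEY, body TEXT)")
        self.db.execute("CREATE TABLE IF NOT EXISTS outbox (id TEXT)")

    def put(self, doc_id, doc):
        # The app always writes locally first: no network round trip,
        # so it stays responsive even with no connectivity at all.
        self.db.execute("INSERT OR REPLACE INTO docs VALUES (?, ?)",
                        (doc_id, json.dumps(doc)))
        self.db.execute("INSERT INTO outbox VALUES (?)", (doc_id,))
        self.db.commit()

    def pending_sync(self):
        # Documents waiting to be pushed upstream when a link is available.
        return [row[0] for row in self.db.execute("SELECT id FROM outbox")]

    def mark_synced(self, doc_id):
        self.db.execute("DELETE FROM outbox WHERE id = ?", (doc_id,))
        self.db.commit()

store = LocalFirstStore()
store.put("order-17", {"items": 3, "total": 42.50})
print(store.pending_sync())  # the order waits in the outbox until sync
```

A real edge-ready database handles conflict resolution and resumable replication on top of this basic write-locally, sync-later loop.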

However, there are cases where local data processing is not sufficient to obtain relevant insights, or where devices are not capable of storing and processing data locally. In such cases, applications and databases hosted on the edge network can process data from all downstream devices while taking advantage of the low-latency, high-bandwidth pipelines of the edge network.

Of course, hosting a database in central cloud data centers remains essential for long-term persistence and for aggregating data across edge locations. In this multi-tier architecture, processing large volumes of data at the edge minimizes the amount of data transferred back over the internet to a central database.

With the right distributed database, organizations can keep data consistent and synchronized across all tiers. The process does not copy or duplicate all data at every tier; instead, only the relevant data is transferred, in a way that is resilient to network disruptions.

Take retail as an example. Only store-relevant data, such as in-store promotions, is synced down to store locations, and promotions can be synchronized in real time. This ensures each store location works only with the data related to it.
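The retail scenario above amounts to a replication filter: each document carries tags naming which edge locations it belongs to, and the sync layer ships a document downstream only when the tags match. The sketch below is a hypothetical, channel-style filter, not any specific product's replication API.

```python
def sync_filter(doc, store_id):
    """Decide whether a document should replicate down to a given store.
    Illustrative channel-style filtering: a document flows to an edge
    location only if it is tagged for that store, or for all stores."""
    channels = doc.get("channels", [])
    return "all-stores" in channels or f"store-{store_id}" in channels

# A central catalog of promotions, each tagged with target channels.
catalog = [
    {"id": "promo-1", "channels": ["store-12"],   "desc": "10% off dairy"},
    {"id": "promo-2", "channels": ["store-34"],   "desc": "BOGO snacks"},
    {"id": "promo-3", "channels": ["all-stores"], "desc": "loyalty bonus"},
]

# Store 12 receives only its own promotions plus chain-wide ones.
for_store_12 = [d["id"] for d in catalog if sync_filter(d, 12)]
print(for_store_12)  # ['promo-1', 'promo-3']
```

Filtering at the sync layer, rather than in the application, is what keeps irrelevant and potentially sensitive data from ever landing on an edge location's disk.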

SEE: The Microsoft Power Platform: What you need to know about it (free PDF) (TechRepublic)

It is also important to understand that data governance can become a challenge in a distributed environment. At the edge, organizations often deal with transient data, and the need to enforce data access and retention policies at the granularity of individual edge locations adds considerable complexity.

That is why organizations planning their edge strategy should consider a data platform that can grant access to specific subsets of data only to authorized users, enforce data retention standards across tiers and devices, and ensure sensitive data never leaves its boundaries.

An example of this would be a cruise line that grants a ship’s crew access to voyage-related data. At the end of the voyage, that access is automatically revoked from the cruise line’s employees, with or without an internet connection, to ensure the data stays protected.
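For revocation to work offline, as in the cruise example, the access rule itself has to be evaluable on the local replica rather than checked against a central server. One way to sketch that is a time-bounded grant stored alongside the data; the schema and function below are purely hypothetical.

```python
from datetime import datetime, timezone

def can_read(grant, user_role, now=None):
    """Evaluate a data-access grant entirely on the local replica, so
    revocation takes effect with or without connectivity. Hypothetical
    schema: each grant names a role and an expiry timestamp."""
    now = now or datetime.now(timezone.utc)
    return grant["role"] == user_role and now < grant["expires"]

# Grant crew access to voyage data until the sailing ends.
grant = {
    "role": "crew",
    "expires": datetime(2024, 6, 15, tzinfo=timezone.utc),  # end of voyage
}

during_trip = datetime(2024, 6, 10, tzinfo=timezone.utc)
after_trip = datetime(2024, 6, 20, tzinfo=timezone.utc)
print(can_read(grant, "crew", during_trip))  # True
print(can_read(grant, "crew", after_trip))   # False: auto-revoked offline
```

Because the expiry travels with the data, no round trip to a central authority is needed for the revocation to take effect.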

Go ahead, get ahead

The right edge strategy enables organizations to take advantage of the growing amount of data emitted by edge devices. And with the number of edge applications growing, organizations looking to be at the forefront of innovation should extend their central cloud strategies with edge computing.

Priya Rajagopal, Director of Product Management at Couchbase

Priya Rajagopal is the director of product management at Couchbase (NASDAQ: BASE), a leading provider of modern databases for enterprise applications on which 30% of the Fortune 100 depend. With over 20 years of experience building software solutions, she is a co-inventor on 22 technology patents.


