Much has been said about integrating SAP ecosystems with the cloud. The synergy between SAP enterprise applications such as S/4HANA and hyperscaler cloud platforms such as AWS is hard to overstate: both cloud service providers (CSPs) and SAP help businesses generate billions of dollars in annual revenue from this complex yet transformative relationship.
Is it all ‘jumping on the bandwagon’ or a necessary practice?
Smart Data Management – A Major Driver to SAP Integration with AWS
A major incentive for firms to integrate their SAP landscapes with AWS lies in the efficient use of 'dark data'. Per the Gartner Glossary, dark data refers to the enormous amounts of information that business processes generate daily but never leverage, whether for intelligent decision-making analytics, data-powered business generation, or anything else.
While SAP HANA databases collect information for business transactional and analytical workloads via the SAP applications running at the frontend, that same data can be extracted to a custom-designed Amazon S3 data lake at the backend. Run the structured and unstructured source data in the S3 lake through native tools such as AWS Glue, Lambda, Kinesis, and Redshift to unlock smart insights with deep decision intelligence. Amazon S3 works as a centralized yet highly cost-effective, resilient data storage pool for the firm's entire data lifecycle across all operations and ecosystems.
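As a minimal sketch of this extract-and-stage flow: export a table from HANA with the `hdbsql` client and copy the extract into an S3 bucket where Glue crawlers can pick it up. The bucket name, schema, table, host, and credentials below are illustrative assumptions, not values from this article.

```shell
# Create the data lake bucket (one-time setup; bucket name is an assumption)
aws s3 mb s3://acme-sap-datalake

# Export transactional data from HANA to a local CSV using the hdbsql client
# (host, port, user, schema, and table are placeholders for your landscape)
hdbsql -n hana-host:30015 -u DATALAKE_USER -p '<password>' \
  "SELECT * FROM SALES.ORDERS" > /tmp/orders.csv

# Stage the extract in S3 under a date partition so downstream AWS Glue
# crawlers and Redshift Spectrum can discover and query it
aws s3 cp /tmp/orders.csv \
  s3://acme-sap-datalake/raw/sales/orders/load_date=2024-01-15/orders.csv
```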
However, before we plunge into the process, let’s address a common dilemma.
Are SAP HANA In-Memory Databases Not Enough? Why Pair Them with Amazon S3?
Over the last few years, SAP has significantly improved its data intelligence and analytics suite to match evolving customer requirements. SAP HANA in-memory databases are renowned for their scalability, multi-model data management, and embedded machine learning for advanced insight generation. However, modern enterprises are rapidly adopting hybrid architectures: multiple ERP and enterprise platforms stacked on one or more underlying hyperscaler clouds via multiple APIs.
Hence, it makes sense for firms to leverage centralized, smart storage like Amazon S3 for all their dataflows, apps, workloads, and assets. With SAP HANA data safely integrated into the S3 pool alongside data from all other sources, enterprises can seamlessly run end-to-end analytics (visualizations, decision insights, revenue hotspots) with minimal complications. Alternatively, enterprises might leverage Amazon S3 as an efficient storage layer for their structured and unstructured SAP dataflows while using SAP HANA as the intelligent analytical solution. Below are some key benefits of Amazon S3:
- Large amounts of data can be stored, archived, and protected without up-front infrastructure costs.
- No bandwidth constraints to worry about; you pay based on actual usage.
- 99.999999999% (eleven nines) data durability, removing data loss concerns.
- Through lifecycle management, you can set how long a dataset is retained before it is automatically transitioned or expired.
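The lifecycle point above maps directly to an S3 bucket lifecycle configuration. Here is a hedged sketch of a rule that moves raw extracts to Glacier after 90 days and deletes them after a year; the bucket name and prefix are assumptions for illustration.

```shell
# Write a lifecycle policy: transition "raw/" objects to Glacier at 90 days,
# expire them at 365 days (prefix and day counts are illustrative choices)
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-raw-extracts",
      "Filter": { "Prefix": "raw/" },
      "Status": "Enabled",
      "Transitions": [{ "Days": 90, "StorageClass": "GLACIER" }],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF

# Apply the policy to the bucket
aws s3api put-bucket-lifecycle-configuration \
  --bucket acme-sap-datalake \
  --lifecycle-configuration file://lifecycle.json
```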
With that established, let's walk through the safest and smartest integration procedure available!
Integrating SAP HANA Data with Amazon S3: The Process
SAP HANA integration with Amazon S3 can be achieved via multiple approaches. The best approach, however, requires minimal configuration and no coding, SDKs, or backend programming. As an overview: SAP HANA smart data integration, in batch or real time, from a variety of application sources can be completed with pre-built or custom adapters. One method is to install a Data Provisioning Agent that houses the adapters and connects the source system with the Data Provisioning server housed in the HANA system. One can then seamlessly create replication tasks or sync files with the shared storage space.
A key feature of Amazon S3 is its object-based storage architecture, applicable to any type of structured or unstructured data from applications, workloads, ecosystems, and assets. This makes it easy for enterprises to create a centralized repository connected to multiple IT architectures and run end-to-end analytics on it. Data objects are stored in buckets accessible via the Amazon S3 console, programmatically via the AWS SDKs, or through the S3 REST API. Objects can be as large as 5 TB, ample for SAP HANA workloads and databases backing full-stack frontend enterprise operations, and are easy to govern via AWS-managed storage services.
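To make the bucket/object model concrete, here is a brief sketch of storing and retrieving an object with the AWS CLI; the bucket and key names are hypothetical. Note that a single PUT caps out at 5 GB, so objects approaching the 5 TB limit must use multipart upload (which `aws s3 cp` handles automatically).

```shell
# Create a bucket to act as the object store (bucket name is an assumption)
aws s3api create-bucket --bucket acme-sap-archive

# Upload any file as an object under a key of your choosing
echo "hello from SAP land" > greeting.txt
aws s3api put-object --bucket acme-sap-archive \
  --key exports/greeting.txt --body greeting.txt

# Retrieve the same object by bucket + key
aws s3api get-object --bucket acme-sap-archive \
  --key exports/greeting.txt downloaded.txt
```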
Now back to business.
The AWS Storage Gateway Magic
The easiest approach to connecting SAP HANA databases with Amazon S3 without any coding complications is the AWS Storage Gateway solution, which works with the SAP HANA Smart Data Integration (SDI) File Adapter. Below is a step-by-step procedure to seamlessly perform the integration:
1. Configure AWS Storage Gateway via the AWS Console
2. Mount the S3 file system on the Data Provisioning Agent VM over AWS Storage Gateway
3. Configure a remote source with the SDI File Adapter against the S3 file system
4. Grant SDI virtual table access to the S3 files
STEP 1: Configure AWS Storage Gateway via the AWS Console:
The AWS Console is the intuitive portal for all AWS management activities across the length and breadth of the enterprise landscape. Configure the AWS Storage Gateway solution on the Console by creating and activating a Storage Gateway of the File Gateway type with an S3 file share. Perform the following steps:
- Assign the S3 Storage Gateway managed service to an EC2 instance
- Allow-list the IP addresses permitted to access the S3 file objects
- Name the storage gateway and activate it.
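The same console steps can be sketched as CLI calls, shown here only to make the moving parts explicit; the activation key, ARNs, region, and client CIDR are all placeholders you would replace with your own values.

```shell
# Activate the gateway running on the EC2 instance as a File Gateway backed by S3
# (the activation key is obtained from the gateway VM itself)
aws storagegateway activate-gateway \
  --activation-key <key-from-gateway-vm> \
  --gateway-name sap-s3-gateway \
  --gateway-type FILE_S3 \
  --gateway-region us-east-1 \
  --gateway-timezone GMT

# Create the NFS file share backed by the S3 bucket, with the client list
# restricted to the DP Agent VM's IP (ARNs and CIDR are placeholders)
aws storagegateway create-nfs-file-share \
  --client-token "$(uuidgen)" \
  --gateway-arn arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE \
  --role arn:aws:iam::111122223333:role/StorageGatewayS3Access \
  --location-arn arn:aws:s3:::acme-sap-datalake \
  --client-list 10.0.1.25/32
```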
Source: SAP Blogs
STEP 2: Mount S3 File System on Data Provisioning Agent VM over AWS Storage Gateway
The file share created earlier must now be mounted as a directory under the Data Provisioning (DP) Agent installation root. Below are the procedural details:
1. Locate the DP Agent installation root on your setup.
2. As the root user:
- Create a mount point for the directory on the VM
- Mount the AWS Storage Gateway file share on that mount point
- If the S3 bucket already contains files, check that these are visible
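The steps above boil down to a standard NFS mount against the gateway. A minimal sketch, assuming an illustrative gateway IP, bucket name, and DP Agent path:

```shell
# As root on the DP Agent VM. Gateway IP, bucket, and paths are assumptions.
mkdir -p /usr/sap/dataprovagent/s3share   # mount point under the DP Agent root

# Mount the gateway's NFS file share (export name matches the S3 bucket)
mount -t nfs -o nolock,hard 10.0.1.10:/acme-sap-datalake /usr/sap/dataprovagent/s3share

# Any objects already in the bucket should now be visible as files
ls /usr/sap/dataprovagent/s3share
```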
Sample files stored in S3 (Source: SAP Blogs)
STEP 3: Configure and Create a Remote Source using the SDI File Adapter
Now configure a remote source with the HANA Smart Data Integration File Adapter against the S3 file system:
- Uses the File Adapter via the AWS Storage Gateway
- Virtualizes data access; data can be federated or copied into HANA
- Reads from and writes to S3 data
- No code required
Once completed, move over to HANA Studio and drill down into DB provisioning to create a new remote source connection. The directory in the file format configuration indicates where the configuration files are located.
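For readers who prefer SQL over HANA Studio, the remote source can also be created with a `CREATE REMOTE SOURCE` statement over the FileAdapter. The agent name, directories, and access token below are illustrative, and the exact configuration parameters vary by DP Agent version, so treat this as a sketch rather than a drop-in script.

```shell
hdbsql -n hana-host:30015 -u SYSTEM -p '<password>' <<'SQL'
-- Remote source over the SDI FileAdapter. "DPAGENT", the directories, and
-- the access token are placeholders for your own DP Agent setup; the
-- rootdir points at the mounted Storage Gateway share from Step 2.
CREATE REMOTE SOURCE "S3_FILES" ADAPTER "FileAdapter"
  AT LOCATION AGENT "DPAGENT"
  CONFIGURATION '<?xml version="1.0" encoding="UTF-8"?>
    <ConnectionProperties name="configurations">
      <PropertyEntry name="rootdir">/usr/sap/dataprovagent/s3share</PropertyEntry>
      <PropertyEntry name="fileformatdir">/usr/sap/dataprovagent/fileformats</PropertyEntry>
    </ConnectionProperties>'
  WITH CREDENTIAL TYPE 'PASSWORD'
  USING '<CredentialEntry name="AccessTokenEntry">
      <password>mytoken</password>
    </CredentialEntry>';
SQL
```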
STEP 4: Grant SDI Virtual Table Access to S3 Files
Below is a simple scenario to grant access to Amazon S3 files via the greetings virtual table in HANA:
- Read the S3 Files in HANA (via virtual table)
- Write to S3 files from SQL (via virtual table)
- Validate the data in S3 (via the AWS Console)
One can also validate the files that have been created using the command line and finally verify the results from the greetings virtual table in HANA:
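The read/write/validate scenario above can be sketched as follows, assuming the illustrative remote source, schema, and bucket names used earlier; the remote object path in the `AT` clause follows the SDI virtual table convention.

```shell
hdbsql -n hana-host:30015 -u SYSTEM -p '<password>' <<'SQL'
-- Expose the "greetings" file on the S3-backed share as a virtual table
-- (schema name and remote source name are placeholders)
CREATE VIRTUAL TABLE "MYSCHEMA"."greetings"
  AT "S3_FILES"."<NULL>"."<NULL>"."greetings";

-- Read the S3 files in HANA through the virtual table
SELECT * FROM "MYSCHEMA"."greetings";

-- Write back to S3 from SQL through the same virtual table
INSERT INTO "MYSCHEMA"."greetings" VALUES ('hello from HANA');
SQL

# Validate from the command line that the write landed in the bucket
aws s3 ls s3://acme-sap-datalake/ --recursive | grep greetings
```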
Source: SAP Blogs
Envisioning Beyond: The Bigger Picture
Et voilà! SAP HANA is integrated with Amazon S3. Now embrace the power of the cloud to enjoy highly scalable, highly available data management combined with cutting-edge analytics and data intelligence. Data modernization is surely at the heart of crafting a future-ready enterprise.
That said, the promise of SAP integration with the AWS cloud extends well beyond this specific application. Today's hyper-volatile markets demand an agile, modernized enterprise that thrives on constant innovation and adaptation. Unleashing the true power of the cloud by integrating all business processes with it grants firms the necessary respite and uninterrupted continuity. Once that confidence is achieved, the enterprise has an open canvas to build on without lurking disruption fears. Transformation no longer seems a far-flung concept.
To better integrations and a greater future...
Why transform SAP on AWS with Cloud4C: Your Fully Automated Managed Services Partner
Automation, intelligence, and data efficiency highlight Cloud4C's core DNA. As the world's leading application-focused cloud managed services provider, responsible for 4000+ transformations across 26 nations and with expertise in transformations involving S/4HANA, SAP HANA Enterprise Cloud (HEC), and RISE with SAP, we continually invest in cutting-edge technologies to push the boundaries of what's possible with SAP transformation on the AWS cloud. Our unique automated managed services, powered by AIOps, intelligent RPA, and our novel Self Healing Operations Platform, contribute to highly advanced SAP outcomes on the AWS Cloud Platform. These services span end-to-end SAP landscape management, DevOps migration and implementation, application management, and business operations administration. Explore our advanced SAP on AWS offerings here, and move and modernize SAP on the cloud with our unique SAP Switch2Cloud offer: end-to-end automated SAP managed services, SAP S/4HANA conversion, and SAP application management under a single, highly cost-efficient SLA.
Capabilities stand at the heart of our achievements: 250+ SAP success stories, 2000+ TB of managed HANA databases, 6000+ supported SAP systems, and more. We are also an AWS Advanced Consulting Partner and SAP Certified in Application Management Services, SAP HANA Operations Services, Hosting Services, and cloud services. We promise and deliver industry-best ROI, guaranteeing the highest uptime, scalability, agility, and security for SAP workloads. Brace for an intelligent SAP transformation across operations, fully managed and automated on the AWS cloud.