Channel: SAP – Cloud Data Architect

Your ERP environment, your choice: RISE with SAP presents another ERP modernization path for AWS customers


Feed: AWS for SAP.
Author: Axel Streichardt.

Introduction

Recently, SAP announced RISE with SAP, an offering that enables you to upgrade to SAP S/4HANA more easily through a single-tenant, SAP-managed ERP implementation. RISE with SAP further simplifies SAP S/4HANA adoption by bundling the technology needed to execute your cloud journey into a single offer with one contract. This offering delivers a standard production SLA of 99.7% (99.5% non-PROD) on the core RISE with SAP offering (and a 99.9% SLA with a price uplift), as well as a 20% reduction¹ in total cost of ownership (TCO) over five years compared to SAP S/4HANA on-premises deployments. AWS will be one of the cloud service provider (CSP) deployment options for RISE with SAP.

Our team will lean on its experience supporting SAP customers in order to help you reliably implement SAP S/4HANA through the RISE with SAP program. Our experience includes:

  • 13+ years partnering with SAP and over 5,000 active SAP customers running on AWS, with more than half deploying SAP HANA-based solutions.
  • Running 10 SAP Business Technology Platform (BTP, SAP’s Platform as a Service) regions globally.
  • Offering best practices for migrating and managing SAP workloads on the cloud through our SAP-specific professional services practice – the only one of its kind among CSPs.
  • Analyst recognition, having been named as a “leader” for ten consecutive years in Gartner’s Cloud Infrastructure and Platform Services Magic Quadrant², as well as three consecutive years in the ISG Provider Lens Quadrant for SAP HANA Infrastructure Services³.
  • Powering SAP’s future innovations and current product offerings, such as HEC, NS2, as well as some products exclusively on AWS, including SAP Concur, SAP Analytics Cloud, and SAP Data Warehouse Cloud.
  • Providing end-to-end migration and transformation solutions for our customers with over 150 SAP technology, ISV, and GSI partners, as well as AWS’s ProServe team.

In the wake of this announcement, many SAP customers are naturally reflecting on their own cloud journeys. RISE with SAP might represent the next step for customers who are ready for an S/4HANA transformation and looking for a fully managed offering that maximizes simplicity over customization. RISE with SAP users give up the ability to leverage AWS services natively, but can still extend and innovate through specific partner add-ons and the SAP Business Technology Platform, which AWS supports in 10 Regions, significantly more than any other cloud service provider. Some customers may find that these capabilities meet their needs for innovation, while others may want maximum flexibility to extend their systems to AWS services natively.

When we talk to our SAP customers, we find that each one has a different set of business priorities and challenges associated with their SAP systems. Some are drawn to offerings like RISE to optimize for simplicity, while others want to retain the maximum flexibility to extend through AWS services that comes with using a traditional license on AWS.

A common theme we hear from customers is that they plan to upgrade to S/4HANA at some point down the line, but don’t want to wait until they are on S/4HANA to get the cost, agility, or innovation benefits for their SAP workloads.

SAP modernization is not a “one size fits all” endeavor, which is why we opt to give our customers choice and flexibility on their SAP cloud journeys. I want to take this opportunity to detail a few different paths your business can take as it considers SAP migration. As you’ll see, these paths complement one another, and you have the option to shift to a different SAP strategy as your needs change. Let’s start with a lift-and-shift of SAP ECC.

Lift-and-shift

Lifting-and-shifting your existing ERP is often the best first step towards modernizing your SAP systems. That’s because lift-and-shift migrations allow you to move your SAP landscape to the cloud without altering the application or database layers, enabling an accelerated path to the cloud. Once you’re running on the cloud, you can start consolidating your SAP landscape and right-sizing resources to reduce costs, eliminate complexity, and add innovative solutions. From there, you have the option to pursue any number of activities, whether that entails maximizing the value of your existing SAP investments or upgrading to SAP S/4HANA.

One customer who has taken this route to ERP modernization is NBCUniversal. After lifting-and-shifting their SAP ECC systems to the cloud, NBCUniversal was able to remove customizations and redundancies from their SAP landscape, and they project a 23% reduction in TCO over a 10-year period, a number they expect will improve as they continue to optimize their architecture and SAP Basis operations. In the long term, these efforts will enable them to innovate on a clean core, without being inhibited by excessive modifications.

Innovating with SAP beyond infrastructure

Once you have migrated your SAP systems to the cloud and optimized infrastructure, you can begin modernizing core business systems through the adoption of advanced cloud services. On-premises, integrating emerging technologies into your SAP landscape can bring about significant cost, complexity, and risk. Pursuing these activities on AWS is more straightforward because we offer the broadest and deepest collection of cloud services among cloud providers with over 200 offerings. Furthermore, we continue to update these services, with over 90% of new features and services being launched in direct response to customer feedback. Commonly, SAP customers will look to services that enable the use of the Internet of Things (IoT), machine learning, and data lakes to drive greater operational efficiency, reshape business processes, improve customer interactions, and more.

Invista took this route to ERP modernization. Like NBCUniversal, Invista lifted-and-shifted their SAP systems to the cloud. From there, they progressed their cloud journey by innovating around their core business systems. Specifically, Invista migrated disparate data sources to AWS via AWS Snowball, bridging silos and enabling the use of advanced analytics to optimize inventory levels. They also leveraged Amazon Rekognition to improve quality inspections, dramatically reducing defect rates.

Refactor to SAP HANA

Another option is to refactor from an existing database (e.g., Oracle, IBM Db2, Microsoft SQL Server) to SAP HANA. In doing so, you unlock the in-memory data capabilities of SAP HANA, without committing to a complete SAP S/4HANA transformation. However, should you determine that SAP S/4HANA is right for your business in the future, refactoring to SAP HANA now eliminates the need to do so downstream.

One customer that opted to refactor to SAP HANA is Newmont. As part of their acquisition of Goldcorp, Newmont unified the two companies’ SAP ECC footprints. In doing so, they moved from an Oracle database to SAP HANA, taking an incremental step towards SAP S/4HANA adoption that will simplify a future upgrade to the next-generation ERP.

SAP S/4HANA transformation

If your business is ready to upgrade to SAP S/4HANA, you can rely on the experience and SAP-certified infrastructure of AWS. Since 2008, AWS has been collaborating with SAP to develop infrastructure purpose-built to meet the needs of even the most demanding SAP workloads. As a result of this work, AWS launched Amazon EC2 X1 instances in 2016 – the first cloud-native instances certified to support SAP S/4HANA – and High Memory instances in 2019. Leveraging Amazon EC2 High Memory instances, you can run SAP S/4HANA (OLTP) with up to 48TB of memory.

RISE with SAP on AWS gives you the opportunity to realize these performance capabilities in a streamlined manner, offering:

  • A full scope SAP S/4HANA implementation, including line-of-business processes supporting 25 industries.
  • Code enhancements and modifications to help you preserve customizations you’ve made to your SAP ECC landscape.
  • Expert configurations to help you take full advantage of AWS infrastructure and services.
  • The ability to extend and innovate through 10 SAP Business Technology Platform Regions.

Compared to other SAP S/4HANA deployment options, RISE with SAP provides simpler management, at the expense of the ability to use AWS services natively. This program offers support for brownfield, bluefield, and greenfield SAP S/4HANA deployments. As a result, you can more quickly convert your existing ERP implementation, or start fresh with a resilient and scalable cloud-based SAP S/4HANA architecture. At the same time, RISE with SAP allows you to transition from CapEx to OpEx via a single subscription. Overall, this program complements our existing options for running ECC and S/4HANA on AWS using traditional licensing models, making SAP S/4HANA available to a broader portion of the existing SAP customer base.

Determining the best path for your ERP modernization

RISE with SAP is one of many options that SAP customers have to modernize their ERP environments. Regardless of whether you’re ready for the full SAP S/4HANA transformation, it’s critical that you start taking incremental steps towards modernization. Doing so will help you wipe out antiquated processes, reduce costs, and start driving innovation around your core business systems. No matter which SAP strategy is right for your business, AWS provides tools, resources, and partner support to help you execute your cloud strategy.

If you’d like to learn more about RISE with SAP on AWS, visit our webpage today.

¹ https://www.sap.com/products/rise.html
² https://pages.awscloud.com/GLOBAL-multi-DL-gartner-mq-cips-2020-learn.html
³ https://pages.awscloud.com/GLOBAL-partner-DL-ent-sap-nov-2020-reg-event.html


SAP HANA database redirected restore with AWS Backint Agent


Feed: AWS for SAP.
Author: Sreenath Middhi.

This blog aims to reduce operational overhead by performing a redirected restore of an SAP HANA database from an Amazon Simple Storage Service (Amazon S3) bucket. We will walk you through the process of restoring a production SAP HANA database from an Amazon S3 bucket to a target SAP HANA database running on a different Amazon Elastic Compute Cloud (Amazon EC2) instance under the same or a different AWS account. Since you can back up your SAP HANA database directly to an S3 bucket using AWS Backint Agent for SAP HANA, you can use a redirected restore to refresh your non-production SAP HANA databases.

Overview:

There are several methods available to back up your SAP HANA database on AWS. One of the common methods is a two-step approach where you back up the SAP HANA database to a staging disk and then copy it over to an S3 bucket.

Figure: SAP HANA Database Backups using local EBS Volumes

With this approach, the backup of the HANA database must first be copied to a staging disk (an Amazon Elastic Block Store (EBS) volume) attached to the SAP HANA server before a restore process can be initiated.

With AWS Backint Agent, you can back up your SAP HANA database and logs directly to an Amazon S3 bucket and initiate the restore process from the S3 bucket. This eliminates the need for a backup staging disk and significantly reduces the time it takes to perform a database backup and recovery.
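For reference, a backup written directly to the S3 bucket through the Backint interface can be triggered with a single SQL statement. The sketch below is a hedged example; the instance number (05), tenant name (SUP), and backup prefix are placeholders borrowed from the restore examples later in this post:

# Trigger a full data backup of tenant SUP through the Backint interface (writes directly to S3)
hdbsql -n localhost -i 05 -u SYSTEM -d SYSTEMDB "BACKUP DATA FOR SUP USING BACKINT ('FULL_DATA_BACKUP')"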

This blog talks about using AWS Backint Agent for SAP HANA to perform a redirected SAP HANA database restore between two different accounts (the process for a redirected restore within the same account, the same Region, or across Regions is very similar). It is assumed that you are using AWS Backint Agent to back up your SAP HANA database to an S3 bucket.

Current SAP HANA database backup Policy:

Figure: SAP HANA Database backup

The blog assumes that you are currently using the following backup process:

1. You are using the AWS Backint Agent interface to back up the SAP HANA database directly to the S3 bucket
2. You are using AWS Backint Agent to back up the SAP HANA logs directly to the S3 bucket
3. The S3 bucket is encrypted using a customer managed key (CMK)

Objective:

The objective is to restore the SAP HANA database on the target instance from the source database backup that resides in the S3 bucket, using an SAP HANA redirected restore. The source and target SAP HANA databases can be owned by the same or different AWS account IDs. The account owner plays an important role when granting access to the S3 bucket through the bucket policy.

Figure: Redirected Restore

Advantages of redirected restore:

1. Periodic validation of the production database backup (an important question from auditors)
2. Reduced time to perform a system copy of SAP HANA-based SAP systems, while achieving the same point-in-time recovery for dependent systems like ECC and BW
3. Reduced backup footprint (no staging area!)
4. Regular SAP HANA consistency checks can be performed on the restored database, which is a copy of production

Source and target account prerequisites:

1. From the source account, grant the target account access to the source CMK (KMS > Customer managed keys > YourKey > Other AWS accounts). A sketch of the resulting key policy statement is shown below.
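Granting the target account access through the console is roughly equivalent to adding a statement like the following to the source key's policy (a hedged sketch; the account IDs follow the examples in this post):

{
    "Sid": "AllowUseOfTheKeyByTargetAccount",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::987654321098:root"
    },
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
    ],
    "Resource": "*"
}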

2. Log in to the target account and create a new policy to decrypt the backups stored in the source S3 bucket. Policy: comaws_cross_cmk_access_policy (sample name)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUseOfCMKInAccountSourceaccount",
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "arn:aws:kms:us-east-1: 012345678901:key/<cmk key>"
        }
    ]
}

3. In the target account, create a new EC2 instance role or use an existing one, for example:
Role: arn:aws:iam::987654321098:role/comaws_ec2_instance_role
Note: The name of the role can be of your choice; it must match the role referenced in the source bucket policy in step 5.
Attach the required policies to this role, including the comaws_cross_cmk_access_policy policy created in step 2 (see the sketch below).
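As a hedged sketch (the policy and role names come from this post, and the policy document is assumed to be saved locally as comaws_cross_cmk_access_policy.json), the policy from step 2 can be created and attached to the role with the AWS CLI:

# Create the KMS decryption policy from step 2 in the target account
aws iam create-policy --policy-name comaws_cross_cmk_access_policy --policy-document file://comaws_cross_cmk_access_policy.json

# Attach it to the EC2 instance role used by the target SAP HANA server
aws iam attach-role-policy --role-name comaws_ec2_instance_role --policy-arn arn:aws:iam::987654321098:policy/comaws_cross_cmk_access_policy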

4. In the target account, attach the IAM role to the Amazon EC2 instance where your target SAP HANA database is running (see the CLI sketch below).
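If you prefer the CLI over the console for this step, a hedged sketch looks like this (the instance profile name and instance ID are placeholders):

# Wrap the role in an instance profile and associate it with the target EC2 instance
aws iam create-instance-profile --instance-profile-name comaws_ec2_instance_profile
aws iam add-role-to-instance-profile --instance-profile-name comaws_ec2_instance_profile --role-name comaws_ec2_instance_role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=comaws_ec2_instance_profile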

5. Create a source bucket policy granting access to the target account EC2 instance role that you created above.

Source S3 bucket: Attach the following policy to the source bucket (Amazon S3 > Your Source Bucket > Permissions > Bucket Policy).

{
    "Version": "2012-10-17",
    "Id": "Policy1606090894637",
    "Statement": [
        {
            "Sid": "Stmt1606090751178",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                     "arn:aws:iam::987654321098:role/comaws_ec2_instance_role"
                ]
            },
            "Action": [
                "s3:GetBucketAcl",
                "s3:GetBucketLocation",
                "s3:GetBucketPolicy",
                "s3:GetBucketPolicyStatus",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3::: SourceBucketName"
        },
        {
            "Sid": "Stmt1606090890057",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                     "arn:aws:iam::987654321098:role/comaws_ec2_instance_role"
                ]
            },
            "Action": "s3:GetObject",
            "Resource": " arn:aws:s3::: sourceBucketName/*"
        }
    ]
}

You are now ready to restore the SAP HANA database backup from an S3 bucket owned by account 012345678901 to account 987654321098 using the SAP HANA database redirected restore method. The following instructions can be used to perform database restores in these circumstances:

  • Systems running under the same AWS account, in the same Region.
  • Systems running under different accounts, in the same Region, using a VPC endpoint for S3 (see the sketch after this list).
  • Systems running under different accounts, in different Regions, using a NAT gateway.

** Data transfer costs may apply.
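For the second option above, the gateway VPC endpoint for S3 can be created with the AWS CLI, as in the hedged sketch below (the VPC ID and route table ID are placeholders):

# Create a gateway VPC endpoint for S3 in the target VPC so the restore traffic stays on the AWS network
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0123456789abcdef0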

Performing the SAP HANA database redirected restore:

  1. Log in to the target SAP HANA database server and change the current directory to /hana/shared/aws-backint-agent.

cd /hana/shared/aws-backint-agent

2. Create a backup of the existing aws-backint-agent-config.yaml file

cp aws-backint-agent-config.yaml aws-backint-agent-config.yaml.backup

3. Edit aws-backint-agent-config.yaml and replace its contents with the values from the source system’s aws-backint-agent-config.yaml. Sample aws-backint-agent-config.yaml:

S3BucketAwsRegion: "us-east-1"
S3BucketName: "SourceBucketName"
S3BucketFolder: "backup/database/hana"
S3BucketOwnerAccountID: "012345678901"
LogFile: "/hana/shared/aws-backint-agent/aws-backint-agent.log"
S3SseKmsArn: "arn:aws:kms:us-east-1:012345678901:key/CMK"
S3SseEnabled: "true"

This step is required to allow AWS Backint Agent to read the backups stored in the source S3 bucket.
4. On the target system, check access to the source S3 bucket using the AWS Command Line Interface (CLI):

aws s3 ls s3://SourceBucketName/backup/database/hana/PH5/usr/sap/PH5/SYS/global/hdb/backint/DB_SUP/2020_11_22_05_00_00_full_databackup
Sample Output:
PRE 2020_11_22-05_00_00_full_databackup_0_1/
PRE 2020_11_22-05_00_00_full_databackup_2_1/
PRE 2020_11_22-05_00_00_full_databackup_2_10/
PRE 2020_11_22-05_00_00_full_databackup_2_12/
PRE 2020_11_22-05_00_00_full_databackup_2_2/
PRE 2020_11_22-05_00_00_full_databackup_2_3/
PRE 2020_11_22-05_00_00_full_databackup_2_4/

Now we can start the database restore operations on the target system.

Restoring a Scale-up SAP HANA database using redirected restore:

Log in to the target database using hdbsql and create a tenant database:

hdbsql -n localhost -i 05 -u system -d systemdb CREATE DATABASE SUD SYSTEM USER PASSWORD Manager1;

-- To recover using a particular source backup:

RECOVER DATA FOR SUD USING SOURCE 'SUP@PH2' USING BACKINT ('/usr/sap/PH2/SYS/global/hdb/backint/DB_SUP/2020_11_21_04_00_00_full') CLEAR LOG; 
0 rows affected (overall time 2443.266878 sec; server time 2443.263834 sec) 
*Source tenant DB Name=SUP, Source SAP HANA system name=PH2

-- To perform a point-in-time recovery:

RECOVER DATABASE FOR SUD UNTIL TIMESTAMP '2020-11-29 20:00:00' CLEAR LOG USING SOURCE 'SUP@PH2' USING CATALOG BACKINT USING LOG PATH ('/usr/sap/PH2/SYS/global/hdb/backint/DB_SUP/') USING DATA PATH ('/usr/sap/PH2/SYS/global/hdb/backint/DB_SUP/');

0 rows affected (overall time 402.686351 sec; server time 402.683220 sec) 
**Time shown above is in UTC

Monitoring the progress of the restore operation:

1. You can monitor the restore progress by running a tail command on the backup.log file under the target DB_SID directory on the target server

tail -f /usr/sap/DH2/HDB10/sapdh2dbsm/trace/DB_SUD/backup.log

2. You can also check the aws-backint-agent.log file on the target VM.

tail -f /hana/shared/aws-backint-agent/aws-backint-agent.log

Points to remember:

1. Since we only allowed the GetObject operation on the source S3 bucket, make sure to either disable backups of the target database(s) during the restore, change the log backup destination to a local disk for the duration of the restore, or increase the log backup interval.

2. The redirected restore can be used with S3 VPC endpoints in the same Region.

3. Once the recovery completes, remember to switch the aws-backint-agent-config.yaml file of your target system back to its original contents (see the sketch below). This is required to ensure that your target system’s backups are sent to the correct S3 bucket.
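For example, restoring the copy created in step 2 of the restore procedure switches the agent back to the target system's original configuration:

# Restore the original AWS Backint Agent configuration saved earlier
cd /hana/shared/aws-backint-agent
cp aws-backint-agent-config.yaml.backup aws-backint-agent-config.yaml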

Conclusion:

We hope that using AWS Backint Agent to perform an SAP HANA redirected restore helps you reduce SAP HANA database refresh times with minimal changes to your current infrastructure. You can use the same process to perform point-in-time recovery of multiple SAP systems, such as ECC and BW. This will help you keep the delta pointers between your ECC and BW systems intact post-refresh.

For more information, please refer to AWS Backint Agent for SAP HANA database. To watch an SAP on AWS expert back up an SAP HANA database on AWS with AWS Backint Agent, please refer to this demo.

Improving SAP Fiori Performance with Amazon CloudFront and AWS Global Accelerator


Feed: AWS for SAP.
Author: Ferry Mulyadi.

Introduction

SAP customers with global operations are generally interested in enabling their entire workforce to access SAP applications at any time, from any device, and from anywhere. Access at any time is easy to achieve by making the services available around the clock. Access from any device is a key feature of SAP Fiori, with its SAPUI5 capabilities. However, access from anywhere is where the complexity lies: customers need to consider organizational security policies, performance requirements, existing network connectivity (including its bandwidth and latency characteristics) from branch locations, user mobility, and other factors.

In this blog, I will consider a scenario where a customer is running their SAP workloads on AWS and is interested in providing direct access to SAP applications without significant investments in dedicated network connectivity.

I will discuss options to address performance challenges for globally accessible SAP Fiori workloads on AWS, and highlight the potential improvements that can be achieved.

Background

SAP Fiori is the user interface component of SAP applications such as S/4HANA, BW/4HANA, etc. It is based on SAP’s own HTML5 implementation, SAPUI5, and on OData API calls for its dynamic content. SAP Fiori relies on the HTTP protocol and modern web browsers as the client. Due to the nature of HTML5, it runs on nearly any user device that supports a modern browser, such as mobile phones, tablets, and laptops. With this capability, many SAP customers are deploying SAP Fiori as internet facing, with the flexibility to support any connectivity for end users globally, including connection types such as 3G, 4G, 5G, or Wi-Fi.

There are a number of performance challenges when deploying SAP Fiori for end users over the internet. The challenges can be due to high latency, limited bandwidth, and unstable connectivity between the user and SAP Fiori, as well as the size of the components, such as JavaScript, stylesheets, images, and data, that need to be transferred.

To address the performance of the standard SAPUI5 components, SAP provides guidance in SAP Note 2526542 (an SAP S-user ID is required to access it), “How to load SAPUI5 files from CDN for performance improvements in Fiori and Standalone UI5 apps”. The applicability of this SAP note is well described by Mario De Felipe in his blog on leveraging CloudFront as the Content Delivery Network (CDN). However, this SAP note and blog do not address the performance of custom UI5 apps and Fiori OData API calls. The potential solutions discussed below address these gaps.

Potential Solutions

To address these challenges on AWS, there are a couple of options that can be considered:

 1. Amazon CloudFront.

Amazon CloudFront is a content delivery network that delivers both static content (e.g., images, files, videos) and dynamic content (e.g., dynamic site delivery, API calls, and web sockets) with high availability, performance, and security to viewers. It operates at layer 7 of the Open Systems Interconnection (OSI) model and supports the HTTP and HTTPS protocols. It delivers performance improvements by performing a quick TLS handshake with the nearest edge location, caching objects at edge and regional locations, and utilizing performance optimizations such as HTTP/2, persistent connections, and connection reuse over the AWS network backbone.

 2. AWS Global Accelerator.

AWS Global Accelerator is a global traffic manager that improves the performance and availability of your web applications serving internet users. It operates at layer 4 of the OSI model and accelerates a wide range of protocols over TCP or UDP (e.g., HTTP, RTP, WebRTC) by proxying packets at the edge and sending them over the AWS network backbone to your applications in AWS Regions.

Please note that Global Accelerator and CloudFront are two separate services that use the AWS global network and its edge locations around the world. 

Solutions Comparison and Use Case

 

Here is how the two services compare across the dimensions that matter for SAP Fiori.

How it works

  • CloudFront improves performance for both static content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery).
  • Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions.

Content caching

  • CloudFront: Yes
  • Global Accelerator: No

Routing

  • CloudFront: DNS-based
  • Global Accelerator: Anycast

Failover

  • CloudFront: Native origin failover based on HTTP error codes or timeouts, or Route 53 DNS.
  • Global Accelerator: Built-in origin failover in less than 30 seconds, with no dependency on DNS TTLs.

Use case

  • CloudFront: Websites with cacheable content over HTTP/S (for example, the SAP Fiori Launchpad), Amazon S3 buckets, Amazon MediaStore, or other servers from which CloudFront gets your files.
  • Global Accelerator: A good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover (for example, SAP OData API calls).

Benefit and impact to SAP Fiori

  • CloudFront reduces the load on the back-end Application Load Balancer and SAP Fiori servers, because frequently accessed cacheable content is served from edge locations. The EC2 resources running SAP Fiori then primarily serve only the dynamic content (OData API calls), which allows a reduced hardware sizing (CPU and memory). Bandwidth to the back-end SAP Fiori servers is also reduced, and the network route is optimized.
  • Global Accelerator provides the shortest path for end users to reach SAP Fiori, which in turn provides better performance for end users. However, it does not reduce the EC2 sizing required to serve the SAP Fiori content.

Implementation effort

  • CloudFront: Configuring CloudFront for the SAP Fiori Launchpad requires deeper technical knowledge of how SAP Fiori works, including which resources are cacheable and which are not, because SAP Fiori uses quite a number of HTTP headers and cookies. If the wrong resource (for example, HTML) is cached, the request will not be forwarded to the back-end SAP Fiori system, which introduces problems with session handling (for example, failed logons).
  • Global Accelerator: Quite straightforward to implement, requiring only an endpoint definition pointing to the Application Load Balancer. It does not require deeper knowledge of which SAP Fiori resources are cacheable and which are not.

Maintenance effort

  • CloudFront: You need to remove cached content in CloudFront whenever there are changes to the SAP Fiori components, including SAPUI5 patching, using a CloudFront feature known as invalidating files (see the example after this comparison). In general, it takes around 60 seconds to invalidate these caches through the AWS Console or API.
  • Global Accelerator: Because Global Accelerator does not cache any content at edge locations, you can easily roll out changes to the back-end SAP Fiori without having to manage cached content at the edge.

Security

  • CloudFront comes with AWS Shield for DDoS protection and can also be used with an AWS Shield Advanced subscription for additional detection and mitigation of large and sophisticated DDoS attacks. You can use CloudFront with AWS Web Application Firewall (WAF), which adds malicious-request blocking capabilities such as SQL injection prevention, IP- and geo-based filtering, and layer 7 request flood blocking. You can also improve SAP Fiori security, as CloudFront can negotiate stronger cipher suites automatically, including TLSv1.3 (at the time of writing, SAP only supports TLSv1.2).
  • Global Accelerator comes with AWS Shield for DDoS protection and can also be used with an AWS Shield Advanced subscription. IP-based filtering is also available with security groups. With Global Accelerator there is no SSL termination, so you can see the client’s IP address in your logging and monitoring.
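As referenced under maintenance effort, an invalidation can be issued with the AWS CLI after UI5 changes are transported. This is a hedged sketch; the distribution ID is a placeholder, and you can restrict the paths to the changed UI5 resources instead of invalidating everything:

# Invalidate cached SAP Fiori/SAPUI5 objects in the CloudFront distribution
aws cloudfront create-invalidation --distribution-id E1A2B3C4D5E6F7 --paths "/*"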

If we compare what objects are accelerated using these two preferred options, versus only implementing SAP Note 2526542, we will observe the following:

Type of traffic accelerated, by option:

UI5 library caching only (SAP Note 2526542)

  • SAPUI5 (static content): accelerated via cache at edge and regional locations.
  • Custom UI5 (static content): not accelerated.
  • Other static content: not accelerated.
  • OData API calls (dynamic content): not accelerated.

AWS Global Accelerator

  • SAPUI5 (static content): accelerated (non-cached) via route optimization.
  • Custom UI5 (static content): accelerated (non-cached) via route optimization.
  • Other static content: accelerated (non-cached) via route optimization.
  • OData API calls (dynamic content): accelerated (non-cached) via route optimization.

Amazon CloudFront

  • SAPUI5 (static content): accelerated via route optimization and via cache at edge and regional locations.
  • Custom UI5 (static content): accelerated via route optimization and via cache at edge and regional locations.
  • Other static content: accelerated via route optimization and via cache at edge and regional locations.
  • OData API calls (dynamic content): accelerated (non-cached) via route optimization.

Based on these points, we recommend that:

  • You implement CloudFront as the solution to address SAP Fiori performance challenges when deployed globally or regionally.
  • If you have a requirement to improve the performance of OData API calls for external system integration scenario with SAP S/4HANA, you may want to implement Global Accelerator. This typically requires a set of static IP addresses for whitelisting, no SSL termination and no edge processing, which fits the use case of Global Accelerator.

Solution Overview

Figure: Demo Architecture Diagram of CloudFront and Global Accelerator for SAP Fiori

Based on the diagram above, we will implement a simple deployment of SAP S/4HANA 2020 with embedded Fiori in the AWS us-east-1 (Northern Virginia) Region that can be accessed over the internet, while the users are located in Singapore. The configuration for CloudFront and Global Accelerator will be described in detail in the second blog post in this series, “Improving SAP Fiori Performance with Amazon CloudFront and AWS Global Accelerator – How to Guide”. In this setup, you can see that CloudFront performs TLS 1.3 offloading, which improves the security posture of SAP Fiori. This is because CloudFront can enforce the use of TLS 1.3, while SAP support for TLS 1.3 is not yet released.

Please note that in a real-life production deployment, to improve the security posture, we recommend you install SAP S/4HANA with embedded Fiori in a private subnet and set up an SAP Web Dispatcher in a public subnet.

Performance Test 

In this single-unit performance test, we use the Google Chrome browser because it can measure end-user performance and includes a debugging tool that is supported by SAP Fiori. The SAP Fiori user’s profile is assigned the Procurement Manager role in SAP Fiori and S/4HANA.

In the first scenario, we enable incognito mode when accessing the SAP Fiori Launchpad. This is to understand the improvement that can be achieved for users who have no cache available in the browser, either when accessing SAP Fiori for the first time or when their cache is invalidated due to UI5 transport changes or upgrades. The improvement via CloudFront was about ~49% compared to direct access to the Application Load Balancer in Northern Virginia, because of the accelerated performance of both the static and dynamic content of the Fiori Launchpad.

In the second scenario, we disable incognito mode when accessing the SAP Fiori Launchpad to understand the improvement that we can achieve on the dynamic content only (OData API calls). We log on to the Fiori Launchpad and select the “Managed Purchase Requisition” Fiori app with one purchase requisition displayed. We measure only the dynamic content acceleration over Global Accelerator. The improvement was about ~16% compared to direct access to the Application Load Balancer in Northern Virginia, because the AWS global network is faster than the public internet.

Please note that your experiences may vary depending on a variety of factors, such as the location of the end user and your SAP workload, the performance of the network between your end user and the nearest AWS point of presence, and multiple other factors. We encourage you to include your own performance testing, with support from AWS and/or AWS Partner Network (APN) partners, as needed. 

Before you roll out CloudFront and/or Global Accelerator to a wider set of users, we recommend executing automated performance tests from various user locations to determine the performance improvement that can be achieved in your scenario. You can use a real user monitoring tool or an automated software testing tool such as Load Runner for SAP. For more information on how to do performance testing, you can refer to this blog.
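As a lightweight starting point before a full test campaign, you can sample response timings from a client location with curl. This is a hedged sketch; the host name and Fiori Launchpad path are assumptions that you should replace with your own endpoint, and you can run the same command against the CloudFront, Global Accelerator, and Application Load Balancer endpoints to compare them:

# Print DNS, TLS, time-to-first-byte, and total times for a single request
curl -s -o /dev/null -w "dns=%{time_namelookup}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n" "https://fiori.example.com/sap/bc/ui2/flp"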

Conclusion

In the first test scenario, we reduced the load time of the SAP Fiori Launchpad with the use of CloudFront. This is because a number of cacheable objects are now available much closer to the end users, and the network route for the OData API calls is optimized by CloudFront. On top of that, the SAP Fiori Launchpad is now more secure with the TLS 1.3 offloading capability of CloudFront.

In the second test scenario, we reduced the OData API call load time with the use of Global Accelerator because of the optimized network route to the SAP S/4HANA servers. The performance advantage will be even greater with higher volumes of data being transferred to and from external systems such as SAP SuccessFactors, SAP Analytics Cloud, etc.

In short, CloudFront is recommended to accelerate the SAP Fiori Launchpad use case, while Global Accelerator is recommended to accelerate external system integration scenarios via OData API calls, such as data extraction for analytics and reporting.

In the subsequent blog, I will be sharing the configuration steps required for CloudFront and Global Accelerator for SAP Fiori. You can also find out more about SAP on AWS, CloudFront and Global Accelerator from the AWS product documentation.

 

 

Enable SAP Single Sign On with AWS SSO Part 1: Integrate SAP Netweaver ABAP with AWS SSO


Feed: AWS for SAP.
Author: Manoj Muthukrishnan.

In this blog series, we will learn how to integrate SAP Netweaver ABAP and SAP Netweaver Java with AWS Single Sign-On.

AWS Single Sign-On (SSO) is a cloud Single Sign On service that makes it easy to centrally manage SSO access to multiple AWS accounts and browser based business applications. With just a few clicks, you can enable a highly available SSO service without the upfront investment and ongoing maintenance costs of operating your own SSO infrastructure. With AWS SSO, you can easily manage SSO access and user permissions to all of your accounts in AWS Organizations centrally. AWS SSO also includes built-in SAML integrations with many business applications, such as SAP, Salesforce, Box, and Office 365. Further, by using the AWS SSO application configuration wizard, you can create Security Assertion Markup Language (SAML) 2.0 integrations and extend SSO access to any of your SAML-enabled applications. Your users simply sign in to a user portal with credentials they configure in AWS SSO or using their existing corporate credentials to access all their assigned accounts and applications from one place.

Prerequisites

You need the following for this walkthrough:

  • An organization created in AWS Organizations. (If you don’t already have an organization, one will be created automatically by AWS Single Sign-On.)
  • AWS Directory Service, provisioned either for Microsoft Active Directory or AD Connector. For more information about these services, please refer to the following resources:
    1. Getting Started with Managed Active Directory
    2. Active Directory Connector Admin Guide

Part 1: Enable SAML SSO for SAP Netweaver ABAP-based applications like Fiori with AWS SSO

In this blog, we will learn how to integrate any SAP ABAP browser-based application with AWS SSO to enable Single Sign-On. There are multiple use cases for SAP ABAP browser-based applications, and the following steps are the same for all of them. Some examples of SAP ABAP browser-based applications are as follows:

  1. SAP Fiori
  2. SAP Webgui
  3. SAP GRC Access Control webui with NWBC (ABAP)
  4. SAP Solution Manager work center (ABAP)
  5. SAP CRM webui (ABAP)
  6. SAP SRM (ABAP)
  7. SAP BW (ABAP)
  8. SAP NWBC (Netweaver Business Client)
  9. Any SAP ABAP Browser based application

Solution Overview

The integration between AWS SSO and any SAP ABAP browser-based application uses the industry-standard SAML 2.0. The steps to configure it are as follows. Note that SAP ABAP-based browser apps support only the Service Provider (SP) initiated flow.

The high-level steps are as follows.

Step 1: Logon to AWS Console and add the required SAP ABAP application in AWS SSO

Step 2: Logon to SAP and open tcode RZ10 and set the required parameters in DEFAULT profile

Step 3: Ensure https is active in SMICM

Step 4: Activate required services in SICF

Step 5: Enable SAML2

Step 6: Create SAML2 Local provider.

Step 7: Download SAML 2.0 local provider metadata from SAP ABAP

Step 8: Upload SAP ABAP SAML 2.0 metadata to AWS SSO

Step 9: Download AWS SSO metadata file

Step 10: Upload AWS SSO metadata file to SAP ABAP SAML 2.0 local provider.

Step 11: Enable SAP SAML Trusted provider

Step 12: Add application url in AWS SSO

Step 13: Add users from active directory to AWS SSO application

Step 14: Map email id in SAP SU01

Step 15: Test the SAP application by launching the url

Step 1: Logon to AWS Console and add the required SAP ABAP application in AWS SSO

  • Please logon to AWS SSO Console and launch AWS SSO
  • Select Manage SSO access to your cloud applications
  • Select Add New Application
  • Search for any SAP ABAP browser based app. In this example, we are adding SAP Fiori app.
  • Select SAP Fiori ABAP Application and then select “Add Application”
  • Click on View instructions to get the complete step-by-step procedure
  • Customize the app name to include details like System ID (SID) of the SAP instance in case you are using this for multiple SAP instances for identification purpose.

STEP 2: Logon to SAP and open tcode RZ10 and set the required parameters in DEFAULT profile

Logon to SAP and enter transaction code RZ10. Open the DEFAULT profile, click on extended maintenance, and add the following parameters. Activate the parameters and restart your SAP instance for them to take effect.

Parameter Name Parameter Value
login/create_sso2_ticket 2
login/accept_sso2_ticket 1
login/ticketcache_entries_max 1000
login/ticketcache_off 0
login/ticket_only_by_https 1
icf/set_HTTPonly_flag_on_cookies 0
icf/user_recheck 1
http/security_session_timeout 1800
http/security_context_cache_size 2500
rdisp/plugin_auto_logout 1800
rdisp/autothtime 60

STEP 3: Ensure https is active in SMICM

Go to SMICM and check whether HTTPS is active. If it is not active, set the following parameter in RZ10:

icm/server_port_2=PROT=HTTPS,PORT=44300,TIMEOUT=300,PROCTIMEOUT=7200

STEP 4: Activate required services in SICF

Activate the SAML2 and cdc_ext_service services in SICF.

Step 5: Enable SAML2

  • Goto Tcode SAML2
  • Note: When you launch SAML2, the host name is typically that of the application server from which it is launched. If you are using a message server or load balancer for HA, make sure that the URL is changed to match the message server or load balancer hostname. If there is a hostname or port mismatch, you will encounter issues with SSO; the key point is that the port and hostname have to match.
  • Select Enable SAML 2.0 support

Step 6: Create SAML2 Local provider.

  • Select “Create SAML 2.0 Local Provider”
  • Give the Local Provider a name and select Next
  • Under Miscellaneous Keep the value as default for Clock Skew Tolerance and click Next
  • Click Finish under Service Provider Settings

You have now successfully configured SAML 2.0 Local Provider

Step 7: Download SAML 2.0 local provider metadata from SAP ABAP

  • Select Local Provider and select Metadata
  • Click on Download Metadata to download the SAML 2.0 metadata. Make sure to select all three options: Service Provider, Application Service Provider, and Security Token Service.

You have now successfully downloaded the SAML 2.0 Local Provider metadata file

Step 8: Upload SAP ABAP SAML 2.0 metadata to AWS SSO

Please open the AWS SSO screen that you opened in Step 1. Click on Application SAML metadata file, click on Browse, and upload the SAP ABAP SAML 2.0 metadata file.

Step 9: Download AWS SSO metadata file

From the instructions guide page, select Copy for the AWS SSO metadata file, then open the copied URL in a separate browser session to download the AWS SSO metadata file.

Step 10: Upload AWS SSO metadata file to SAP ABAP SAML 2.0 local provider.

  • Go to SAP SAML 2.0 trusted provider to upload the metadata file downloaded from AWS SSO.
  • Select Trusted Providers. Click on Add -> Upload metadata file
  • Click on Browse and then Upload the Metadata file that was downloaded from AWS SSO under Metadata file
  • Add your custom Alias name under Provider Name and click on Next
  • You can change Digest algorithm to SHA-2 if required by your organization under Signature and Encryption and select Next.
  • For Single Sign-On Endpoints choose HTTP-POST as Default and then select Next.
  • For Single Log-Out Endpoints choose HTTP-Redirect and then select Next.
  • For Artifact endpoints keep default selection and select Next, then choose Finish

You have now successfully uploaded the AWS SSO Metadata file to SAP ABAP SAML 2.0 Local Provider

Step 11: Enable SAP SAML Trusted provider

  • Select trusted provider and Select Edit for Identity Providers.
  • Select Add in Supported NameID formats and select Unspecified under Identity Federation
  • Then under Details of NameID Format “Unspecified”, Next to User ID Mapping Mode, choose Email. Then Save and Enable under List of Trusted Providers.
  • Under SAML 2.0 Configuration select OK for popup “Are you sure you want to enable trusted provider”

Step 12: Add application URL in AWS SSO

  • Go back to the AWS SSO console page where you are configuring the Application.
  • Under Application Properties, enter the SAP Fiori ABAP URL in the Application start URL field
    • Note: Sometimes your AWS console session can time out because of inactivity. If it does, log in again and make sure to re-enter the necessary information. You will get a message that the configuration has been saved

Step 13: Add users from active directory to AWS SSO application

  • Click on Applications and Select the SAP Fiori Application that was added just now.
  • Click on Assigned users and click on Assign users to select the users from Active directory
  • Select the users and then select Assign users

Step 14: Map email id in SAP SU01

Go to SU01 in SAP and map the email ID from Active Directory to the SAP user that was created

Step 15: Test the SAP application by launching the url

You should now be able to log on successfully using your AD credentials

Conclusion: It is very easy to configure Single Sign-On to simplify operations and improve the SAP end-user experience. You can use AWS SSO for any enterprise application that supports SAML 2.0. AWS SSO is free to use. If you integrate it with managed AD or AD Connector, you pay for Managed Microsoft AD on AWS or AD Connector based on your use case, as per the pricing linked below.

AWS Directory Services Pricing

AWS Other Directory Services Pricing

You can use AWS SSO only for browser-based applications that support SAML 2.0, not for SAP GUI, which needs Kerberos. You can enable MFA for AWS SSO as per the following guide:

AWS SSO MFA

In part 2 of this blog, we will cover how to enable SAML SSO with AWS Single Sign On for SAP NetWeaver Java.

Enable SAP Single Sign On with AWS SSO Part 2: Integrate SAP Netweaver Java


Feed: AWS for SAP.
Author: Manoj Muthukrishnan.

In part 1 of this blog, we covered how to configure AWS Single Sign On Integration for SAP ABAP.

Enable Single Sign On for SAP Netweaver Java Applications with AWS SSO

In this blog, we will learn about how to integrate any SAP Netweaver Java Application with AWS Single Sign On.

AWS Single Sign-On (SSO) is a cloud Single Sign On service that makes it easy to centrally manage SSO access to multiple AWS accounts and browser based business applications. With just a few clicks, you can enable a highly available SSO service without the upfront investment and on-going maintenance costs of operating your own SSO infrastructure. With AWS SSO, you can easily manage SSO access and user permissions to all of your accounts in AWS Organizations centrally. AWS SSO also includes built-in SAML integrations to many business applications, such as SAP, Salesforce, Box, and Office 365. Further, by using the AWS SSO application configuration wizard, you can create Security Assertion Markup Language (SAML) 2.0 integrations and extend SSO access to any of your SAML-enabled applications. Your users simply sign in to a user portal with credentials they configure in AWS SSO or using their existing corporate credentials to access all their assigned accounts and applications from one place.

Prerequisites

You need the following for this walkthrough:

  • An organization created in AWS Organizations. (If you don’t already have an organization, one will be created automatically by AWS Single Sign-On.)
  • AWS Directory Service, provisioned either for Microsoft Active Directory or AD Connector. For more information about these services, please refer to the following resources:

Step 1: Logon to AWS Console and launch AWS SSO. Add SAP Netweaver Java application

  • Logon to AWS Console and launch AWS Single Sign On
  • Select on Manage SSO access to your cloud applications
  • Select Add a new application
  • Search for SAP Enterprise portal Java or any other SAP Netweaver Java application.
  • Select Add New Application for your SAP Enterprise Portal Java or any Java Application
  • Provide a unique description. In this example, we are giving a Display Name as “SAP Enterprise Portal Java Development System”. Provide application description to describe the Application being added
  • Select View instructions to get detailed step-by-step procedure

Step 2: Enable SAML 2.0 Local Provider in SAP Netweaver administrator

  • Logon to SAP Netweaver Java Administrator Console
  • Logon as Administrator in Netweaver Administrator
  • Select Configuration
  • Select on Security under Configuration
  • Select Authentication and Single Sign On under Configuration
  • Select SAML 2.0 and select Enable SAML 2.0 support

Step 3: Download AWS SSO metadata file

  • Provide your custom provider name under SAML 2.0 and select Next. In my example, I’m calling the Local Provider Name as AWSSSO
  • Under General Settings in Signing Key Pair Select Browse for Keystore View SAML2
  • Select Create under Select Keystore Entry
  • Under New Entry and under Entry Settings, enter the following details—
    • Entry Name Your custom entry Name
    • Algorithm RSA
    • Key Length 2048
    • Valid from Date
    • Valid to Date
  • Make sure to select store certificate and then select Next
  • Enter details for Keystorage New Entry under Subject Properties—
    • Country Name
    • State or Province Name
    • Organization Name
    • Locality Name
    • Organization Unit Name
    • Common Name
  • Then select Next for Sign with Key Pair (leave as default), and then select Finish under Summary
  • Make sure you have Signing Key Pair, Encryption Key Pair and select both “Include Certificate in Signature” and “Sign Metadata” and then Select Next
  • Now under SAML 2.0 -> SAML 2.0 Local Provider Configuration Select Finish
  • Now under SAML 2.0, select Download Metadata

Step 4: Download AWS SSO metadata file

  • Go to instruction page that was opened previous step and select on download AWS SSO metadata file
  • Now copy the url in a browser to download the AWS SSO metadata file

Step 5 Upload AWS SSO metadata file to SAP

  • Now in SAP SAML 2.0 select Trusted provider
  • Select Add under SAML 2.0 and then select Upload metadata file
  • Upload the AWS SSO metadata file downloaded in previous step and select Next

Step 6 Upload AWS SSO certificate

  • Under New Trusted Identity Provider -> Provider Name enter your Alias name for Identity Provider
  • Select Encryption certificate and click browse
  • Download the certificate from instructions guide
  • Now upload the certificate downloaded under Encryption Certificate and then Select Next
  • For Single Sign-On Endpoints select HTTP-POST, then Select Next.
  • For Single Log-Out Endpoints choose HTTP-Redirect, then Select Next
  • For Artifact Endpoints Select Next.
  • For Manage Name ID Endpoints, Select Next.
  • For Authentication Contexts Settings, choose Finish.

Step 7: Set name id to unspecified.

  • Click on the Trusted Providers tab in SAML 2.0 and select Edit. Next, choose the Identity Federation tab, then select Add.
  • For the Format Name, Select Unspecified, then Select OK.
  • Select Unspecified, then under Details of NameID Format Unspecified. For User ID Mapping Mode, select Email. Select Save.

Step 8 Enable Trusted provider for AWS SSO

  • Click on Save and Choose Enable.

Step 9 Configure Authentication stack as per OSS note 2273981

  • Go back to the Configurationtab, choose Authentication and Single Sign-On, and then choose the Components tab.
  • Select Add; for the Configuration Name enter, for example, AWSSSO, and for Type choose Custom.
  • For the Login Modules, enter the following values, then choose Save.
    • EvaluateTicketLoginModule: Sufficient
    • SAML2LoginModule: Optional
    • CreateTicketLoginModule: Sufficient
    • BasicPasswordLoginModule: Requisite
    • CreateTicketLoginModule: Requisite
  • From the Components page, under Policy Configuration Name, edit the ticket configuration. Then assign the custom configuration (for example, AWSSSO) created in the previous step to the ticket. Choose Save
  • Note: You have now changed the ticket policy configuration to use this custom authentication stack. For a specific Netweaver Java application, change the corresponding application’s policy configuration to use this authentication stack

Step 10: Change the application start URL in AWS SSO

  • Go back to the AWS SSO console page where you are configuring the Application.
  • Under Application Properties, enter the SAP Netweaver AS Java URL in the Application start URL field:

Step 11: Upload the SAP Netweaver Java local provider SAML metadata file in AWS SSO

  • Under Application metadata, choose Browse and select the Metadata downloaded in Previous Step

Step 12: Under Applications, assign a user to the application in AWS SSO. Assign the required AD user.

Step 13: Map the Active Directory email ID in SAP NWA

  • Go to SAP NWA and go to Configuration -> Identity Provider
  • Create or modify user to map email id from Active directory

Step 14: Test the application to check SSO

  • Enter the application url of SAP Netweaver Java to test for SSO

Conclusion: It is very easy to configure Single Sign-On to simplify operations and improve the SAP end-user experience. You can use AWS SSO for any enterprise application that supports SAML 2.0. AWS SSO is free to use. If you integrate it with managed AD or AD Connector through AWS Directory Service, you pay for Managed Microsoft AD on AWS or AD Connector based on your use case, as per the AWS Directory Service pricing.

AWS Directory Services

AWS Other Directory Services

You can use AWS SSO only for browser-based applications that support SAML 2.0, not for SAP GUI, which needs Kerberos. You can enable MFA for AWS SSO as per the following guide:

Steps to Enable MFA for AWS SSO

To learn more about why 5,000+ customers run SAP on AWS, visit aws.amazon.com/sap

Enable Single Sign On for SAP Cloud Platform Foundry and SAP Cloud Platform Neo with AWS SSO


Feed: AWS for SAP.
Author: Manoj Muthukrishnan.

In this blog, we will learn how to integrate SAP Cloud Platform Cloud Foundry and SAP Cloud Platform Neo with AWS Single Sign-On to enable Single Sign-On.

AWS Single Sign-On (SSO) is a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and browser based business applications. With just a few clicks, you can enable a highly available SSO service without the upfront investment and on-going maintenance costs of operating your own SSO infrastructure. With AWS SSO, you can easily manage SSO access and user permissions to all of your accounts in AWS Organizations centrally. AWS SSO also includes built-in SAML integrations to many business applications, such as Salesforce, Box, and Office 365. Further, by using the AWS SSO application configuration wizard, you can create Security Assertion Markup Language (SAML) 2.0 integrations and extend SSO access to any of your SAML-enabled applications. Your users simply sign in to a user portal with credentials they configure in AWS SSO or using their existing corporate credentials to access all their assigned accounts and applications from one place.

Prerequisites

You need the following for this walkthrough:

  • An organization created in AWS Organizations. (If you don’t already have an organization, one will be created automatically by AWS Single Sign-On.)
  • AWS Directory Service, provisioned either for Microsoft Active Directory or AD Connector. For more information about these services, please refer to the following resources:

Part 1: Enable SAML SSO for SAP Cloud Platform Cloud Foundry using AWS SSO

Solution Overview

The integration between AWS SSO and SAP Cloud Platform Cloud Foundry uses the industry-standard SAML 2.0. The steps to configure it are as follows.

Step 1: Logon to the AWS Console and launch AWS SSO. Add SAP Cloud Platform Cloud Foundry

  • Logon to AWS Console and launch AWS SSO
  • Select Manage SSO access to your cloud applications
  • Select Add a New Application
  • Search for SAP Cloud Platform CF for SAP Cloud Platform Cloud Foundry
  • Click on Add application
  • Provide a unique description
  • Click on View instructions to get the detailed step-by-step procedure

Step 2 Set up trust in SAP Cloud Platform with AWS SSO

  • Login to SAP Cloud Platform Cockpit as an Administrator.
  • Choose Cloud Foundry.
  • Click on Subaccounts, then choose your account.
  • Click on Security, then choose Trust Configuration.
  • Click on New Trust Configuration.
  • Download the AWS SSO metadata file and import it into SAP Cloud Platform by clicking on Upload. Then choose Parse
    • Metadata File: https://portal.sso.us-east-1.amazonaws.com/saml/metadata/MDM0NTQ0MDI0NTI4X2lucy01ODBhYjc3ZmRjMGExYmM5
  • Insert these values, then click on Save:
    • Name: AWS SSO
    • Description: AWS SSO
    • Status: Active
    • Show SAML Login Link on Login Page: Checked
    • Link Text: AWS SSO
    • Create Shadow Users During Login: Checked
  • Get the tenant name and region in your SAP Cloud Platform Cloud Foundry account.
  • Go back to the AWS SSO console page where you are configuring the Application.
  • Under Application metadata, choose Browse and select the Metadata downloaded in previous step
  • Choose Save Changes.
  • Assign a user to the application in AWS SSO.

Verification

Use the following sections to verify the SSO integration.

Note

Ensure that the user performing the verification is logged out of both AWS SSO and the application before performing the steps in each section.

Verifying Service Provider Initiated SSO from SAP Cloud Platform Cloud Foundry

  • Access the SAP Cloud Platform Cloud Foundry Application URL.
  • On the AWS SSO user portal, type the credentials of a user assigned to the application in the AWS SSO user portal.
  • Choose Sign In.
  • On the SAP Cloud Platform Cloud Foundry Application home page, verify that both the SAP Cloud Platform Cloud Foundry application and AWS SSO are logged in with the same user.

Part 2: Enable SSO for SAP Cloud Platform Neo with AWS SSO

Solution Overview

The integration between AWS SSO and SAP Cloud Platform Neo uses the industry-standard SAML 2.0. The steps to configure it are as follows.

Step 1: Logon to the AWS Console and launch AWS SSO. Add SAP Cloud Platform Neo

  • Logon to AWS Console and launch AWS SSO
  • Select Manage SSO access to your cloud applications
  • Select Add a New application
  • Search for SAP Cloud Platform Neo
  • Select Add application
  • Provide a unique description
  • Click on view instructions to get detailed step-by-step procedure

Step 2: Set up trust in SAP Cloud Platform Neo with AWS SSO

  • Log in to the SAP Cloud Platform Neo cockpit as an Administrator.
    • Select Security, then choose Trust.
    • Click the Local Service Provider tab, then click Edit.
    • Insert these values:
      • Configuration Type: Custom
      • Principal Propagation: Disabled
      • Force Authentication: Disabled
        • You can set Principal Propagation to Enabled if you intend to configure principal propagation.
  • Click Generate Key Pair, then choose Save.
  • Click Get Metadata to download the SAP Cloud Platform metadata file.
  • Click the Application Identity Provider tab, then choose Add Trusted Identity Provider.
  • Download the AWS SSO metadata file from the URL https://portal.sso.us-east-1.amazonaws.com/saml/metadata/MDM0NTQ0MDI0NTI4X2lucy00ZmU2NDEwZjcwMTUzODM3 and upload it to Metadata File by choosing Browse.
  • For Assertion Consumer Service, choose default.
  • Click Save.
  • Click the Attributes tab and, under Assertion-Based Attributes, insert the following values, then choose Save.
    • Assertion Attribute: mail, Principal Attribute: Email
    • Assertion Attribute: first_name, Principal Attribute: firstname
    • Assertion Attribute: last_name, Principal Attribute: lastname
  • In the SAP Cloud Platform Neo console, click on Security, then choose Authorizations.
  • To add users, enter the email address in the User field and then assign the subaccount, application and role for the selected user.

Step 3: Complete setup in AWS SSO

  • Go back to the AWS SSO console page where you are configuring the Application.
  • Under Application metadata, choose Browse and select the metadata file downloaded from SAP Cloud Platform Neo.
  • Choose Save Changes.
  • Assign a user to the application in AWS SSO.

Step 4: Verification

Verifying Service Provider Initiated SSO from SAP Cloud Platform Neo

  1. Access the SAP Cloud Platform Neo Application URL.
  2. On the AWS SSO user portal, enter the credentials of a user assigned to the application.
  3. Choose Sign In.
  4. On the SAP Cloud Platform Neo application home page, verify that both the SAP Cloud Platform Neo application and AWS SSO are logged in with the same user.

Conclusion: Configuring single sign-on is straightforward, and it simplifies operations while improving the SAP end-user experience. You can use AWS SSO with any enterprise application that supports SAML 2.0. AWS SSO is free to use; if you integrate it with AWS Managed Microsoft AD or AD Connector, you pay for those directory services based on your use case, as per the pricing linked below.

AWS Directory Services Pricing

Other Directory Services Pricing.

Note that AWS SSO can be used only for browser-based applications that support SAML 2.0. You can enable MFA using the following documentation.

AWS SSO MFA

Simplify SAP Jobs Scheduling using AWS Native Tooling


Feed: AWS for SAP.
Author: Glenn Mendonca.

Scheduling jobs in SAP is one of the routine operational tasks for customers running SAP workloads. SAP customers often use transaction SM37 to run batch jobs within their SAP systems. When there are requirements for complex job scheduling and dependencies across multiple SAP systems, customers typically use a third-party batch scheduling tool. It becomes a significant challenge when that tool goes down or is unavailable for any reason and the jobs do not run. The impact of missed jobs is sometimes isolated to one system; however, if multiple jobs have not run, recovery can be expensive and time-consuming.

This blog will take you through how SAP batch jobs can be scheduled and triggered with AWS native services, using a job scheduling solution called "Simple Scheduler".

This blog assumes that you are familiar with SAP job scheduling and its concepts. The prerequisite is that the SAP system runs on AWS or that a direct network path is available to connect to the SAP systems.

AWS SAP Professional Services has developed a cloud-native job scheduling solution called "Simple Scheduler" to simplify the routine operational task of scheduling jobs without maintaining additional infrastructure and software for managing those SAP jobs. Customers can use the "Simple Scheduler" solution to schedule SAP jobs using AWS serverless services. Simple Scheduler executes the jobs at the defined intervals and sends out notifications upon completion or failure of the jobs.

Simple Scheduler is developed using Amazon DynamoDB, Amazon S3, Amazon API Gateway, and AWS Step Functions. The majority of the AWS services used in this solution are serverless, which means there is no infrastructure to maintain. Another feature of Simple Scheduler is managing dependent batch jobs that need to be executed in sequence, for example, running Job 2 once Job 1 completes.

Simple Scheduler Architecture Diagram

  1. The job is scheduled via a front end developed by AWS Professional Services, which is hosted in an Amazon S3 bucket.
  2. The related job parameters and the SAP system information are entered in the front end and stored in Amazon DynamoDB.
  3. A time-based Amazon CloudWatch Event Rule triggers the job at the scheduled time.
  4. The orchestration of the jobs is performed using AWS Step Functions.
  5. The Amazon CloudWatch Event triggers an AWS Step Function, which invokes a sequence of AWS Lambda functions to read the job parameters from Amazon DynamoDB.
    • The tool gets the credentials from AWS Secrets Manager and then calls an AWS Lambda function, which connects to SAP and triggers the job in SM37 with the configured variant.
    • The AWS Lambda connects to SAP using node-RFC and starts the job.
  6. Some jobs might have a dependency wherein Job X needs to be executed first, followed by Job Y. The flow of these conditions is maintained in AWS Step Functions.
  7. The dependency/sequencing of jobs is defined using the input parameters in the front end and stored in DynamoDB. When the time-based event is triggered, the Lambda function reads the job definitions and starts the Step Function, providing it with the sequence included (see the sketch after this list).
  8. On successful completion or failure of the job, the user will be notified via email using Amazon SNS (if required).
  9. The Simple Scheduler can orchestrate the jobs between different SAP systems such as S/4HANA, ECC, SCM, and BW.
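
As a concrete illustration of steps 5 to 7, here is a minimal sketch of what the trigger Lambda function could look like. It is illustrative only and not the actual Simple Scheduler code; the table name, environment variable, and event key are assumptions made for the example.

import json
import os

import boto3

dynamodb = boto3.resource('dynamodb')
stepfunctions = boto3.client('stepfunctions')

JOBS_TABLE = os.environ.get('JOBS_TABLE', 'sap-job-definitions')   # assumed table name
STATE_MACHINE_ARN = os.environ['STATE_MACHINE_ARN']                # assumed environment variable

def lambda_handler(event, context):
    """Triggered by the time-based CloudWatch Event; reads the job definition
    and starts the Step Functions execution that runs the SAP job(s)."""
    job_id = event['job_id']   # assumed key passed by the CloudWatch Event rule

    # Read the job parameters (SAP system, report, variant, dependent jobs) from DynamoDB.
    table = dynamodb.Table(JOBS_TABLE)
    job = table.get_item(Key={'job_id': job_id})['Item']

    # Hand the full definition, including the dependency sequence, to Step Functions.
    response = stepfunctions.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        name=f"{job_id}-{context.aws_request_id}",
        input=json.dumps(job, default=str)
    )
    return {'executionArn': response['executionArn']}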

Solution in detail

  1. As the first step, the user enters the job details as shown in the screenshot below.
    Simple Scheduler Input Screen

  2. The screenshots below show the end state of the solution using AWS Step Functions for batch jobs.
      1. Here is an example of a Step Function job without dependencies
        AWS Step Function

    For the Simple Scheduler, we have developed a custom AWS Lambda function that connects to SAP and executes BAPIs (Business Application Programming Interfaces), which start the job within SAP (the Execute SAP Job step in the screenshot above). Another Lambda function then checks the status of the job. When the SAP job has completed and there are no dependent jobs, the Step Function reaches its end state. If there are dependent jobs, the Step Function runs another iteration of the "Execute SAP Job" step until all dependent jobs are processed.

    2. Below is an example of a job with dependencies
      AWS Step Function

Simple Scheduler is built using AWS services including AWS Step Functions, AWS Lambda, and Amazon DynamoDB. The AWS Free Tier includes 4,000 AWS Step Functions state transitions and 1 million AWS Lambda requests free per month.

For cost estimate purposes, let's assume the following:

  • Number of jobs scheduled and executed: 1,000 per month
  • Region: us-east-1 (North Virginia)
  • Each job runs for 30 minutes
  • Job completion status is evaluated every 300 seconds

AWS Step Functions

  • Number of jobs: 1,000
  • State transitions (ST): 32 × 1,000 = 32,000
  • Estimated cost: 32,000 × $0.000025 = $0.80 USD

AWS Lambda

  • Number of jobs per month: 1,000
  • Lambda functions per job: 3
  • Functions executed for 1,000 jobs (R): 3 × 1,000 = 3,000
  • Memory allocated per call (M): 128 MB
  • Estimated duration per call (D): 10,000 ms
  • Total compute (S): 1,000 × 10,000 = 10,000,000
  • Total compute (GB-s) (G): S × M / 1,024 = 1,250,000 GB-s
  • Monthly compute charges: G × $0.00001667 = $20.8375 USD

Amazon DynamoDB

Based on the assumptions above, total monthly cost is an estimated $22.50, making your annual cost $270. Please refer to the following pricing details for each of the services discussed above as AWS routinely implements price reductions:

AWS Lambda Pricing
AWS Step Functions Pricing
Amazon DynamoDB Pricing

This offering serves as a turnkey solution in the form of native infrastructure as code and as an accelerator to build highly customized solutions for customer-specific requirements.

With Simple Scheduler, you only pay for the services that are used to schedule the jobs, helping to reduce operational licensing and infrastructure costs of third-party tools. You can run SAP batch jobs using an AWS cloud-native solution without a dependency on a third-party job scheduling tool.

Note: This is not an SAP-certified solution but does demonstrate how SAP batch jobs can be scheduled using AWS cloud-native tools.

The solution can be customized to orchestrate and extend to non-SAP jobs as well, which can be very useful as a single, inexpensive tool to operate jobs across your environment. If you would like a better understanding of how the Simple Scheduler solution works, please connect with us via this link.

DevOps for SAP – Driving Innovation and Lowering Costs


Feed: AWS for SAP.
Author: Ajay Kande.

DevOps is the combination of cultural philosophies, practices, and tools that increase an organization’s ability to deliver applications and services at high velocity. This includes evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.

In this blog, we are going to discuss how DevOps for SAP brings innovation and automation for customers. We will explore the pillars of how SAP on AWS customers are achieving cost savings by leveraging these capabilities and innovating to meet business demands. We will then dive into the operational automation AWS provides with regards to starting/stopping, auto scaling, serverless refreshes, and automated patching.

SAP Build Automation:

Infrastructure as Code (IaC) for SAP – Automated, Consistent, and Repeatable SAP Deployments & SAP Operations

The first pillar of DevOps for SAP on AWS is infrastructure as code. This practice enables customers to provision and manage their SAP infrastructure using code and software development techniques: version control (VCS), continuous integration, and continuous deployment (CI/CD). These practices let teams deploy systems more quickly and with greater visibility.

So, you have decided that you want to describe and control your infrastructure as code. What language should you choose and why? Well, you have several options depending on your team’s background, skillset, or existing capabilities.

AWS CloudFormation and the AWS Cloud Development Kit (CDK) are AWS IaC tools that you can use to automate the provisioning of AWS resources. If your SAP Basis teams do not have an established DevOps practice and are looking for a guided experience, AWS Launch Wizard is the tool to use. It helps you right-size and configure AWS resources based on your SAP application requirements, then automates installation and configuration of the operating system and applications, all in accordance with AWS and SAP best practices. AWS Launch Wizard was built based on the popularity of our SAP Quick Starts, which are also still available.

If you have more advanced DevOps and cloud practices and are looking for additional customization capabilities, you should consider tools such as AWS CloudFormation or HashiCorp's Terraform. AWS Professional Services has developed a set of Terraform modules that customers can adapt to their own needs to deploy the AWS resources for their SAP workloads. Customers can use our Terraform modules natively for deploying highly configurable SAP products on AWS.
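
To make this pillar concrete, here is a minimal AWS CDK (Python) sketch of what describing an SAP building block as code can look like. It is illustrative only, not the Launch Wizard or Terraform modules mentioned above; the instance size, AMI, volume size, and tag values are assumptions, and a real SAP deployment would use a supported SLES/RHEL image and proper sizing.

from aws_cdk import App, Stack, Tags, aws_ec2 as ec2
from constructs import Construct

class SapAppServerStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Network for the SAP landscape (illustrative; an existing VPC could be imported instead).
        vpc = ec2.Vpc(self, "SapVpc", max_azs=2)

        # A single application server with an extra EBS volume, e.g. for /usr/sap.
        instance = ec2.Instance(
            self, "SapAppServer",
            vpc=vpc,
            instance_type=ec2.InstanceType("r5.2xlarge"),
            machine_image=ec2.MachineImage.latest_amazon_linux(
                generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2
            ),
            block_devices=[
                ec2.BlockDevice(
                    device_name="/dev/sdf",
                    volume=ec2.BlockDeviceVolume.ebs(100),
                )
            ],
        )

        # Tags drive downstream automation such as start/stop and patching.
        Tags.of(instance).add("sid", "AK1")
        Tags.of(instance).add("Role", "APP")

app = App()
SapAppServerStack(app, "SapAppServerStack")
app.synth()

Running cdk deploy then provisions or updates the stack repeatably, and the same definition can be promoted through a CI/CD pipeline.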

Configuration Management (CM) – Process for maintaining computer systems, servers, and software in a desired, consistent state

The second pillar of DevOps for SAP on AWS answers the question of how you define and control drift on the underlying servers that SAP requires. It also addresses the workload-specific configuration that you just deployed. CM enables customers to prepare, install, and keep SAP applications and databases in the required operational mode after the infrastructure has been deployed.

The attention to detail that CM enables allows customers to tackle the unique operating-system-specific settings for HANA, the SAP ulimit configurations, Linux package management, and the endless list of ever-changing security requirements. With CM in place, customers are able to maintain higher SLAs by ensuring system performance meets expectations as changes are made over time.

In short, CM is a way to keep your servers from drifting from your standards. Those same standards that you know deliver the right performance profile for your workload.

So, you are ready to jump into CM as a way to add that additional layer of DevOps capability on top of IaC. What tools should you use? SAP on AWS customers are leveraging configuration management tools such as AWS Systems Manager, Ansible, Chef, and Puppet to maintain these configurations.
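
As a minimal sketch of this idea (not the ProServe tooling itself), the example below uses AWS Systems Manager Run Command to re-apply one operating system setting commonly recommended for SAP HANA hosts; the tag convention and the chosen sysctl parameter are assumptions for the example.

import boto3

ssm = boto3.client('ssm')

response = ssm.send_command(
    Targets=[{'Key': 'tag:Role', 'Values': ['DB']}],   # assumed tagging convention for HANA hosts
    DocumentName='AWS-RunShellScript',
    Comment='Re-apply an example kernel setting recommended for SAP HANA hosts',
    Parameters={
        'commands': [
            # A full CM solution would detect drift first and report on it;
            # here we simply enforce the desired value.
            'sysctl -w net.ipv4.tcp_slow_start_after_idle=0'
        ]
    }
)
print(response['Command']['CommandId'])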

Just to give you an example, the AWS SAP Professional Services team is helping customers implement these tools and is seeing up to a 90% time saving when provisioning a new SAP environment on AWS compared to on-premises or manual processes. At CHS, SAP server provisioning including the application install was reduced from multiple days to less than 2 hours. At Phillips 66, AWS SAP Professional Services helped decrease the time to build an SAP system from two weeks to twenty minutes by adopting an IaC approach; Phillips 66 can also spin up new SAP environments using IaC via their existing ITSM solution, ServiceNow, within minutes.

SAP Operations Automation:

Following the DevOps mantra, we don’t lose sight of continuous innovation and the pursuit of automating all things SAP. AWS SAP Professional Services has developed solutions and tools that customers can use to further their operational efficiencies and continue to lower their TCO for SAP. This serves as the third pillar in our foundation for DevOps for SAP on AWS.

SAP Start/Stop Automation

Spending hours and hours starting and stopping SAP systems for maintenance windows, profile parameter changes, and other required activities during late nights or weekends? AWS Professional Services' SAP Start/Stop Automation is a consistent, controlled process that automates the start and stop of SAP with significantly less human intervention.

A normal start or stop of an SAP system, including its EC2 instances, typically takes 10-15 minutes for any planned maintenance activity, and the human intervention involved makes it even more time-consuming.

Now assume there are hundreds of SAP systems spread across multiple accounts to start and stop. Doing this manually would take countless hours and your entire Basis team to complete. To address this, AWS Professional Services' SAP Start/Stop Automation solution identifies EC2 instances using tags and starts/stops the SAP components installed (ASCS, SCS, ERS, APP, DB, DAA) in a sequenced manner, including the EC2 instances. This automation can also be enhanced easily for OS patching, AWS CLI updates, SAP kernel updates, and any maintenance that requires an interruption of SAP service.
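
A minimal sketch of the tag-based discovery described above is shown below; the tag keys ("sid" and "Role") and values are assumptions that mirror the convention used later in this series, and the actual automation adds the role-aware start/stop sequencing on top.

import boto3

ec2 = boto3.client('ec2')

def find_sap_instances(sid):
    """Return the EC2 instances that belong to one SAP system, with their SAP role tag."""
    instances = []
    paginator = ec2.get_paginator('describe_instances')
    for page in paginator.paginate(
        Filters=[
            {'Name': 'tag:sid', 'Values': [sid]},
            {'Name': 'instance-state-name', 'Values': ['running', 'stopped']}
        ]
    ):
        for reservation in page['Reservations']:
            for instance in reservation['Instances']:
                tags = {t['Key']: t['Value'] for t in instance.get('Tags', [])}
                instances.append((tags.get('Role'), instance['InstanceId']))
    # A real stop sequence would order the roles, e.g. APP/DAA -> ASCS/ERS -> DB.
    return instances

print(find_sap_instances('AK1'))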

CHS, continuing their DevOps for SAP on AWS journey, leveraged this automation to shut down their non-production SAP systems after business hours, all with no human intervention. CHS quickly saw meaningful cost savings by keeping non-production systems shut down during non-business hours.

After seeing this capability, CHS extended the automation to include patching the operating systems that SAP relies on. What originally took 6-8 hours now takes ~1 hour, and the automation patched ~150 SAP EC2 instances across 6 separate accounts. With a similar automation at Phillips 66, system resizing (vertical scaling) took minutes, cutting processing time by as much as 12x and resulting in cost savings and improved SLAs.

SAP Autoscaling Application Servers

This solution enables customers to automatically detect SAP application server consumption based on SAP-specific workload metrics and adjust application server capacity. It can adapt to spikes and dips for concurrent user logins, month-end close, payment runs, and a variety of both predictable and unpredictable workloads.
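
A minimal sketch of how such an SAP-specific metric could be fed into Amazon CloudWatch is shown below; the namespace, metric name, and the way the value is obtained are assumptions, and in practice the value would be read from SAP (for example via RFC) rather than passed in as a constant.

import boto3

cloudwatch = boto3.client('cloudwatch')

def publish_free_work_processes(sid, free_dialog_wp):
    """Publish the number of free dialog work processes so a scaling policy can react to it."""
    cloudwatch.put_metric_data(
        Namespace='SAP/ApplicationServers',   # assumed custom namespace
        MetricData=[
            {
                'MetricName': 'FreeDialogWorkProcesses',
                'Dimensions': [{'Name': 'SID', 'Value': sid}],
                'Value': free_dialog_wp,
                'Unit': 'Count'
            }
        ]
    )

publish_free_work_processes('AK1', 12)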

With this solution, customers running multiple application servers can reduce their Reserved Instance purchases to the minimal footprint needed for normal business operations and pay for additional EC2 instances on demand only when required, saving money. Customers also see improved resiliency, availability, and uptime, which enables them to offer a higher SLA.

More details on this solution can be found here.

AWS SAP Serverless Refresh

If you are familiar with SAP, you know the time and effort it takes to refresh an SAP system to support testing and production operations. You also know teams are constantly looking at ways to reduce the time and effort it takes. AWS Professional Services has developed a solution based on this customer feedback.

The AWS SAP Serverless Refresh is a solution that consists of serverless AWS services that collectively perform the system refresh process. Customers can use it to overwrite an existing SAP system with the latest data from another system while preserving its configuration, with minimal downtime and significantly less time and manual effort. Today, this solution supports SAP systems based on the HANA database.

For context, the AWS SAP Serverless Refresh is helping customers reduce the SAP refresh process from 2-3 weeks to less than 1 day, with downtime of fewer than 30 minutes. At Zalora, SAP system refresh time was reduced from 5 days to under 2 days, and refresh quality improved through the solution's consistent and automated mechanisms. This allows business features and testing cycles to iterate within a shorter time window and enables business units to see value in production sooner compared to a traditional refresh process. This translates to more efficiency, faster time to market for new features, and value realized sooner by the business.

Automated HANA DB patching

Anyone who has worked with the SAP HANA database lifecycle management tool, or "hdblcm", knows it is a very robust and efficient way to patch your HANA database. So why not take advantage of this package and scale it across your landscape?

AWS SAP Professional Services created a tool for just that. It enables teams to patch HANA databases in an automated, consistent, and controlled process with significantly less human intervention. Similar to the SAP start/stop automation solution mentioned above, this automation document enables SAP Basis administrators to patch any number of HANA databases in parallel without human intervention. Customers using this automation have patched their entire HANA landscape (50+ HANA databases) in less than 1 hour. The time saved frees up your Basis team to focus on delivering business value, helping you lower your TCO.
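
A minimal sketch of the scaling idea is shown below, using Systems Manager Run Command to invoke hdblcm in batch mode on tagged HANA hosts in parallel; the tag filter, media path, and concurrency limits are assumptions, and the actual automation document adds far more validation, sequencing, and error handling.

import boto3

ssm = boto3.client('ssm')

response = ssm.send_command(
    Targets=[{'Key': 'tag:Role', 'Values': ['DB']}],   # assumed tag identifying HANA hosts
    DocumentName='AWS-RunShellScript',
    Comment='Patch SAP HANA with hdblcm in batch mode',
    Parameters={
        'commands': [
            # Assumes the new HANA revision has already been staged under /hana/media.
            'cd /hana/media/SAP_HANA_DATABASE && ./hdblcm --action=update --batch'
        ]
    },
    MaxConcurrency='10',   # patch up to 10 databases in parallel
    MaxErrors='1'
)
print(response['Command']['CommandId'])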

Conclusion

In this blog, we covered DevOps for SAP through three major pillars: Infrastructure as Code, Configuration Management, and Operations Automation. We explored how SAP on AWS customers are leveraging these capabilities to reduce costs, innovate faster, and accomplish more in less time.

If you are looking for expert guidance and project support as you move your SAP systems to a DevOps model, the AWS Professional Services Global SAP Specialty Practice can help. Increasingly, SAP on AWS customers—including CHS and Phillips 66—are investing in engagements with our team to accelerate their SAP transformation. If you are interested in learning more about how we may be able to help, please contact us here.


How to setup SAP Netweaver on Windows MSCS for SAP ASCS/ERS on AWS using Amazon FSx


Feed: AWS for SAP.
Author: Otavio Nunes.

Organizations are migrating business-critical applications like SAP to AWS. SAP systems run mission- and business-critical workloads for most companies around the world, and SAP high availability (HA) is one of the top priorities for companies when it comes to their SAP systems. As part of the Service Level Management process, it is critical to clearly understand the high availability requirements and implement the right strategy.

As part of AWS Professional Services, customers often ask us for help with their SAP HA setup for a wide range of SAP systems, including Windows-based SAP systems. In this blog, we will describe how to set up the SAP ABAP Central Services (ASCS) and Enqueue Replication Server (ERS) using Microsoft Windows Server Failover Clustering (WSFC) and an Amazon FSx file system.

SAP requires a shared file system in the Windows cluster configuration. Amazon FSx will be used as that shared file system.

About Amazon FSx:

Amazon FSx for Windows File Server provides fully managed, Windows-native file systems. They provide cost-efficient capacity with high levels of reliability and integrate with a broad portfolio of AWS services to enable faster innovation.

Architecture Considerations:

Availability Zones: using multiple Availability Zones (AZs) allows placing independent infrastructure in physically separate locations. A Multi-AZ deployment provides high availability and fault tolerance.

Subnets: In this blog, we will create three subnets for Multi-AZ deployment.

Windows Domain Controllers: domain controllers should be placed in two AZs to provide highly available, low-latency access to Active Directory Domain Services (AD DS) in the AWS Cloud. Windows Domain Controllers (DCs) should not be internet-facing servers and so must be placed in private subnets. DC1 will be placed in AZ1 and DC2 in AZ2.

Solution Architecture

Solution Requirements:

  1. Create Multi-AZ Amazon FSx File Windows File Share
  2. Enable custom DNS for FSx
  3. Reserve additional IP addresses for the Windows cluster and the SAP role cluster
  4. Create the Windows cluster
  5. Configure Cluster Time To Live and Register All Providers IP parameters
  6. Install SAP ABAP SAP Central Services
  7. Install SAP Enqueue replication server
  8. Install SAP Primary and Additional Application Servers
  9. Disable SAP internal cache for host names and services names

How to create an FSx file system:

Open the Amazon FSx console at https://console.aws.amazon.com/fsx/.

  1.  On the dashboard, choose Create file system to start the file system creation wizard.
  2. On the Select file system type page, choose Amazon FSx for Windows File Server, and then choose Next. The Create file system page appears.
  3. In the File system details section, provide a name for your file system.
  4. For Deployment type, choose Multi-AZ. A Multi-AZ file system is fault tolerant and remains available if an Availability Zone becomes unavailable. File System Details
  5. In the Network & security section, choose the Amazon VPC that you want to associate with your file system. Choose the same Amazon VPC that you chose for your Microsoft Active Directory and your Amazon EC2 instances.
  6. If you have a Multi-AZ deployment, choose a Preferred subnet value for the primary file server and a Standby subnet value for the standby file server. A Multi-AZ deployment has a primary and a standby file server, each in its own Availability Zone and subnet.Network and Security
  7. For Windows authentication, you have either AWS Managed Microsoft Active Directory or Self-managed Microsoft Active Directory. Select Self-managed Microsoft Active Directory.
  8. For Encryption, keep the default Encryption key setting of aws/fsx (default) or choose a custom KMS key.
  9. For Access (optional), you can enable access to Amazon FSx from DNS names other than the default DNS name that Amazon FSx creates. See the create a custom DNS section. DNS Alias Optional Name
  10. Review the file system configuration shown on the Create file system page. Choose Create file system. Then wait for the File System creation.
  11. After the Amazon FSx file system is created, you can view it on the Amazon FSx console dashboard. The Status column shows when the Amazon FSx Windows file share is ready to be used.

FSx File System Created
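
The same Multi-AZ file system can also be created programmatically. Below is a minimal boto3 sketch with placeholder subnet, security group, and directory IDs; a self-managed Active Directory (as chosen in step 7) would use the SelfManagedActiveDirectoryConfiguration block instead of ActiveDirectoryId, and the capacity and throughput values are assumptions to adjust to your sapmnt sizing.

import boto3

fsx = boto3.client('fsx')

response = fsx.create_file_system(
    FileSystemType='WINDOWS',
    StorageCapacity=100,                                  # GiB, placeholder sizing
    StorageType='SSD',
    SubnetIds=['subnet-aaaa1111', 'subnet-bbbb2222'],     # placeholder subnets in two AZs
    SecurityGroupIds=['sg-0123456789abcdef0'],            # placeholder security group
    WindowsConfiguration={
        'ActiveDirectoryId': 'd-1234567890',              # placeholder AWS Managed Microsoft AD
        'DeploymentType': 'MULTI_AZ_1',
        'PreferredSubnetId': 'subnet-aaaa1111',
        'ThroughputCapacity': 32,                         # MB/s
        'Aliases': ['sapfsx.corp.example.com']            # custom DNS alias with a short hostname
    },
    Tags=[{'Key': 'Name', 'Value': 'sap-ascs-share'}]
)
print(response['FileSystem']['FileSystemId'])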

Create a custom DNS name for FSx Windows File Share

SAP only supports hostnames of up to 13 characters. Physical and virtual hostnames longer than 13 characters are not supported, and Software Provisioning Manager will not let you proceed with the SAP installation. Refer to SAP Note 2718300 – Physical and Virtual hostname length limitations for more information.

When an FSx file system is created, its DNS name contains 15 characters by default.

FSx Standard DNS Name created

The same rule applies to SAPGLOBALHOST, so an Amazon FSx custom DNS name is required to overcome this limitation. During the SAP system installation, SAPGLOBALHOST is requested and you have to provide a custom DNS name of no more than 13 characters.

SAP System Cluster Parameters - File Share Hostname

Therefore, you must create a custom DNS name in your DNS manager. For this blog, the Microsoft Domain Controller DNS Manager is used, and the custom DNS name was created on the Domain Controller.

DNS Manager - DNS name created

SAP High-Availability System Installation

Make sure the following overall requirements are completed before proceeding with First and Additional Cluster Nodes installation.

  • Add secondary private IPs for each Microsoft Windows cluster node (except for the Windows file share witness). In addition to the private IPs already attached to the instances, 4 additional IP addresses are required:
    • 2 for Microsoft Windows Cluster
    • 2 for SAP Role

IP Addresses - Cluster Reservation List

First Cluster Node Installation

Before starting the SAP installation, you have to create a DNS type A record for the SAP virtual instance host on the DNS server.

After completing the previous step, proceed with the SAP first cluster node installation. Log on as a domain admin user on the first Windows cluster instance and run the SAP installation through sapinst.

You will perform a regular SAP Windows Cluster installation. For more information regarding SAP installation on Microsoft Windows Failover Cluster, please refer to official SAP documentation here.

During the step Cluster Share Configuration at SWPM you will have to choose the option File Share Cluster.

Cluster Share Configuration

At SAP System Cluster Parameters screen provide information for the following fields:

  • SAP System ID (SAPSID)
  • Network Name (SAP Virtual Instance Host) – Virtual Hostname created at your DNS Manager
  • File Share Hostname – the name of custom DNS created at your DNS Manager

SAP System Cluster Parameters

Then provide the remaining regular SWPM information required. Review the information provided and let SWPM run the installation process for the first cluster node. Wait until the process completes successfully.

Additional Cluster Node Installation

There is no special requirement or adjustment in order to proceed with the Additional Cluster Node installation.
However, before proceeding with SAP Additional Cluster node installation, it is recommended to adjust some Microsoft Windows Cluster Parameters:

  • HostRecordTTL from 1200 to 15 seconds
  • RegisterAllProvidersIP to 0

After completing the cluster parameter configuration, you can proceed with the SAP additional cluster node installation. Afterwards, you can validate it in Windows Failover Cluster Manager, where you will find the resources created by the installation.

Also, in the SAP Management Console, you can monitor and visualize the SAP process created by the installation.

SAP MMC

Primary Application Server and Additional Application Server Installation

The Primary Application Server and Additional Application Server installation process is not covered in this blog. For more information regarding SAP installation on Windows, please refer to official SAP documentation here.

After installing the PAS and AAS, it is advisable to deactivate the internal cache for host names and service names. Please refer to SAP Note 1425520 – Disable NI cache for host and service names.

Conclusion:

In this blog, we have shown how to configure an Amazon FSx Windows file share and integrate it with a highly available Windows cluster for the ABAP SAP Central Services and Enqueue Replication Server.

Let us know if you have any comments or questions—we value your feedback.

Securing SAP Fiori with Multi Factor Authentication


Feed: AWS for SAP.
Author: Ferry Mulyadi.

Introduction

Cloud Security is job zero at AWS. We have a Shared Responsibility Model with the customer; AWS manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. The customer assumes responsibility and management of the guest operating system (including updates and security patches), other associated application software as well as the configuration of the AWS provided security group firewall. Customers should carefully consider the services they choose as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. We provide a wide variety of best practices documents, encryption tools, and other guidance our customers can leverage to deliver application-level security measures.

With this blog, we will provide a how-to guide for SAP customers on AWS to implement multi-factor authentication (MFA) for SAP Fiori (the user interface of SAP S/4HANA system). When you implement MFA, it adds an extra layer of protection on top of your user name and password. When the users sign in to SAP Fiori, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their MFA device (the second factor—what they have). This will reduce security risks such as brute force attacks when the user credentials are compromised.

Because many SAP customers use Microsoft Active Directory, we will use AWS Managed Microsoft AD as the directory service in the solution architecture, along with AWS Single Sign-On (AWS SSO) to manage the MFA devices. We will also describe the step-by-step implementation details.

Solution Overview

First Diagram of Authentication Flow

The first diagram above describes the authentication flow that occurs when a user accesses SAP Fiori from a browser using the SAML2 mechanism, which interacts with AWS SSO and AWS Managed Microsoft AD.

Second Architecture Diagram of SAP Fiori integration with AWS SSO and AWS Managed Microsoft AD

As per the second diagram above, you configure AWS SSO to integrate with SAP Fiori using the SAML2 mechanism, with AWS Managed Microsoft AD as the directory service for user authentication. The AD Management Server is used to administer Active Directory with the ADUC (Active Directory Users and Computers) tools. The Application Load Balancer (ALB), along with an SAP Web Dispatcher in a public subnet, acts as a reverse proxy to the embedded SAP Fiori in the SAP S/4HANA servers in a private subnet. This configuration allows users to access SAP Fiori over an internet connection.

AWS SSO is a cloud service that allows you to grant your users access to AWS resources, such as SAP Fiori, across multiple AWS accounts. AWS Managed Microsoft AD enables you to use a managed Microsoft Active Directory on the AWS cloud. You create and maintain your users and groups in the AWS Managed Microsoft AD, as well as in SAP Fiori.

As an alternative scenario, where you want to reuse objects such as users and groups from your existing AD domains for authentication, you can create an AD trust relationship between an existing Active Directory domain and AWS Managed Microsoft AD. In another scenario, you can extend an existing Active Directory domain to AWS by creating one or more secondary Active Directory domain controllers on Amazon EC2 instances, which would replace the use of AWS Managed Microsoft AD.

Prerequisite

  1. Implement configuration of AWS SSO and SAP Fiori as documented in the AWS for SAP blog by Patrick Leung or you can follow the AWS SSO wizard when defining the application as “SAP Fiori ABAP” as per this documentation.
  2. On top of the above, please enable the SAML2 configuration below using SICF transaction code in SAP Fiori.
  3. In order to improve the logon and logoff user experience, I would like to recommend the following changes:
      • Apply SAP Note 2673366 (an SAP S-user ID is required for access) in SAP Fiori. This avoids users having to manually choose which identity provider to use when logging on.
      • In SAP Fiori, you can set the Single Logout Endpoints as HTTP Redirect to https://<SSOStartPage>/start#/signout. This allows users to automatically log off from SAP Fiori without being redirected to the AWS SSO start page.

    Single Logout Endpoints ScreenShot

  4. You can deploy AWS Managed Microsoft AD by following the procedure described in this documentation.AWS Managed Microsoft AD ScreenShot
  5. You need to make sure that the AD Management Server is in private subnet, and joined to AD created above, and having the relevant ADUC (Active Directory Users and Computers) Tools. You can follow this procedure to achieve this.

Solutions Implementation

  1. In AWS SSO, you can go to “Settings”, then you change identity source from AWS SSO to “AWS Managed Microsoft AD” that was deployed in the prerequisite step 4.AWS SSO Settings ScreenShot
  2. Then you set the policy “If user does not have a registered MFA device” to “Require them to register an MFA device at sign in”. This will ensure that every user is MFA-enabled.AWS SSO MFA Settings ScreenShot
  3. In AWS SSO, under Applications, you can then assign a specific AD user group (example “Domain Users”) to the “SAP Fiori ABAP” application, so whenever you create a user and assign the user to this group, it is automatically granted access to the SAP Fiori.AWS SSO User Group ScreenShot

Solution Testing

  1. In AWS Microsoft AD, you create and manage user and group via the Windows RDP, you can go to “Server Manager” and then go to “Active Directory Users and Computers”.Microsoft AD User and Computer Tool ScreenShot
  2. You can browse to the domain corp -> users. Please take note of the user’s email.ADUC browse to corp then users ScreenShot
  3. The email address is the attribute that we will use to map the users in SAP Fiori to the users in Active Directory. On top of this, the email address is to be reflected in user principal name (UPN) attribute as recommended by Microsoft as per this documentation. If this is not the case in your current AD, you should align them accordingly. Below is the screenshot of the AWS SSO configuration that drives this mapping. If for some reason you cannot use this mapping, you may review the alternative mapping by referring to the user attribute mapping documentation.UserPrincipalName ScreenShotAttribute Mappings ScreenShot
  4. In SAP Fiori, you create the user using SU01 transaction code with the same email address matching the AD user principal name (UPN) attribute value as per above. In productive deployment, you may want to replicate the SAP Fiori User IDs from Managed Microsoft AD automatically by setting up a scheduled job using LDAP* transaction codes.SU01 ScreenShot
  5. Then test by browsing to the SAP Fiori Launchpad with the following URL https://<sapfiori.example.com>/sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html. As per screenshot below, when a user try to access SAP Fiori Launchpad for the first time, the user will be prompted to register MFA device. In this example, the user select “authenticator app” (you can see the list of supported authenticator apps in this documentation). After the MFA registration is completed, the user will be redirected to the SAP Fiori Launchpad.Solution Testing ScreenShot
  6. Once all the activities are finished, you can logoff from SAP Fiori Launchpad, which will redirect you to the sign-in screen.

Managing Replacement or Lost MFA Device 

When an MFA device is no longer valid, due to situations such as replacement or loss, you can delete it by navigating to AWS SSO -> Users, selecting the username -> MFA devices, choosing the device, and then selecting Delete. Managing Replacement or Lost MFA Device ScreenShot

Conclusion

We have discussed step-by-step how to implement multi-factor authentication for SAP Fiori to improve your security posture. The use of AWS SSO simplifies MFA registration by supporting multiple options such as authenticator apps, security keys, and built-in authenticators.

AWS Managed Microsoft AD enables you to run Active Directory on AWS with ease, and if you wish to use your existing Active Directory domain for authentication, you can either establish an AD trust or extend the domain to AWS.

You can find out more about SAP on AWS, AWS SSO, AWS Managed Microsoft AD  from the AWS product documentation.

 

 

Start/Stop SAP systems with Slack using AWS Chatbot


Feed: AWS for SAP.
Author: Ajay Kande.

Spending hours and hours starting and stopping SAP systems for maintenance windows, profile parameter changes, and other required activities? AWS Professional Services’ SAP Start/Stop Automation automates the start and stop of your SAP Systems with less human intervention and increased reliability, consistency, and control.

Let’s assume there are hundreds of SAP systems spread across multiple accounts to start and stop. If we also include the EC2 instance restart, this planned maintenance activity becomes more time-consuming. To address this, AWS Professional Services’ SAP Start/Stop Automation solution identifies EC2 instances using tags and starts/stops the SAP components installed (ASCS, SCS, ERS, APP, DB, DAA) in a sequenced manner including the underlying EC2 instances. This automation can also be enhanced easily for OS patching, AWS CLI updates, SAP kernel updates, and any maintenance that requires an interruption of SAP service.

To make this more convenient for SAP Basis administrators, we have now integrated this automation with Slack using AWS Chatbot. With this new feature, SAP Basis administrators can stop/start SAP applications from Slack. In this blog, we are going to walk you through the configuration steps to set up AWS Chatbot in a Slack channel and show how to start/stop SAP applications using the bot.

Architecture

The solution described here uses AWS Chatbot, AWS Lambda, AWS Systems Manager, Amazon CloudWatch, and Slack incoming webhooks. To perform the operations, administrative users do not need to access the AWS Console or log in at the operating system level. They can connect to their Slack channel and simply type the commands to execute. Administrators invoke a Lambda function from the Slack channel. This function triggers a Systems Manager document that executes the tasks and writes the execution results to Amazon CloudWatch Log Groups. As soon as these log groups are updated with new results, another Lambda function is triggered, which relays the results back to the Slack channel.

As more and more features are added in AWS Chatbot, this architecture can be easily redesigned and adapted to include those new functionalities.

Architecture diagram describing start/stop SSM document integrated with Slack using AWS Chatbot

The sequence of steps performed

  1. The user sends a message to AWS Chatbot app on the Slack channel to invoke a Lambda function.
  2. AWS Chatbot app on the Slack channel relays this request to AWS Chatbot in the respective AWS Account.
  3. AWS Chatbot invokes the Lambda function.
  4. Lambda function triggers Systems Manager document.
  5. Based on the inputs provided, the Systems Manager document performs the required operation on the set of SAP systems.
  6. The results of this operation are written in the AWS CloudWatch Log Group.
  7. As soon as AWS CloudWatch Log Groups are updated, another Lambda function is triggered.
  8. Lambda function relays the results to the Slack channel.
  9. The user reads the latest execution status on the Slack channel.

Prerequisites

Before deploying this solution, make sure you have –

  • SSM document for start/stop SAP systems.
  • A Slack channel dedicated to AWS Chatbot. This channel will be used to send AWS API commands to your enterprise account hosting SAP workloads. Please ensure that a private channel is created with access limited to the responsible individuals, e.g. systems administrators, the cloud operations team, etc.
  • Access to add Integrations to this Slack channel. In certain cases, your Workspace Administrator may have disabled certain apps. In such a situation, you will have to contact them and get the App authorized for use.
  • Create an incoming webhook for your Slack channel. This webhook will be used by the Lambda function to report execution results back to the Slack channel.
  • You must have access to these services: AWS Chatbot – to configure a chat client, AWS Lambda – to create a Lambda function, AWS Systems Manager – to create an automation document, and Amazon CloudWatch – to create and manage log groups.
  • You must have IAM Roles ready, to assign to the 3 services mentioned in the previous step. When creating the IAM policies, make sure to grant only the permissions required to perform a specific task. You may refer to IAM security best practices for this.

Walkthrough

In this post, we will walk you through the configuration steps to setup AWS Chatbot in a Slack channel and show you how to invoke the Lambda function to start/stop the SAP system.

  1. Configure AWS Chatbot in a Slack channel
  2. Add an incoming webhook to the Slack channel
  3. Create a Lambda function to invoke the Systems Manager SAP Start/Stop automation document.
  4. Executing slack commands to invoke Lambda function

1. Configure AWS Chatbot in a Slack channel

In the AWS Chatbot console’s home page, choose Slack in the Chat client dropdown and choose Configure client.

AWS Chatbot service screen to configure Slack client

The setup wizard redirects you to the Slack OAuth 2.0 page. Select the Slack workspace to configure and choose “Allow”.

AWS Chatbot service screen requesting permission to access Slack workspace

Slack redirects you from here to the Configure Slack Channel page. Select the channel in which you want to run commands. You can either select a public channel from the dropdown list or paste the URL or ID of a private channel.

For private Slack channels, find the URL of the channel by opening the context (right-click) menu on the channel name in the left sidebar in Slack, and choosing the Copy link

After you choose the Slack channel, under Permissions, choose to create an IAM role using a template. Enter a role name in the Role name textbox. In the Policy templates dropdown, choose Read-only command permissions, Lambda-invoke command permissions, and AWS Support command permissions. AWS Chatbot will create an IAM role that it will assume to run commands from the selected Slack channel. You can see the permissions granted to AWS Chatbot or modify them in the IAM console. Learn more about permissions in the AWS Chatbot documentation.

AWS Chatbot service screen providing IAM role details

After you choose Configure, the configuration completes.

2. Add an incoming webhook to the Slack channel

Before adding an incoming webhook to the Slack channel, invite AWS Chatbot to the channel by typing /invite @aws.
Type @aws help to get help on using AWS Chatbot.

Executing commands on Slack Channel and interacting with AWS Chatbot

Now go to https://<yourslackworkspaceurl>/home and add applications in the Recently Added Applications section.

Slack home page to add applications

Search for Incoming Webhooks → Select Incoming Webhooks and Add to Slack

Slack home page to add Incoming Webhooks

Select your channel name from the dropdown and choose Add Incoming Webhooks integration. You will now see the webhook URL. You need this URL, as it is used by the Lambda function or Systems Manager documents to send a response back to the Slack channel.

Incoming webook url for Slack channel

3. Create a Lambda function to invoke Systems Manager automation document

The input parameters for the start/stop SSM document need to be created as environment variables for the Lambda function, or passed as values when invoking the Lambda function through Slack. Create the Lambda function with Python as the runtime, with a handler like the one below:

import boto3

ssm = boto3.client('ssm')

def lambda_handler(event, context):
    # Name of the SSM automation document created as a prerequisite.
    ssm_document_name = 'start-stop-ssm-document'

    # The operation (Start/Stop) and the SAP SID are passed in the payload of the Slack command.
    response = ssm.start_automation_execution(
        DocumentName=ssm_document_name,
        DocumentVersion='$DEFAULT',
        Parameters={
            "Operation": [event['L_Operation']],
            "SID": [event['L_sid']],
            "SIDTagKey": ["sid"],
            "RoleTagKey": ["Role"]
        }
    )
    return response['AutomationExecutionId']

Optionally you can add triggers to the Lambda function as shown below so that the status of start/stop is sent back to the Slack channel.
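
As a minimal sketch, the relaying Lambda function could look like the example below. It assumes a CloudWatch Logs subscription on the log group written by the SSM document, and a SLACK_WEBHOOK_URL environment variable holding the incoming webhook created in step 2.

import base64
import gzip
import json
import os

import urllib3

http = urllib3.PoolManager()
WEBHOOK_URL = os.environ['SLACK_WEBHOOK_URL']   # incoming webhook created in step 2

def lambda_handler(event, context):
    # CloudWatch Logs delivers subscription data base64-encoded and gzip-compressed.
    payload = json.loads(gzip.decompress(base64.b64decode(event['awslogs']['data'])))

    for log_event in payload['logEvents']:
        message = {'text': f"SAP start/stop update: {log_event['message']}"}
        http.request(
            'POST',
            WEBHOOK_URL,
            body=json.dumps(message).encode('utf-8'),
            headers={'Content-Type': 'application/json'}
        )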

AWS Lambda service screen showing how to add trigger

4. Executing slack commands to invoke Lambda function

In the Slack channel, users can execute AWS API commands by directing a message to the AWS bot. As an example to invoke a Lambda function, we will run a command as below:

@aws lambda invoke --function-name MyLambdaFunction --invocation-type Event --payload "[JSON string here]"

In the screenshot below, we are invoking Lambda function sap-start-stop with Start operation for SAP System with SID AK1. You can also see the response once SAP is started successfully.

Invoking Lambda function with Start operation for SAP System with SID AK1 using Slack

In the screenshot below, we are invoking the Lambda function sap-start-stop with the Stop operation for the SAP system with SID AK1. You can also see the response once SAP is stopped successfully.

Invoking Lambda function with Stop operation for SAP System with SID AK1 using Slack

Conclusion

Running AWS commands from Slack using AWS Chatbot expands the toolkit your team uses to respond to operational events and interact with AWS.

In this post, we walked you through the configuration steps to set up AWS Chatbot in a Slack channel and showed how to start/stop SAP applications using the bot.

AWS users can use a similar approach to integrate SSM documents and Lambda functions with Slack using AWS Chatbot. If you are looking for expert guidance and project support for this integration or another SAP project, the AWS Professional Services Global SAP Specialty Practice helps SAP customers realize their desired business outcomes on AWS. If you'd like to learn more, please contact us here.

Improving SAP Fiori Performance with Amazon CloudFront and AWS Global Accelerator Part 2: How-to Guide


Feed: AWS for SAP.
Author: Ferry Mulyadi.

In a previous blog, we concluded that Amazon CloudFront is recommended to accelerate the SAP Fiori Launchpad use case, while AWS Global Accelerator is recommended to accelerate the external system integration scenario via OData API calls, such as data extraction for analytics and reporting. We observed an improvement in the time to first load of the SAP Fiori Launchpad when accelerated through CloudFront, and an improvement in OData API call times through Global Accelerator.

This post discusses in detail how to implement each of these solutions, including the key parameters to pay attention to for a successful deployment of CloudFront and Global Accelerator for SAP Fiori.

Solution Overview

Solution Overview Fiori with CloudFront and Global Accelerator

The demo solution architecture above describes a simple deployment of SAP S/4HANA 2020 with embedded Fiori in the AWS us-east-1 (Northern Virginia) region that can be accessed over the internet, while the users are in Singapore.

  • SAP Fiori launchpad will be accelerated by Amazon CloudFront.
  • In another scenario, we will simulate the acceleration of the SAP OData API calls with AWS Global Accelerator.
  • The Application Load Balancer (ALB) will act as a reverse proxy to SAP S/4HANA 2020. Please note that in a productive deployment, to improve the security posture, we recommend you install SAP S/4HANA with embedded Fiori in a private subnet and set up an SAP Web Dispatcher in a public subnet.
  • AWS Certificate Manager will be used to manage the wildcard SSL certificate for CloudFront and Application Load balancer.
  • Amazon Route 53 will manage the DNS entries required to support the solutions.

Pre-Requisites

You will need to ensure the following are configured before the solution implementation step:

  • Windows RDP (Remote Desktop Protocol) is installed. You will use this to administer and maintain various configurations of SAP S/4HANA and SAP Fiori.
  • SAP S/4HANA 2020 is installed. You can use AWS Launch Wizard to do this.
  • After the vanilla SAP S/4HANA installation is done, please follow these steps to update the system using the SAP Maintenance Planner tool so that the embedded Fiori component is enabled. References: SAP S/4HANA 2020 Maintenance Planner, SAP S/4HANA 2020 Installation, SAP S/4HANA 2020 Best Practice Activation, and SAP S/4HANA 2020 Rapid Activation for Fiori.
  • You must have a valid internet domain; you can procure this from Amazon Route 53 or another DNS provider. It will be used to maintain DNS entry records (e.g., an alias in Route 53 or a CNAME in another DNS) for the ALB, CloudFront, and Global Accelerator.
  • For ease of deployment, I use a wildcard SSL certificate signed by a public certificate authority (example: Verisign). You can import this into AWS Certificate Manager and the SAP Fiori back end (example: *.example.com). Public SSL certificates generated by AWS Certificate Manager (ACM) will not work with SAP, because SAP requires a signed SSL certificate to be imported into the SAP Fiori system using the Certificate Signing Request (CSR) method.

Solution Implementation

1. Configure Application Load Balancer

You can follow this documentation to create Application Load Balancer.

1.1 Create Target Group

Input parameters and considerations:

  • Protocol and port: In this example, we use HTTPS port 8443.
  • Health Check Path: /sap/public/ping. Please ensure this path is enabled via the SICF transaction code in SAP Fiori.
  • Stickiness: As SAP is a stateful application, the ALB must always route a given user session to the same SAP instance, so stickiness must be enabled.
  • Stickiness duration: You can set 1 day or 8 hours of stickiness to cover user activities for the day.

  Target Group

1.2 Create Load Balancer

Input parameters and considerations:

  • DNS Name: Let's name this "sap-alb", which later in this document is defined as an alias record in Route 53 or a CNAME in another DNS.
  • Scheme: Define this as "internet-facing". This blog series presumes your end users will access Fiori over the internet, which also enables the use of CloudFront or Global Accelerator to accelerate Fiori or OData-based network connections.
  • Security Group: Port 443 of the load balancer security group (you can name this security group "sgLBSAP", for example) must be open to the internet. Port 8443 of the SAP Fiori security group (for example, "sgSAPPAS") is opened from the ALB security group ("sgLBSAP") to SAP Fiori.
  • Listener ID: HTTPS: 443.
  • SSL Certificate: Point this to AWS Certificate Manager, which holds the wildcard SSL certificate.
  • Rules: Forward to the "tgSAP" target group, which contains the SAP Fiori system.

Load Balancer Configuration 1 Load Balancer Configuration 2 Load Balancer Configuration 3

2. Create CloudFront Distribution

You can follow this documentation to create CloudFront Distribution.

2.1 General Settings 

Input parameters and considerations:

  • Price Class: CloudFront has edge locations all over the world. The cost for each edge location varies and, as a result, the price charged varies depending on the edge location from which CloudFront serves the requests. You can align this to your users' locations and performance expectations.
  • Alternate Domain Names (CNAMEs): Set this to the target website name. It must be in line with the alias or CNAME record and the SSL certificate, so that when users browse to SAP Fiori they connect through the CloudFront endpoint (example: sap.example.com).
  • SSL Certificate: Point this to the wildcard SSL certificate in AWS Certificate Manager. The Common Name must correspond to the domain name above (example: CN=*.example.com).
  • Domain Name: Take note of this when maintaining the target "sap" alias record in Route 53 or the CNAME record in another DNS.
  • Security Policy: At the time of writing, TLSv1.2_2019 is the latest recommended security policy supported by both CloudFront and SAP.
  • Supported HTTP Versions: Ensure that HTTP/2 is enabled, as it provides many performance improvements compared to HTTP/1.1 and 1.0 by reducing latency, minimizing protocol overhead via compression, and supporting request prioritization.
  • Log Bucket & Log Prefix: You will need these to troubleshoot, analyse, and monitor the solution.

CloudFront General Setttings

2.2 Origin Settings 

Input parameters and considerations:

  • Origin Domain Name: Point this to the ALB. Please ensure that it is aligned with the SSL wildcard domain (example: sap-alb.example.com).
  • Minimum Origin SSL Protocol: Specify TLSv1.2; at the time of writing, this is the highest version supported by SAP.
  • Origin Protocol Policy: Use HTTPS Only. This is the protocol used between CloudFront and the ALB to implement end-to-end encryption.

CloudFront Origin Settings 

2.3 Behaviour

Input parameters and considerations:

  • Path Pattern: Set this as per the entries below, with *FioriLaunchpad.html at precedence zero and Default (*) as the fallback.
  • Viewer Protocol Policy: Set this to "Redirect HTTP to HTTPS" for a better user experience (most browsers still default to HTTP when HSTS is not enabled) and a better security posture.
  • Non-cacheable objects: Set *FioriLaunchpad.html to the cache policy Managed-CachingDisabled and the origin request policy Managed-AllViewer. The GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE operations must be allowed. This is required to ensure all session cookies, query strings, and HTTP headers are passed to the back-end SAP Fiori for proper handling.
  • Cacheable objects: The cacheable objects (*.html, *.js, *.css, *.jpg, *.png, *.ttf, *.ico) must use the cache policy Managed-CachingOptimized and the origin request policy Managed-AllViewer. The GET, HEAD operations must be allowed.
  • Default (*): As a fallback, objects that do not match any of the patterns above must use the cache policy Managed-CachingDisabled and the origin request policy Managed-AllViewer. The GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE operations must be allowed.

  CloudFront Behaviour 1 CloudFront Behaviour 2

3. Create Global Accelerator

You can follow this documentation to create Global Accelerator.

Input Parameters

Considerations

Accelerator name

You set this value as “gaSAP” for example

Accelerator type

You select “Standard” here

Ports

443, default for HTTPS

Client Affinity

Client affinity is set to “Source IP” as SAP Fiori is stateful application

Region

This is the Region where the Application Load Balancer and SAP Fiori are located.

Endpoint type

Set the endpoint type to Application Load Balancer and select the ALB by its ARN.

Listener port

TCP:443 (default HTTPS Port of Application Load Balancer)

Global Accelerator Configuration 1 Global Accelerator Configuration 2 Global Accelerator Configuration 3 Global Accelerator Configuration 4
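
If you prefer to script this step, the following Python (boto3) sketch shows how the accelerator, listener, and endpoint group described above could be created. It is a minimal sketch rather than the exact procedure from this post: the ALB ARN and Region are placeholders, and note that the Global Accelerator control-plane API is served from us-west-2 regardless of where your ALB runs.

    import boto3

    # Placeholder: the ARN of your existing Application Load Balancer.
    ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/lbsap/xxxxxxxxxx"

    # The Global Accelerator API is called in us-west-2.
    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    # 1. Create the standard accelerator
    accelerator = ga.create_accelerator(
        Name="gaSAP",
        IpAddressType="IPV4",
        Enabled=True,
    )["Accelerator"]

    # 2. Add a TCP/443 listener with source-IP client affinity (SAP Fiori is stateful)
    listener = ga.create_listener(
        AcceleratorArn=accelerator["AcceleratorArn"],
        Protocol="TCP",
        PortRanges=[{"FromPort": 443, "ToPort": 443}],
        ClientAffinity="SOURCE_IP",
    )["Listener"]

    # 3. Register the ALB as the endpoint in the Region where SAP Fiori runs
    ga.create_endpoint_group(
        ListenerArn=listener["ListenerArn"],
        EndpointGroupRegion="us-east-1",
        EndpointConfigurations=[{"EndpointId": ALB_ARN, "Weight": 128}],
    )

    print("Accelerator DNS name:", accelerator["DnsName"])

The DNS name printed at the end is the value you maintain for the “sap-ga” record in the next step.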

4. Maintain alias records in Route 53 or CNAME records in another DNS server and perform tests

Assuming you have registered the domain “example.com” and created a hosted zone for “example.com” in Route 53, maintain the following alias records in Route 53 (or CNAME records in another DNS server) for each of the components created earlier.

Record Name | Record Type | Value
sap | A (Route53) or CNAME (other) | xxxxxxxxxx.cloudfront.net
sap-alb | A (Route53) or CNAME (other) | lbsap-xxxxxxxxxx.us-east-1.elb.amazonaws.com
sap-ga | A (Route53) or CNAME (other) | xxxxxxxxxx.awsglobalaccelerator.com
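
As an alternative to maintaining these records manually, the following Python (boto3) sketch creates the three records in Route 53. It is illustrative only: the hosted zone ID and target values are placeholders, and it uses simple CNAME records instead of the Route 53 alias records mentioned above, which you may prefer in practice.

    import boto3

    route53 = boto3.client("route53")

    # Placeholders: your hosted zone ID and the DNS names returned by each service.
    HOSTED_ZONE_ID = "Z0123456789EXAMPLE"
    RECORDS = {
        "sap.example.com": "xxxxxxxxxx.cloudfront.net",                          # CloudFront distribution
        "sap-alb.example.com": "lbsap-xxxxxxxxxx.us-east-1.elb.amazonaws.com",   # Application Load Balancer
        "sap-ga.example.com": "xxxxxxxxxx.awsglobalaccelerator.com",             # Global Accelerator
    }

    changes = [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": target}],
            },
        }
        for name, target in RECORDS.items()
    ]

    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Comment": "SAP Fiori entry points", "Changes": changes},
    )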

5. Once DNS has propagated, you can test using each of the URLs below.

Component | URL Address for testing
Application Load Balancer | https://sap-alb.example.com/<urlpath>
CloudFront | https://sap.example.com/<urlpath>
Global Accelerator | https://sap-ga.example.com/<urlpath>

 Legend: <urlpath> = sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html

 Example: https://sap.example.com/sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html
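
Beyond opening the URLs in a browser, you can confirm from the command line that requests are actually flowing through CloudFront. A minimal Python sketch is shown below, assuming the hostnames above already resolve; the x-cache response header is added by CloudFront, so it should only appear on the sap.example.com URL.

    import urllib.error
    import urllib.request

    URLPATH = "/sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html"

    for host in ("sap-alb.example.com", "sap.example.com", "sap-ga.example.com"):
        request = urllib.request.Request(f"https://{host}{URLPATH}", method="HEAD")
        try:
            response = urllib.request.urlopen(request)
            status, headers = response.status, response.headers
        except urllib.error.HTTPError as error:   # e.g. an authentication challenge
            status, headers = error.code, error.headers
        # x-cache is set by CloudFront and reports Hit/Miss information.
        print(host, status, headers.get("x-cache", "no x-cache header"))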

Conclusion

Today, we have shown you how to implement CloudFront for the SAP Fiori launchpad step by step. This helps you achieve better SAP Fiori performance, user experience, and user productivity, while ensuring that standard SAP Fiori behavior is not altered.

We have shown you how to implement Global Accelerator to accelerate SAP OData API Calls. This will be useful in system integration scenarios where accelerated traffic is required to reduce latency between systems.

You can find out more about SAP on AWS, CloudFront and Global Accelerator from the AWS product documentation.

 

Amazon SES configuration for SAP ABAP Systems


Feed: AWS for SAP.
Author: Michele Donna.

Introduction

One of the most common requirements when running an SAP system is sending outgoing email, which can originate from several different areas: monitoring and alerting (for example, SAP Solution Manager), batch processing and process chains, workflows, and so on. If you have moved your SAP system to AWS, you may want to retire mail servers still running on-premises or avoid deploying an EC2 instance dedicated to running an SMTP (Simple Mail Transfer Protocol) server.

Amazon Simple Email Service (SES) is a cost-effective, flexible, and scalable email service that enables customers to send mail from within any application. You can configure Amazon SES quickly to support several email use cases, including transactional, marketing, or mass email communications.

In this blog, we will guide you through the required steps to configure outbound mail from an SAP ABAP system using the Amazon SES service.

Prerequisites

As a first step, we will configure a “Sandbox” account within Amazon SES and verify a sender email address for initial testing. Once all the setup steps are successful, we can move this account to production, and the SES service will accept all mails coming from our SAP systems (for more details on this topic, please see the Amazon SES documentation).

Within the AWS Management Console, navigate to Amazon SES and click on Email Addresses, then press “Verify a New Email Address”. Enter your email address, click Verify This Email Address, and check your mail inbox; you should receive an automated email with a link to confirm that you are authorized to use this email address:

Email address verification box

After the verification is completed, the status will change to green under Verification Status.

Amazon SES email address verification

Once the email address verification is completed, we need to create SMTP credentials that will be used by our SAP systems. To create the credentials, click “SMTP Settings” and press the “Create My SMTP Credentials” button.

Amazon SES SMTP settings screen

Also note down the server name, as it will be required later during the SAP system configuration.

Enter a meaningful username and click the Create button at the bottom right of the page.

Amazon SES IAM username input screen

You can display and download the SMTP username and password credentials as a CSV file (bottom right of the page).

Amazon SES credentials to be used in SAP
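
Before touching the SAP configuration, it can be useful to verify the SES endpoint and the new SMTP credentials independently. The following Python sketch (standard library only) sends a test mail through the SES SMTP interface; the endpoint, credentials, and addresses are placeholders to replace with your own values, and while the account is still in the sandbox both sender and recipient must be verified.

    import smtplib
    from email.message import EmailMessage

    SMTP_HOST = "email-smtp.us-east-1.amazonaws.com"   # SES SMTP endpoint in your Region
    SMTP_PORT = 587
    SMTP_USER = "AKIAxxxxxxxxxxxxxxxx"                 # SMTP credentials from the step above
    SMTP_PASS = "your-smtp-password"

    msg = EmailMessage()
    msg["Subject"] = "Amazon SES connectivity test"
    msg["From"] = "verified-sender@example.com"        # must be verified in the SES sandbox
    msg["To"] = "verified-recipient@example.com"       # must also be verified in the sandbox
    msg.set_content("Test mail sent through Amazon SES before configuring SAP SCOT.")

    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
        smtp.starttls()                                # SES requires TLS on port 587
        smtp.login(SMTP_USER, SMTP_PASS)
        smtp.send_message(msg)
    print("Mail accepted by Amazon SES")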

SAP ABAP Outbound Email Configuration

We can now connect to our SAP ABAP system, log on to the working client, call transaction SCOT, select the SMTP node, and create a new one via the wizard.

SCOT landing page

Specify a meaningful name and provide the parameters noted during the Amazon SES setup

Press the Settings button and provide the credentials generated in the previous steps.

SCOT security settings display

Note: some older NetWeaver releases might return an error related to the password field length; in this case, some SAP Notes need to be imported to correct the issue (1724704, 2439601, 2363295, and 2372893).

Download the certificate from the Amazon SES SMTP endpoint (for example, using this command line on a Linux system: openssl s_client -starttls smtp -connect email-smtp.us-east-1.amazonaws.com:587 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p')

Note: the Amazon SES endpoints are Region-dependent, so ensure you are using the one in the same Region where your SAP systems are deployed or where your central SES services are configured.

In transaction STRUST, import the certificate into the SSL client (Standard) PSE and restart the ICM via transaction SMICM.

STRUST ssl configuration with imported Amazon certificate

Test outbound email

In transaction SO01, create a test email and press the send button

SO01 create mail screen

In transaction SOST, you can trigger a send job via menu Send Request → Start Send Process for Selection

Transaction SOST displaying email in queue

If everything is fine, you will see the status change and get a mail in your inbox

Transaction SOST displaying email sent successfully

In case of any issues, check that the correct AWS Region and SES endpoint are selected (and entered in SCOT), and that the ICM has been restarted via SMICM after importing the certificate into STRUST. You will find any communication or certificate error messages in the ICM logs (SMICM, Goto → Trace Files → Display All).

SAP transaction SMICM display trace files menu

Conclusion

In this blog, we have shown how to configure SAP ABAP systems to send outbound emails, a very common requirement from both business process and Basis operations perspectives. More information can be found in the following link.

Let us know if you have any comments or questions—we value your feedback.

Predictive Maintenance using SAP and AWS IoT to reduce operational cost


Feed: AWS for SAP.
Author: Patrick Leung.

This post was written by Kenny Rajan, Patrick Leung, Scott Francis, Will Charlton & Ganesh Suryanarayan.

The convergence of Operational Technology (OT) and Information Technology (IT) is reinventing the way companies drive manufacturing efficiency, from the shop floor at the Programmable Logic Controller (PLC) level to the Manufacturing Execution System (MES), all the way up to SAP Plant Maintenance (PM) or S/4HANA Asset Management.

With AWS IoT solutions, industrial companies can digitize processes, transform business models, and improve performance and productivity, while decreasing waste. These are some examples of how customers can apply AWS IoT solutions to improve the performance and productivity of industrial processes.

In this blog post, we show how to integrate AWS IoT solutions with SAP for predictive maintenance. SAP Plant Maintenance (PM) comprises a set of data and processes to maintain the high availability of technical systems. This helps customers move away from outdated interval or breakdown-based maintenance cycles to a planned or preventative maintenance cycle.

The following diagram illustrates the business processes that effectively plan, inspect, record, and act on information about various pieces of equipment, devices, and assets within an organization in a timely manner. More detailed information about each of the technical objects can be found in the SAP documentation.

Overview of SAP plant maintenance business processes

Example Business Process 1:

Many equipment readings are recorded in plant maintenance and customer service processes. In case of a potentially disruptive event in an assembly or sub-component, a service notification can be triggered.

SAP PM measurement detection flow

Example Business Process 2:

Alternatively, a maintenance plan can be set up to trigger a maintenance order based on thresholds. Once the device at the edge detects an anomaly, an AWS Lambda function can be triggered to create a measurement document in SAP for the assembly or sub assembly. This would in turn trigger the creation of a maintenance order.

SAP PM measurement detection flow with AWS Integration

To keep it simple, we use an IoT device simulator to mimic an equipment failure. Then, we show how to use AWS IoT services for early failure detection. For the integration back into SAP, we leverage REST-based OData (Open Data Protocol) services and AWS Lambda to create a service notification for maintenance activities. The SAP maintenance notification plays an important role in plant maintenance operations by proactively flagging an abnormal or exceptional situation. The solution consists of the following components:

Preventive Maintenance architecture for AWS IoT integration with SAP

    1. IoT Device Simulation environment: Use AWS Cloud9 or your own Integrated Development Environment (IDE) to send a simulated compressor temperature reading to AWS IoT Core.
    2. Security and Connectivity: The AWS Cloud Development Kit (CDK) deploys AWS IoT Core to register the IoT thing and generate an X.509 certificate. The X.509 certificate is installed on your simulation environment to establish connectivity to AWS IoT Core. The simulated data is then sent to AWS IoT Core using the Message Queuing Telemetry Transport (MQTT) protocol. In addition, AWS IoT provides a registry that helps you manage IoT things. In this example, we have a compressor maintained in this registry.
    3. Message payload parsing: An AWS IoT rule listens for the specific MQTT payload from the simulator, formats the data, and sends it to AWS IoT Analytics.
    4. Monitor messages in real time with AWS IoT Analytics: The AWS IoT Analytics channel receives the data from AWS IoT Core and validates that the data is within specific thresholds. Here, we set up the ID, source, and data retention period, and route the channel to the appropriate pipelines.
    5. Configure AWS IoT Analytics pipeline activities:
      1. Process data: Transform message attributes and filter entire messages. Here, you chain the activities together to process and prepare messages before storing them.
      2. Enrich data: Enrich the data from AWS IoT Core before sending it to the AWS IoT Analytics data store. This activity adds data from the AWS IoT device registry to your message.
      3. Normalize data: Remove attributes from a message to normalize the IoT payload and filter the required data at the root level.
      4. Transform data with AWS Lambda: Call Lambda functions to transform and enhance the upstream message with the product range from Amazon DynamoDB. Through this activity, AWS IoT Analytics learns the standard temperature range for a given device by calling Amazon DynamoDB, to determine whether there is a temperature fluctuation in the data sent from the IoT simulator. In this example, the device is the compressor.
      5. Retrieve SAP product data: The DynamoDB setup includes the SAP attributes, which are mapped against the device name. Using these attributes, IoT Analytics retrieves the SAP product information for further downstream processing.
    6. Data Store: The AWS IoT Analytics data store receives and stores your messages. Here, we configure a SQL query to select the most recent temperature reading over the period. Data sets are stored in Amazon Simple Storage Service (S3) and concurrently sent to AWS IoT Events to capture the temperature status.
    7. Anomaly Detection: An AWS IoT Events detector model defines the conditional (Boolean) logic that evaluates the incoming inputs to detect an anomaly event (temperature out of the normal range for more than 15 minutes). When an event is detected, it changes state and triggers additional actions in AWS IoT Events. For this demonstration, the model is configured to call SAP and create an SAP service notification using AWS Lambda.
    8. API integration to SAP via Lambda and OData: This sample package contains a Lambda layer to connect with SAP and consume OData services through REST API calls. When an anomaly is detected, the IoT Events detector model invokes an AWS Lambda function and passes the SAP equipment number and functional location to create a service notification in SAP.
    9. Create Service Notification: An HTTP POST request is initiated with the above payload to call the SAP OData service using the environment variables (entity-set name, service name, SAP host, SAP port, and SAP authentication). The HTTP POST response returns the service notification number from the SAP system (a simplified sketch of this call follows the list below).
    10. Alert Message: Lambda receives a response with the SAP service notification number. This is parsed and sent to an Amazon Simple Notification Service (SNS) topic to alert the subscribers.
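
To illustrate the OData call described in steps 8 and 9, here is a simplified Python sketch of what such a Lambda handler could look like. It is not the code from the sample repository: the service path, entity set, payload fields, and environment variable names are assumptions, and the requests library would need to be packaged with the function (for example in a Lambda layer). It does show the typical SAP Gateway pattern of fetching an x-csrf-token with a GET before issuing the POST.

    import json
    import os
    import requests

    def lambda_handler(event, context):
        # Assumed environment variables; align these with your own configuration.
        base_url = (
            f"http://{os.environ['SAP_HOST']}:{os.environ['SAP_PORT']}"
            f"/sap/opu/odata/sap/{os.environ['SAP_SERVICE_NAME']}"
        )
        entity_set = os.environ["SAP_ENTITY_SET_NAME"]          # e.g. NOTIF_CREATESet
        auth = (os.environ["SAP_USER"], os.environ["SAP_PASSWORD"])

        session = requests.Session()

        # SAP Gateway requires a CSRF token fetched via GET before a modifying request.
        head = session.get(
            f"{base_url}/{entity_set}",
            auth=auth,
            headers={"x-csrf-token": "Fetch", "Accept": "application/json"},
        )
        token = head.headers["x-csrf-token"]

        # Hypothetical payload fields for the custom notification service.
        payload = {
            "Equipment": event["equipment"],
            "FunctLoc": event["functional_location"],
            "ShortText": "Temperature anomaly detected by AWS IoT",
        }

        response = session.post(
            f"{base_url}/{entity_set}",
            auth=auth,
            headers={
                "x-csrf-token": token,
                "Content-Type": "application/json",
                "Accept": "application/json",
            },
            data=json.dumps(payload),
        )
        response.raise_for_status()          # expect HTTP 201 on success
        return {"statusCode": response.status_code, "body": response.text}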

There are two key steps in the deployment process. The first part is to configure the SAP system and the second part is the AWS deployment. The detailed steps are shown below.

SAP Configuration

In the SAP backend system, create an OData service. There are two possible ways to achieve this.

  1. Activate the standard SAP OData service EAM_NTF_CREATE.
  2. Create an SAP OData service using a customized structure for the payload.

For this deployment, we are using a customized structure and the high-level steps are shown below.

  1. Create a custom SAP ABAP structure using SAP Transaction SE11 for ZSERVICE_MESSAGE_IOT. SAP ABAP structure - ZSERVICE_MESSAGE_IOT
  2. Create a custom SAP ABAP structure using SAP Transaction SE11 for ZPM_SERVICE_NOTIF_STR. SAP ABAP Structure ZPM_SERVICE_NOTIF_STR
  3. Create an SAP gateway service in SAP Transaction SEGW using the SAP ABAP structure created in the previous step. Create an Entity Type of Complex Type
  4. Redefine the method NOTIF_CREATESET_CREATE_ENTITY in the Service Implementation section. Redefine NOTIF_CREATESET_CREATE_ENTITY
  5. Update the ABAP code to call the function modules BAPI_ALM_NOTIF_CREATE, BAPI_ALM_NOTIF_SAVE, and BAPI_TRANSACTION_COMMIT. This part of the code creates a service notification when an HTTP POST request is triggered through the SAP OData service.
    "Call the BAPI to create the Notification
    CALL FUNCTION 'BAPI_ALM_NOTIF_CREATE'
        EXPORTING
            notif_type                      = 'M2'
            notifheader                     = lwa_notif_header
        IMPORTING
            notifheader_export              = lwa_notif_export
        TABLES
            longtexts                       = lt_longtext
            return                          = lt_return
    IF lwa_notif_export IS NOT INITIAL.
         "Call BAPI to save the Notification
         CALL FUNCTION 'BAPI_ALM_NOTIF_SAVE'
            EXPORTING
                number                      = lwa_notif_export-notif_no
            IMPORTING
                notifheader                 = lwa_notif_export2
            TABLES
                return                      = lt_return
  6. Register the OData service using SAP transaction /IWFND/MAINT_SERVICE.
  7. Test the OData service in SAP transaction /IWFND/GW_CLIENT. An example payload is shown below.Example JSON for HTTP Request
  8. If the HTTP POST request is successful, the returned status code should be 201. An example screenshot is shown below.SAP Gateway HTTP Post Request

AWS Solution Deployment

AWS Cloud Development Kit (CDK) lets you define your cloud infrastructure as code in one of five supported programming languages. For AWS CDK installation steps, refer to the AWS CDK documentation.

  1. Clone the aws-iot-sap-condition-monitoring-demo repository
    git clone  https://github.com/aws-samples/aws-iot-sap-condition-monitoring-demo.git
  2. Create an SAP user for this demo and add the OData service to the menu of back-end PFCG roles. This should include the start authorizations for the OData service in the back-end system and the business authorizations for creating the service notification.
  3. Update the cdk.json file. Examples of the variables are shown below.
    Variable | Description | Example
    thing_name | Name of the IoT device | IoT_thing
    Type | Name of the device type | Compressor
    Equipment | SAP Equipment number | 299998888
    FunctLoc | SAP Functional location | SAPFunction
    temperature_min | Minimum temperature | 12
    temperature_max | Maximum temperature | 14
    sns_alert_email_topic | Temperature alarm topic | TemperatureAlarm
    alarm_emails | Email address | example@email.address
    odpEntitySetName | ODP entity set name | NOTIF_CREATESet
    odpEntityServiceName | ODP service name | ZPM_SERVICE_NOTIFICATION_SRV
    sapHostName | SAP host name | saponawshostname
    sapPort | SAP port | 8000
    sapUsername | SAP user name (CDK stores this in AWS Secrets Manager) | Johndoesap
    sapPassword | SAP user password (CDK stores this in AWS Secrets Manager) | Johdoepassword
  4. Follow the instructions in the readme section of the repository to complete the setup and run the simulator.
  5. Check the stack deployment in AWS CloudFormation. An example of successful deployment is shown below.screenshot of cloudformation stack deployed.

Simulate a malfunction of the compressor

Now that we’ve successfully deployed the solution, let’s run a test!

  1. The simulator uses the temperature_min and temperature_max variables to simulate the temperature of the compressor. These are defined in the cdk.json file as part of the CDK deployment. Refer to the simulator.py script for the code of this simulator.
  2. These values are stored as an entry in a DynamoDB table. The minimum temperature (12) and maximum temperature (14) are shown in the DynamoDB table below. Screenshot of DynamoDB table showing compressor maximum and minimum temperature
  3. In AWS IoT Analytics, select the DATA SET called cdksapbloganalyticsdataset, which was defined in the CDK stack. Then click Run now to view the actual temperature generated. An example is shown below. screenshot of how to preview the result from AWS IoT analytics
  4. You can view the readings generated by the simulator in AWS IoT Events. Go to Detector models, choose CDKSAPBlogDetectorModel, then select the tracker in the key value. An example of the temperature in degrees is shown below. screenshot of readings generated by the simulator in AWS IoT Events
  5. To simulate an anomaly, modify the temperature values in the DynamoDB table from step 2. As an example, we have changed the maximum from 14 to 13, as shown below. screenshot of how to modify the table temperature values in the DynamoDB table
  6. Check the readings generated by the simulator in AWS IoT Events; they should now reflect the temperature range set in DynamoDB.
  7. If the anomaly continues for more than 5 minutes, an alarm is triggered to create a service notification in SAP. In addition, an Amazon Simple Notification Service (SNS) notification is sent to the subscriber. An example email notification is shown below. screenshot of AWS notification message
  8. Validate the service notification in SAP. Go to SAP Transaction IW23, enter the SAP service maintenance notification number generated, and press Enter to verify the entries. Screenshot of the alert trigger in SAP system

Don’t forget to clean up your AWS account to avoid ongoing charges for resources you created in this blog. Simply follow the instructions outlined in the cleanup section in the Github repository.

Machine failures adversely impact manufacturing operational efficiency, and identifying such critical failures is a challenge in a traditional manufacturing environment. In this blog post, we have shown an example of how to leverage AWS IoT Core, AWS IoT Analytics, and SAP to identify equipment anomalies.

There are many more integration scenarios with SAP on AWS. For example, Amazon Lookout for Equipment can be used to quickly enable predictive maintenance. AWS also makes it easy for you to create your own Machine Learning (ML) models using Amazon SageMaker, such as Long Short-Term Memory (LSTM) neural network. If you have questions or would like to know about SAP on AWS innovations, please contact the SAP on AWS team or visit aws.com/sap to learn more. Start building on AWS today and have fun!

Amazon EC2 High Memory Instances now available for on-demand usage


Feed: AWS for SAP.
Author: Steven Jones.

Introduction
SAP customers continue to leverage AWS as their platform of choice and innovation. Some are in the early stages of their SAP cloud journeys and are focused on executing their migration. Others have hardened their SAP systems on AWS and are innovating around their core business processes with advanced AWS services. For example, Zalando has reinvented their data and analytics architecture to improve customer experience, Invista is using AI/ML with ECC to drive better manufacturing outcomes, and Volkswagen has integrated their S/4HANA systems with AWS IoT as part of their Digital Production Platform initiative. But even for SAP customers who are transforming with these innovative solutions at the line of business and technical use case level, the underlying infrastructure supporting their SAP systems remains a critical consideration.

That’s why today, in response to customer demand, we’re excited to announce general availability of new instance sizes within our Amazon EC2 High Memory Instances family with 6, 9, and 12TB of memory as well as support for on-demand consumption. This allows you to take advantage of hourly pricing and EC2 Savings Plans, while also giving you more options for supporting use cases with temporary infrastructure needs.

Before diving into the details of this launch, I’d like to briefly cover how we got here, why we continue to invest in infrastructure for SAP, and how AWS customers are taking advantage of these investments.

Why SAP customers count on the AWS Global Infrastructure

We have been supporting SAP customers since 2011. Over this 10+ year journey, I have watched customer adoption patterns move from early dev/test workloads to becoming the net new normal for production SAP deployments and large data center exit migrations. One of the key reasons we’re seeing this is because customers realize they need to protect themselves from disruption to mission-critical business processes, and the AWS Global Infrastructure makes it a lot easier to architect for availability and resilience than has been possible on-premises.

Take, for example, Bristol Myers Squibb. They actually built high availability and disaster recovery testing into their ECC to S/4HANA on AWS migration process. Now, they take advantage of the AWS multi-AZ design for high availability and a multi-region design concept for disaster recovery, which helps them meet their needs for uptime and system resilience. Or, for example the US Navy, who last year migrated their SAP ERP system—which supports 72,000 users and the movement of some $70B worth of goods and services—to AWS to improve availability, and scalability while helping them make more timely and informed decisions. You could also look at Zalando. While they leverage 36+ AWS services in concert with S/4HANA on AWS to support their continued growth and transformation, they still recognize the role that core infrastructure plays in their success. “AWS infrastructure and broad set of services continue to be key enablers of not just our SAP strategy, but to our broader strategy to deliver better experiences for our customers, operate more efficiently, and grow the business” said Yuriy Volesenko, former Director of Enterprise Applications & Architectures at Zalando SE. Customer feedback drives nearly everything we do at AWS, and it’s been feedback like this that has validated our continued focus on delivering the world’s best infrastructure purpose-built for SAP workloads.

The creation of High Memory Instances is one such example. After we released our Amazon EC2 X1 instance in 2016—the first cloud instance purpose-built for large in-memory databases like SAP HANA—customers pushed us to go even bigger to help them support their multi-terabyte HANA-based systems in production on AWS. In 2018, we released High Memory Instances in response, which now offer up to 24TB of memory in a single instance. As High Memory Instances have grown in popularity, customers have asked for additional flexibility including on-demand usage and savings plans.

Introducing new High Memory instances with Nitro Hypervisor

Today’s announcement of new 6, 9, and 12 TB High Memory instance sizes delivers on that request, allowing you to realize the same security, performance, and flexibility benefits of the AWS Nitro System for large SAP HANA workloads, while supporting new use cases with hourly pricing. For example, customers may want to build temporary systems for testing new features, fixing bugs, or building sandbox environments. These new instance sizes make it easier to support some of these use cases with on-demand infrastructure requirements. In addition, when customers want to continue their usage for an extended period, support for EC2 Savings Plans allows them to lower their compute costs by up to 72% compared to on-demand pricing.

Additionally, some customers have told us they don’t need all the CPU capacity offered on the 6TB instances. To help, we are also introducing a new non-hyperthreaded version of our 6TB instance with fewer vCPUs. This instance offers 6TB of memory with 224 vCPUs, still capable of strong compute performance, but at a lower price.

A brief overview of all new sizes is as follows:

Instance size | Memory (GiB) | vCPU | Network Bandwidth (Gbps) | Dedicated EBS Bandwidth (Gbps) | SAPS
u-6tb1.56xlarge | 6,144 | 224* | 100 | 38 | 380,770
u-6tb1.112xlarge | 6,144 | 448 | 100 | 38 | 475,500
u-9tb1.112xlarge | 9,216 | 448 | 100 | 38 | 475,500
u-12tb1.112xlarge | 12,288 | 448 | 100 | 38 | 475,500

* non-hyperthreaded cores
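
To illustrate what on-demand consumption means in practice, the following Python (boto3) sketch launches one of the new virtualized High Memory instances the same way you would launch any other EC2 instance. The AMI, key pair, subnet, and security group IDs are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholders: replace with your SAP HANA AMI and network details.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="u-6tb1.112xlarge",      # 6 TB virtualized High Memory instance
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",
        SubnetId="subnet-0123456789abcdef0",
        SecurityGroupIds=["sg-0123456789abcdef0"],
        EbsOptimized=True,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "sap-hana-6tb"}],
        }],
    )
    print("Launched:", response["Instances"][0]["InstanceId"])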

Like all of our instances, these new High Memory Instances will provide a consistent, cloud-native experience that is seamlessly integrated with other AWS services like Amazon EBS. They are also based on the AWS Nitro System, which I referenced before. If you are unfamiliar with Nitro, it’s the underlying platform for our latest generation of EC2 instances, which provides a number of benefits:

  • Performance: The Nitro System delivers all the compute and memory resources of the host hardware to your Amazon EC2 instances. As a result, it eliminates the performance hit that typically comes from a hypervisor layer. Additionally, our dedicated Nitro Cards offer high-speed networking, block storage, and I/O acceleration. Because of these Nitro benefits, these new instance sizes offer performance with ~1% variance when compared to our existing bare metal High Memory instances. As part of this launch, we have published several SAP benchmarks showcasing the performance of these new instance sizes. See SAP Sales and Distribution benchmark and SAP BW edition for SAP HANA benchmark for complete details. For an interesting breakdown of the SAPS delivered by our Nitro-based instances, I would encourage you to explore SAP note 1656099.
  • Security: The Nitro System continuously monitors, protects, and verifies your EC2 instance’s hardware and firmware. Virtualization resources are offloaded to dedicated hardware and software, which minimizes the attack surface. Additionally, the Nitro System is locked down, prohibiting administrative access to eliminate human error or tampering.
  • Flexibility: Nitro helps us deliver all instances as truly cloud-native offerings irrespective of size. This means you can easily move from a bare metal instance to a virtual instance in a simple stop and start manner. Additionally, as you scale, there’s no need to de-provision and re-provision your storage resources, move to a new storage type, or change architecture patterns.

Summary and Getting Started

Today’s launch of additional High Memory Instances gives SAP on AWS customers more choice in how they can meet their unique requirements for cost and performance.

I highly encourage you to explore High Memory Instances and the AWS Nitro System yourself. To get started with migrating your SAP HANA workloads to an EC2 High Memory instance, see our migration guide. If you have any questions or feedback, please reach out to your account manager or SAP specialist. We remain committed to being the best place to run SAP so keep the valuable feedback coming!

Thanks,

–Steve


Automating SAP HANA Installations on AWS with DevOps


Feed: AWS for SAP.
Author: Chris Williams.

Introduction

In our first blog, Terraform your SAP Infrastructure on AWS, we began the SAP on AWS DevOps journey by defining the SAP infrastructure as code (IaC).

With our Terraform modules, available here on GitHub and here on the Terraform Registry, we demonstrated that by defining SAP as code, infrastructure and servers can quickly be deployed using standardized patterns, updated with the latest patches and versions, and duplicated in repeatable ways to overcome some of the initial learning curve hurdles.

Another deployment option is AWS Launch Wizard. Customer teams can rapidly build SAP systems that align with AWS best practices from the AWS Console, following a guided experience designed for SAP administrators.

In the pursuit of lowering cost and increasing deployment efficiency, customers are looking to automate as many processes as they can. Customers want to further expand their capabilities by layering on the necessary software needed to run SAP on the previously defined infrastructure.

Following SAP installation logic, HANA is the next installation step we would want to automate, enabling a seamless deployment of the underlying infrastructure as well as automation of the HANA installation.

To help customers on this journey, we are open sourcing the capability to build and configure SAP HANA here. After defining your infrastructure using Terraform by following the blog above, you can layer on this new capability and automate the deployment of SAP HANA on AWS end to end, all using DevOps tools and methodologies.

Applying this crawl, walk, run approach enables customers to gain the skills they need, while the resources covered in this blog accelerate the delivery of an SAP on AWS solution using DevOps tools and methodologies.

Using Terraform to deploy your automation tools

As with any EC2 or server-based application, you are going to want to adjust and tweak a few things at the operating system level not only to prepare it for your application or database but also to go a step further and automate the actual installation of SAP HANA. This element of DevOps is called Configuration Management or what is sometimes abbreviated as CM.

To tie back to what we discussed in our first blog, IaC or Infrastructure as Code is an approach where we used Terraform as a declarative language to define our infrastructure. Once done, we need a way to control mutable elements like an operating system or say a Linux kernel parameter especially for a monolithic application like SAP that doesn’t follow the immutable infrastructure model.

For more information on this topic see this article called What is Mutable vs. Immutable Infrastructure? SAP resources fall into the “We’re going to mutate it, modify it in place, to get into this new configuration.” area of discussion.

Making these types of changes is something that IaC tools are capable of but typically only do at resource instantiation. Often they aren’t the best tools to manage resources over their lifetime. That’s where CM comes in and why we chose to deploy our CM configuration using Terraform. IaC and CM have similar goals but differ in their approach. In the end, they pair nicely when deploying SAP on AWS.

With IaC owning the infrastructure deployment and Configuration Management installing the SAP components, CM is in a better position to pick up where IaC leaves off. Let’s move onto how we use CM to prepare and install.

Configuration Management for our SAP HANA installation delivers value by allowing developers and administrators to control settings and values within resources over the lifetime of their operation. CM in this case enables us to prepare our OS, install our database, and run any other commands that may be needed. We will document how to use AWS Systems Manager as a CM tool to install SAP HANA.

Leveraging SSM to install SAP HANA

An AWS Systems Manager document defines the actions performed on your managed instances. Systems Manager includes more than 100 pre-configured documents that you can use by specifying parameters at runtime. Documents use JavaScript Object Notation (JSON) or YAML, and they include steps and parameters that you specify.

In this blog, we deliver documents that allow us to define and code the exact steps needed to prepare a HANA EC2 instance, install HANA, and run any post configuration needed.

And before you think, “Oh, another tool I have to learn”, our approach to deploying these automations doesn’t change once we move into the Configuration Management layer of automation. We automate the automations .

We package our HANA automated install AWS Systems Manager documents in Terraform so we can leverage predefined parameters used by your previous infrastructure deployment and have the AWS Systems Manager documents inherit these definitions and logic. No need to reinvent the wheel.

We continue to automate within the previous automation lowering your development effort and mitigating incorrect settings. Terraform not only takes care of your parameters for you going forward with your installs, but it also deploys these documents to your AWS account seamlessly. For instance, the SID you set with Terraform cascades to all your installations and scripts that you need to run with SSM. Set it and forget it.

Now let’s dive into how AWS Systems Manager installs HANA.

AWS Systems Manager documents are defined in Terraform as resources and are deployed to AWS Systems Manager fully customized to your solution.

This solution deploys one master AWS Systems Manager document that contains child documents with dependencies baked in making it easier to maintain and deploy while following SAP standards. These documents then follow the below high-level sequence:

  • Set parameters
    • Sets OS parameters, roles, passwords, media location, SID, etc.
  • Mount disks
    • Mount, stripe, format, and prepare `/hana/data`, `/hana/log`, `/hana/shared`, `/usr/sap`, `/backup/`, and swap.
  • Download software
    • Copy media from S3 to the HANA EC2 and prepares it for installation
  • Execute install
    • Pass all parameters to `hdblcm` and execute the HANA install
  • Finalize installation
    • Setting basepath directories, adjusting `global.ini` and `daemon.ini` settings, and adjusting `hdbnsutil`.

By taking a modular approach, users can automate the trigger of the document executions or run them via the console while still adhering to the constructs of CI/CD and the DevOps methodology. This also gives Basis admins a tangible method to run these automations in the console. If executions fail or need to be adjusted, simply re-run the failed child document after deploying the necessary fixes and continue your deployment where you left off.
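
As one illustration of automating that trigger outside the console, the sketch below uses Python (boto3) to run an AWS Systems Manager document against the HANA instance and poll for the result. The document name, parameter names, and instance ID are placeholders; in the actual solution the documents and their parameters are created for you by Terraform.

    import time
    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")

    # Placeholders: the master document name and parameters come from the Terraform deployment.
    command = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],            # the target HANA EC2 instance
        DocumentName="sap-hana-install-master",
        Parameters={"HANASID": ["HDB"], "HANAInstanceNumber": ["00"]},
        Comment="Automated SAP HANA installation",
    )["Command"]

    # Poll until the command invocation finishes.
    while True:
        invocation = ssm.get_command_invocation(
            CommandId=command["CommandId"],
            InstanceId="i-0123456789abcdef0",
        )
        if invocation["Status"] in ("Success", "Failed", "Cancelled", "TimedOut"):
            break
        time.sleep(30)

    print("HANA install document finished with status:", invocation["Status"])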

By packaging the documents in Terraform and executing the install with AWS Systems Manager, we enable IaC to control and deploy the correct documents while CM gives users a repeatable and consistent way to install. The best of both worlds.

Next steps

Applying a crawl, walk, run approach, has enabled us to gain the skills needed to not only deploy SAP infrastructure but also automate the installation of HANA.

Ready to get started? Find everything we mentioned above here in our Github repo.

Feel free to fork the repo and adapt our AWS System Manager documents for Chef, Puppet, Ansible, etc.

We would love to see all the ways you can use Configuration Management to automate your HANA installs. Maybe even go further and adopt this approach for your application servers, web dispatchers, central services, and so on!

If you are looking for expert guidance and project support as you move your SAP systems to a DevOps model, the AWS Professional Services Global SAP Specialty Practice can help. Increasingly, SAP on AWS customers—including CHS and Phillips 66—are investing in engagements with our team to accelerate their SAP transformation. If you are interested in learning more about how we may be able to help, please contact us here.

Join us at SAP SAPPHIRE NOW 2021


Feed: AWS for SAP.
Author: Brian Griffin.

AWS is excited to participate in SAP SAPPHIRE NOW as a Gold Sponsor in 2021.

The global virtual event is free to attend and will kick off on June 2nd. Please visit our virtual booth to learn why 5,000+ SAP customers choose AWS and to enter for a chance to win an Amazon Fire TV Cube.

AWS is also featured in the finance track. To learn how customers are using AWS to future-proof their businesses, please join our breakout session, where I give an inside scoop into Moderna, Zalando, and Bristol Myers Squibb’s SAP on AWS journeys. Full session details below:

Session ID: FIN613
Title: Explore how finance leaders are future proofing their businesses
Dates: Thursday, June 10
EMEA – 14:00 (UTC) | 10:00 a.m. (EDT) | 4:00 p.m. (CEST)
Americas – 18:30 (UTC) | 2:30 p.m. (EDT) | 8:30 p.m. (CEST)

Thursday June 24
Asia Pacific — 4:00 (UTC) | 9:30 AM (IST) | 12:00 PM noon (SGT)

Make sure to add the session to your catalog once you register for the event. During virtual booth times, we’ll have SAP on AWS experts on hand to chat with you. We’d love to answer any questions you have through the virtual chat feature.

We look forward to participating in SAP SAPPHIRE NOW every year and are excited to meet with you virtually at SAPPHIRE NOW in 2021! To learn more, visit our SAPPHIRE NOW landing page and sign up for my breakout session.

Passive Disaster Recovery for SAP applications using AWS Backup and AWS Backint Agent


Feed: AWS for SAP.
Author: Milind Pathak.

Introduction

A Disaster Recovery (DR) solution is an important aspect of SAP system design. For customers running their SAP workloads on AWS, some of the key considerations for the design of the DR solution are single or multiple AWS Regions or Availability Zones, Service Level Agreements such as Recovery Point Objective (RPO) and Recovery Time Objective (RTO), and cost. To make it easy for you, SAP Specialist Solutions Architects have come up with Architecture Guidance for Availability and Reliability of SAP on AWS. This guide provides a set of architecture guidelines, strategies, and decisions for SAP customers and partners who have a requirement for deploying SAP NetWeaver-based systems with a highly available and reliable configuration on AWS.

The document High Availability and Disaster Recovery Options for SAP HANA on AWS describes all options for DR setup for SAP HANA databases on AWS, such as HANA System Replication (HSR) with data preload on or off, and backup and restore with Amazon Simple Storage Service (S3) Cross-Region Replication (CRR).

Customers can also use CloudEndure Disaster Recovery, the block-level replication tool from AWS, for the DR setup.

With AWS, the on-demand capabilities of Amazon Elastic Compute Cloud (EC2) provide customers with more options for their DR solution compared with a traditional on-premises setup.

In this blog, we are going to show how to build a low-cost DR solution using a passive DR approach for SAP HANA based applications. This solution is suitable for customers who wish to lower the cost of their DR solution and are able to accept higher RTO and RPO SLAs, when compared to data replication based solutions.

Solution Overview: Passive DR for SAP Applications in AWS

In this solution, the disaster recovery setup uses the backups of the primary SAP systems to build the DR environment. Setup is simplified using AWS services and features including AWS Backup, AWS CloudFormation and Amazon S3 CRR.

For this option, there are no Amazon Elastic Compute Cloud (EC2) instances or Amazon Elastic Block Store (EBS) volumes provisioned in the DR AWS Region during normal operations. Data is not replicated directly from the databases. The following approach is used to back up and copy the production system to the AWS Region selected for DR:

  • Use the AWS Backint Agent for SAP HANA, or other database-native tools, to take the database backup and store it in Amazon S3.
  • The production system backups (data and log) are replicated to the DR Region using Amazon S3 CRR.
  • AMIs (Amazon Machine Images) of the application servers and databases are created and replicated to the DR AWS Region using AWS Backup.
  • File system data (such as the transport directory and SAP mount directory) is backed up and replicated using AWS Backup.

In the DR Region, for testing or real DR events, SAP systems can be built on demand using the AMIs copied from the primary Region. To bring the latest data into the databases, backups can be restored with point-in-time recovery. The data and log backups for the SAP HANA database on AWS can be performed using native database tools with the AWS Backint Agent, or 3rd party tools. In this blog, we describe the scenario using the AWS Backint Agent. The RPO in this solution depends on the source system log backup frequency, the change rate, and the amount of time taken to copy the backed-up objects to the DR Region. The RTO depends on the database size and change rate. The RTO/RPO are typically higher than with other DR options in which data is replicated directly from the database (such as HANA System Replication or CloudEndure).

The following figure shows an example architecture for the DR solution based on backup/restore:

Example architecture for the DR solution based on backup/restore

By using an automation service such as AWS CloudFormation, customers can build their DR systems quickly and with minimum manual effort using the backups copied from the primary region.

The DR solution for an SAP system should also consider other supporting services such as Active Directory and DNS, which are not covered in this blog.

Steps to configure Disaster Recovery

In this blog, we show the disaster recovery setup using backup/restore for an SAP NetWeaver system with an SAP HANA database. This workload is distributed across multiple Amazon EC2 instances: one SAP HANA database instance, one ABAP SAP Central Services (ASCS) instance, one Primary Application Server (PAS) instance, and two additional application server instances.

Instance lists from AWS Console

It is assumed that the SAP application and database servers are already installed on these instances. If customers want to perform installation of primary systems as well, they can use AWS Launch Wizard for SAP.

In this blog, it is assumed that the setup is a cross-Region DR within the same AWS account. However, customers who have strict data residency requirements and can’t have data leave their primary Region, or who are sensitive to network performance for end-user access to the DR Region, can alternatively use this solution across multiple Availability Zones (AZs) within the same Region, instead of multiple Regions.

The following steps are designed for a cross-Region configuration and will need to be adapted if a cross-Availability Zone design is used instead. This includes changes such as removing the need to make an additional copy of backups and AMIs. If DR will be hosted in a separate AWS account, then additional steps are needed, such as properly assigning permissions and sharing encryption keys (if necessary) to allow access to the database backups and AMIs in the DR account.

Step 1: Create Tags to SAP EC2 Instances

On the source Amazon EC2 instances, create and assign tags to the SAP EC2 instances so that they can be uniquely identified by other AWS services in your AWS account.

Create & Assign Tags

Also, create similar tag(s) for the Amazon EFS file systems that you want to back up and copy to the DR Region. These file systems could be used for the SAP mount directory or the transport directory of an SAP system.

If you are using Amazon EFS for SAP file systems on the source servers, ensure that the ‘nofail’ parameter is set in /etc/fstab. Refer to the document Mounting your Amazon EFS file system automatically for details. This allows the DR instance to be launched from the AMI without any issues. When the server is available in the DR Region, the Amazon EFS file system is mounted again.

CreateAMIforDR - Tag

These tags are used in AWS Backup to configure a backup plan which creates AMIs of EC2 instances and backups of EFS File Systems.
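
Tagging can of course be done in the console, but if you have many SAP instances a small Python (boto3) script keeps the tags consistent. The instance IDs below are placeholders, and the tag key follows the example shown above; use whatever naming convention your backup plan will filter on.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholder instance IDs for the SAP HANA, ASCS, and application server instances.
    sap_instance_ids = ["i-0aaaaaaaaaaaaaaaa", "i-0bbbbbbbbbbbbbbbb", "i-0cccccccccccccccc"]

    ec2.create_tags(
        Resources=sap_instance_ids,
        Tags=[
            {"Key": "CreateAMIforDR", "Value": "true"},   # key later referenced by the AWS Backup plan
            {"Key": "Application", "Value": "SAP"},
        ],
    )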

Step 2: Configure AWS Backup to create AMIs and EFS File System backups

In the AWS Backup service, go to ‘Create Backup plan’ to copy the AMIs to the DR Region and choose ‘Build a new plan’.

Create Backup Plan

In the backup rule configuration, choose an option to manage the lifecycle, such as transitioning it to cold storage or expiring the backup after a defined period of time. Also, you can choose a specific backup vault, or choose the default vault to store the backups.

Backup rule configuration

Next, since you are using another Region for DR, choose it as the destination Region to copy the backups and AMIs to. The destination can also be in a different AWS account, with a different backup vault for the DR-side backups. The frequency can be aligned with your maintenance window so that any changes made to the source during maintenance are replicated to the DR Region with the AMI copy.

For cross account copy

To assign resources to the plan, you can use Tags or Resource IDs. In this blog, we are using the tags we created in Step 1. Also, choose an IAM role with sufficient permissions for AWS Backup to create and manage recovery points.

IAM Role assignment Resource assignment

Assign tags to the backup; this will help you identify the backups in the DR Region during the restore process.

Backup plan tags

If you want different frequencies for Amazon EC2 AMIs and Amazon EFS backups, you can create separate backup plans for these two resource types. To understand how AWS Backup encrypts the backups, refer to the guidelines Encryption for Backups. If this does not meet your requirements, you can also create AMIs or snapshots using other services like Amazon Data Lifecycle Manager or automate this process using tools like AWS CLI. You can automate copies to the DR region using these same automation tools.

Step 3: Configure Backup for the Database

Configure the backup of your SAP HANA database. You can configure the backup using the AWS Backint Agent for SAP HANA. The default configuration schedules the log backup at a frequency of every 15 minutes. The RPO for DR in this solution depends on this frequency, so it should be tuned based on your requirements. Once the configuration is complete and data and log backups are scheduled, the backups are stored in an Amazon S3 bucket in the primary Region. SAP lists a few options to schedule the database backups in SAP Note 2782059 – How to schedule Backups for HANA database (SAP S-User ID logon required):

  • hdbsql with the BACKUP DATA SQL query, using a shell script and cron jobs.
  • Backup schedule features in SAP HANA Cockpit (not in HANA Studio).
  • Schedule DB13 jobs using the DBA Cockpit.
  • Using XSEngine (Classic) Job Scheduler.

You can also use AWS Systems Manager Maintenance Windows for scheduling the databases backups centrally in AWS based on AWS resource tagging.

Step 4: Configure Amazon S3 CRR

Configure CRR from primary region to DR region for Amazon S3 bucket where the database backups are stored. You can find the steps for this configuration under Amazon S3 documentation for Replication.

S3 Cross Region Replication

With this setup, any backup file saved in the Amazon S3 bucket in primary region will be replicated to the DR region automatically. After completion of this step, you have the data required to build the SAP system in the DR region.
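
If you prefer to configure the replication rule programmatically rather than through the console, a minimal Python (boto3) sketch is shown below. The bucket names and IAM role ARN are placeholders, both buckets must have versioning enabled, and the role must grant the replication permissions described in the Amazon S3 documentation.

    import boto3

    s3 = boto3.client("s3")

    SOURCE_BUCKET = "sap-hana-backups-primary"                 # placeholder, primary Region bucket
    DEST_BUCKET_ARN = "arn:aws:s3:::sap-hana-backups-dr"       # placeholder, DR Region bucket
    REPLICATION_ROLE_ARN = "arn:aws:iam::111122223333:role/s3-crr-role"  # placeholder

    # Versioning is a prerequisite for Cross-Region Replication on both buckets.
    s3.put_bucket_versioning(
        Bucket=SOURCE_BUCKET,
        VersioningConfiguration={"Status": "Enabled"},
    )

    s3.put_bucket_replication(
        Bucket=SOURCE_BUCKET,
        ReplicationConfiguration={
            "Role": REPLICATION_ROLE_ARN,
            "Rules": [{
                "ID": "replicate-hana-backups-to-dr",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},                      # replicate the whole bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": DEST_BUCKET_ARN},
            }],
        },
    )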

Step 5: Build the SAP EC2 Instances in DR region

You can now see that the AMIs of the EC2 instances have been created and copied into the DR Region by AWS Backup. Go to the AWS Console in the DR Region and click on the AMI section; you can find all the latest AMIs, including those copied by AWS Backup. You can filter the resources based on the tags assigned in the backup plan and pick the latest AMIs to build the DR system. You can either select the AMIs one by one from the AWS Console and launch them in the DR Region, or use other tools such as the AWS CLI to launch these AMIs. You can also automate the launch using AWS CloudFormation to launch these AMIs faster and with less manual effort.

AMI's to be used for launching the target
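
The selection and launch can also be scripted. The Python (boto3) sketch below finds the most recent AMI copy carrying the tag from Step 1 and launches an instance from it in the DR Region; the tag key, Region, instance type, and network settings are placeholders to adapt to your environment.

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")   # placeholder DR Region

    # Find AMIs owned by this account that carry the DR tag assigned in Step 1.
    images = ec2.describe_images(
        Owners=["self"],
        Filters=[{"Name": "tag:CreateAMIforDR", "Values": ["true"]}],
    )["Images"]

    latest_ami = sorted(images, key=lambda image: image["CreationDate"])[-1]

    # Launch the DR instance from the most recent AMI copy (placeholder network settings).
    ec2.run_instances(
        ImageId=latest_ami["ImageId"],
        InstanceType="r5.8xlarge",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",
        SecurityGroupIds=["sg-0123456789abcdef0"],
    )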

Also, check from the AWS Console in the DR Region that the Amazon EFS file system backup has been copied there by AWS Backup. Go to the backup vault that you chose to store the EFS backups in the target Region and check the latest backup that you want to restore. The resource ID is the same as the file system ID of the source file system.

Backup vault for EFS

Check the right recovery point from the backup vault and start the restore process to a new file system.

Restore Backup

Once the restore job is complete, a new file system is created. Use this new file system ID in /etc/fstab on the Amazon EC2 instances and mount the file systems. If the restore takes a long time due to the file system size and you want to use a tool that performs delta replication at a defined frequency, you can alternatively use AWS DataSync to replicate to the Amazon EFS file system. Additional cost applies for using AWS DataSync for replication; refer to AWS DataSync pricing to calculate this cost.

Step 6: Perform the recovery of the Database

The AWS Backint Agent configuration parameters are maintained in a YAML file in the /<installation directory>/aws-backint-agent/ directory. The name of the configuration file is aws-backint-agent-config.yaml. To change the backup configuration on the DR database server, login to the database EC2 instance at operating system level. Next, update the entries in the backint configuration file such as the backup bucket name, AWS region, account ID and other settings as needed. After this, you can access the backups from SAP HANA database in the DR region through the backint agent installed on this server. Perform the restore and recovery following the procedure provided in the SAP HANA administrator guide.

Conclusion

When you run your SAP workloads on AWS, you have multiple options to design and deploy DR environments that take advantage of the unmatched reliability and flexibility of the AWS Global Infrastructure. Customers do not necessarily have to provision EC2 instances and EBS volumes for DR unless required to meet their RTO and RPO SLAs. As shown in this blog, a customer can simply replicate the backup data and build their SAP DR servers on demand, reducing the cost to deploy DR and therefore lowering the TCO of their SAP environment. To simplify the deployment and operations of such a model, AWS provides services like AWS Backup and the AWS Backint Agent for SAP HANA. AWS also provides services like AWS CloudFormation to automate DR testing and failover events.

To learn more about why 5,000+ customers choose AWS, visit aws.amazon.com.

How to connect SAP solutions running on AWS with AWS accounts and services


Feed: AWS for SAP.
Author: Arne Knoeller.

Connectivity and data exchange between different services and PaaS or SaaS solutions are important in today’s IT infrastructure. We hear from AWS customers who are using SAP services such as HANA Enterprise Cloud (HEC), RISE with SAP or SAP Business Technology Platform (BTP), that they wish to leverage the connectivity services provided by AWS to reduce complexity and costs while improving security and performance.

Customers require connectivity from on-premises to SAP’s solutions running on AWS – both for hybrid setups, where workloads and interfaces remain in customers’ on-premises data centers, and simply for user access to consume and connect to the SAP solutions. They also need to exchange data between SAP solutions and other services running on AWS. In this blog you are going to learn about the connectivity options for common SAP services running on AWS.

I want to explain the different options to setup the network connection from on-premise to the SAP solutions like SAP HANA Enterprise Cloud (HEC), RISE with SAP, SAP Business Technology Platform (BTP), SAP Analytics Cloud (SAC), SAP Data Warehouse Cloud and SAP HANA Cloud. In addition, I’m also going to show how to connect from a customer managed AWS accounts (named as “customer managed AWS account” in the following text) to the AWS account managed by SAP (named as “SAP managed AWS account”). This connection is important for customers who are already running on AWS and want to re-use the existing connectivity into AWS to connect planned and future SAP solutions with AWS services.

I won’t cover technical details about AWS network technology, but rather focus on how to connect to the mentioned SAP services above.

Depending on the SAP product, there are different connectivity options, which I want to describe in more detail:

SAP HANA Enterprise Cloud (HEC) / RISE with SAP

SAP HANA Enterprise Cloud (HEC) and RISE with SAP are SAP services running on AWS and are offered in different AWS Regions. As of today, SAP has enabled 17 out of 25 AWS Regions for this offering, with more to come. AWS offers different options to connect to your Amazon Virtual Private Cloud (VPC). Both managed services are considered private cloud offerings and thus require a private connection – AWS provides several options for private connectivity. The connectivity options supported by SAP are based on an AWS VPN connection and AWS Direct Connect.

AWS VPN

An easy and cost-efficient way to connect to the hosted SAP system on AWS is to connect via AWS Site-to-Site VPN. AWS Site-to-Site VPN creates encrypted tunnels between your network and your Amazon Virtual Private Clouds or AWS Transit Gateways. Traffic between on-premises and AWS is encrypted via IPsec and transferred through a secure tunnel using the public internet. The advantages of an AWS VPN connection are the efficient and fast implementation, as well as lower costs compared to AWS Direct Connect.

AWS Direct Connect

If you require a higher throughput and a more consistent network experience than internet-based connection, you can use AWS Direct Connect to connect between on-premises and the AWS cloud.

AWS Direct Connect is offered by multiple partners, and you can select from a range of bandwidth and implementation options. More information about the connectivity options can be found in the AWS whitepaper Amazon Virtual Private Cloud Connectivity Options, and resiliency considerations are documented in the AWS Direct Connect Resiliency Recommendations.

Direct Connect providers use dedicated, private network connections between the customer’s intranet and Amazon VPC. The traffic is not routed through the internet, providing more reliable bandwidth and throughput compared to VPN.

You can also leverage an existing AWS Direct Connect connection, used for other workloads on AWS for example, to connect to the SAP managed AWS account. The connection just needs to be extended with a virtual private gateway in the SAP managed AWS account, attached to the private virtual interface (VIF) or to the Direct Connect gateway.

Connectivity between AWS accounts

HEC and RISE with SAP are running in AWS accounts, managed and owned by SAP. However, you can create your own AWS account for additional workloads and to use native AWS services. There are two options to connect the SAP managed AWS account with your customer managed AWS account:

1. VPC Peering

Virtual Private Cloud (VPC) peering is a network connection between two VPCs, which enables traffic flow using private IPv4 addresses or IPv6 addresses. Instances can communicate with each other as if they were in the same network.

To peer two VPCs, the defined IPv4 Classless Inter-Domain Routing (CIDR) blocks must not overlap, otherwise the peering connection will fail. It's recommended to align with SAP to define the CIDR ranges and to make sure the SAP managed ranges fit into your network concept. Once the peering connection is requested, SAP needs to accept the peering connection in their AWS VPC.
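
As a minimal sketch of this handshake (all account IDs, VPC IDs, and CIDR ranges are placeholders), the request and the return route on the customer side could look like this; SAP accepts the peering request from the SAP managed AWS account:

# Request a peering connection from the customer managed VPC to the SAP managed VPC
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0aaaaaaaaaaaaaaaaa \
  --peer-vpc-id vpc-0bbbbbbbbbbbbbbbbb \
  --peer-owner-id 111122223333 \
  --peer-region eu-central-1   # only required if the SAP managed VPC is in a different region

# After SAP has accepted the request, route the SAP managed CIDR range through the peering connection
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.20.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0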

VPC peering is a one-to-one connection between VPCs. If you require direct communication between multiple VPCs and the managed SAP service, you need to set up multiple peering connections. With many AWS accounts and VPCs this can become complex and hard to manage, which is why option 2 (see below) should be considered for such scenarios.

VPC peering works also across AWS regions. So, it’s possible to peer a customer account running in eu-west-1 with the SAP account in eu-central-1 for example. All inter-region traffic is encrypted with no single point of failure, or bandwidth bottleneck. Traffic always stays on the global AWS backbone, and never traverses the public internet, which reduces threats, such as common exploits and DDoS attacks.

VPC Peering

Connectivity via VPC Peering across regions

Another benefit, besides the simple setup and the cross-region capabilities, is the lower cost of VPC peering compared to the AWS Transit Gateway or routing the traffic via on-premise. Recently AWS announced a pricing change for VPC peering: starting May 1st 2021, all data transfer over a VPC peering connection that stays within an Availability Zone (AZ) is free.

You can request the AZ ID from SAP to make sure it is the same as the AZ ID used in the customer managed AWS account.

2. AWS Transit Gateway

The second option to connect two or more AWS accounts is AWS Transit Gateway. AWS Transit Gateway is a network transit hub which can be used to interconnect Amazon VPCs. It acts as a cloud router, and the connection to the SAP managed AWS account only needs to be established once. Complex peering setups can be resolved and simplified by implementing AWS Transit Gateway as a central communication hub.

To connect the SAP managed AWS account, you create the AWS Transit Gateway in your own AWS account and share it with the SAP managed AWS account. Afterwards, SAP can attach the VPC of the managed SAP service to the AWS Transit Gateway and enable traffic flow through an entry in the route table. With this setup you keep control over traffic routing, because the AWS Transit Gateway resides in your own account where it can be managed.
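
A simplified sketch of these steps with the AWS CLI could look as follows. The transit gateway ARN, account IDs, and VPC/subnet IDs are placeholders, and the attachment for the managed SAP service itself is created by SAP from their account:

# Create the transit gateway in the customer managed AWS account
aws ec2 create-transit-gateway --description "Central hub for SAP connectivity"

# Share the transit gateway with the SAP managed AWS account via AWS Resource Access Manager
aws ram create-resource-share \
  --name sap-tgw-share \
  --resource-arns arn:aws:ec2:eu-central-1:111122223333:transit-gateway/tgw-0123456789abcdef0 \
  --principals 999988887777

# Attach one of your own VPCs to the transit gateway
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0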

AWS Transit Gateway

Connectivity via AWS Transit Gateway

To connect multiple VPCs across AWS accounts and AWS regions, you can establish a peering connection between multiple AWS Transit Gateways in different regions.

AWS Transit Gateway Cross Regions

Connectivity via AWS Transit Gateway across regions

By using peering between AWS Transit Gateways across regions, the traffic also stays within the AWS network and the same considerations as described in the VPC peering option apply. This is also valid for the non-overlapping IPv4 CIDR ranges for the different VPCs.

If you are using an AWS Transit Gateway in combination with AWS Direct Connect, you can also use this setup to route traffic from the SAP managed AWS account to on-premise and vice versa, as well as to connect between AWS accounts.

SAP Business Technology Platform

SAP Business Technology Platform (BTP) offers a variety of services and provides different environments, such as Cloud Foundry, ABAP, and Kyma. All three environments run on AWS – Kyma is the latest addition, released on April 24. As of today, SAP BTP is available in 9 commercial AWS regions.

To connect to the BTP services, you can access the public endpoints via the internet. If you require a more consistent network experience, AWS Direct Connect can also be used to connect to BTP. However, in this case the Direct Connect connection is established between the on-premise network and the public AWS endpoints. For HEC and RISE with SAP, AWS Direct Connect uses a private virtual interface, which accesses resources in the VPC and connects to the private IP addresses of the resources. To access BTP via AWS Direct Connect you need to connect to public IP addresses, using the public virtual interface. To learn more about these differences, please refer to the AWS Knowledge Center.

A step-by-step guide on how to set up Direct Connect is described in the SAP blog Accessing SAP Cloud Platform via AWS Direct Connect.

SAP Cloud Connector

To connect BTP services with SAP systems running on AWS, the SAP Cloud Connector (SCC) is the recommended solution. The SAP Cloud Connector establishes a secure communication channel between BTP services and the SAP systems, without exposing the SAP systems to the internet. It is not required to open inbound connections in the security groups or to use reverse proxies in the DMZ to establish access to the SAP systems. The SAP Cloud Connector acts as a reverse invoke proxy and establishes a persistent TLS tunnel to SAP BTP sub-accounts. This architecture reduces the attack surface, because the backend SAP systems are not visible to the internet.

The SAP Cloud Connector offers a software-based HA implementation to protect against failures, or you can implement the connector in an Amazon EC2 Auto Scaling group to protect against EC2 instance failures, as shown in the architecture picture below.

SAP Cloud Connector

SAP BTP connectivity via SAP Cloud Connector

SAP Data Warehouse Cloud, SAP Analytics Cloud and SAP HANA Cloud

These are all SaaS or PaaS solutions, offered via SAP BTP and running in a multi-tenant environment. That's why it's not possible to establish a one-to-one connection between the on-premise network or a customer managed AWS account and the VPC of the SaaS/PaaS solution in the SAP managed AWS account. VPC peering or AWS Transit Gateway can't be used to connect these solutions with additional AWS accounts. However, the same connectivity principles as for BTP connectivity apply.

You can use the SAP Cloud Connector to connect to SAP systems running on AWS, such as S/4HANA or BW/4HANA. In addition to the direct backend integration with the SAP Cloud Connector, all three services offer a direct integration with a variety of AWS services, such as Amazon S3.

SAP Data Warehouse Cloud can connect to Amazon S3, Amazon Redshift, or Amazon Athena for example. You can find more information in the SAP Discovery Center.

SAP Analytics Cloud offers integration to Amazon S3 (via open connectors), Amazon Redshift and Amazon EMR.

SAP HANA Cloud can connect to Amazon S3 and Amazon Athena.

For additional connectivity information and data sources, please have a look at the SAC documentation, DWC documentation or HANA Cloud documentation.

Summary

For managed offerings like HEC and RISE with SAP, VPC peering is a simple and efficient way to connect the customer managed AWS accounts with the SAP managed AWS account where the SAP services run. AWS Transit Gateway is a good solution for more complex network setups and for connecting the SAP managed AWS account with a large number of other AWS accounts and VPCs. Customers need to consider that the AWS Transit Gateway can only reside in the customer AWS account.
Customers can leverage existing connections to AWS via AWS VPN or AWS Direct Connect and connect AWS resources with the described connectivity options. It is recommended to use AWS-to-AWS communication and not route traffic via on-premise if it's not required. With that, you benefit from AWS network speed, latency, and security.
SAP BTP services offer public interfaces, and you can connect to the multi-tenant services offered by SAP BTP in a secure, TLS-encrypted way with the SAP Cloud Connector.

To learn why AWS is the platform of choice and innovation for 5000+ SAP customers, visit aws.amazon.com/sap.

Automate Start or Stop of Distributed SAP HANA systems using AWS Systems Manager

$
0
0

Feed: AWS for SAP.
Author: Nerys Olver.

In recent blogs, we’ve discussed the advantages of DevOps for SAP for both build and operations and explored using AWS Chatbot to initiate Stop/Start. Today we help you fast-track the use of AWS native services for SAP operations by open-sourcing an AWS Systems Manager based solution that will Start and Stop distributed SAP HANA landscapes installed on a Linux Operating System.

You can deploy this solution in less than 5 minutes using an AWS CloudFormation template. It will facilitate an application aware stop and start of the SAP workload and EC2 instances and will notify an email address of success or failure. The only prerequisites are that the instances are set up as managed instances in AWS Systems Manager and that the tagging strategy outlined in the readme has been applied.

Before we dive into the solution, let’s examine one of the reasons why it was developed.

The Use Case

5000+ customers globally have begun their SAP on AWS journey, either migrating existing SAP workloads as-is or exploring the implementation of Suite on HANA or S/4HANA as part of their move to AWS.

While evaluating the total cost of ownership for these cloud-based SAP deployments, many have examined the flexible commercial models available from AWS and identified opportunities to reduce costs using a combination of Savings Plans, Amazon EC2 Reserved Instances, and reduced operating hours using Amazon EC2 On-Demand Pricing for systems that are not required 24×7.

For many customers, non-production SAP systems including development, test, training, and sandbox instances are not in constant use. Some or all of these systems may have low uptime requirements or a short-lived role in the project cycle, in which case on-demand pricing may be a more cost-effective option.

For example, an r5.16xlarge EC2 instance that runs less than 67 hours a week in us-west-1 is cheaper using On-Demand pricing than even our most cost-effective pricing model. See the comparative costs in the diagram below and use the AWS Pricing Calculator to perform similar comparisons for your instance type and region.

A bar chart showing a cost comparison of the different pricing models for an R5.16xlarge. The break even point is 67 hours a week.

Controlling uptime is a great way to reduce costs, but it comes with the operational challenges of shutting down EC2 instances when they are not in use, for example overnight and on weekends.  It also requires the stop/start process to be application-aware in a distributed landscape. For example, an SAP application server instance cannot start until its corresponding database instance is up and running, and during shut down, stopping a database host without first cleanly stopping the database application can increase the risk of database corruption.

We often see an evolution in how customers address this challenge. In the project phases – while you become familiar with the cloud setup and the operating model – it is not unusual to see a traditional approach where your SAP Basis Administrator logs on to individual systems to execute multiple startup and shutdown commands. In addition to this being a lengthy manual process, it is prone to errors and can introduce risks if the appropriate controls are not in place.

The next iteration we often see uses a combination of scripts scheduled in cron and/or tag-based shutdown/startup schedules. This reduces the manual steps, but still has the challenge of coordinating cross-instance dependencies and can fail to minimize system uptime to only when the system is in use. Manual setup effort, scheduling changes, and visibility are other concerns with this approach.

Finally, we see some customers adopt off-the-shelf solutions to manage their SAP landscapes. This approach can make sense if the product provides a broad set of operational functionality and is integrated with SAP and AWS functions. In your evaluation ensure that costs relating to licensing, hosting, implementation and support are factored in.

Understanding the limitations of all these solutions, we looked for an alternative. The AWS cloud-native solution we have developed can be broken down into two parts:

  • SAP Aware Systems Manager Automation Documents (Runbooks). An SAP-aware and cloud-native method of starting/stopping systems that move all instance operations (including instance start) to a central control mechanism.
  • Execution and Notification Frameworks. Options to remove or reduce the dependency on the technical team but ensure the necessary controls and visibility are in place.

The resulting solution has been used by customers including CHS Inc., a diversified global agribusiness cooperative, who have continued their DevOps for SAP journey since migrating to AWS. CHS leveraged this automation to shut down their non-production SAP systems after business hours, all with no human intervention. CHS quickly saw meaningful cost savings by keeping non-productions shut down during non-business hours.

The Solution

Let’s take a more detailed look at the two elements we described:

SAP Aware Systems Manager Automation Documents

AWS Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and automate operational tasks across your AWS resources. For operators with a system administration background, this should be easy to configure using a combination of predefined automation playbooks, RunCommand modules which allow writing simple bash scripts, and the occasional decision step.

Each step is designed to perform a single action or step, allowing the elements to be built, chained together, and reused but also giving improved visibility and control. (This becomes a key element of the framework later on).

We chose RunCommand and bash scripts because this aligns with the "command line" usage that SAP administrators are familiar with. We also tried to minimize the configuration and input required, using queries on the host to identify what was running and to derive the parameters required for issuing commands. To tie the execution together and identify instances, SSM Automation document parameters, outputs, and instance tags were used.

Once you have deployed the document, you can review the Markdown text descriptions to understand the steps in more detail.

As an example, the following are amongst the steps executed if the start action is selected.

 Name | Action | Description
 START_QUERY_AWS_DBInstanceId | aws:executeAwsApi | Runs EC2 DescribeInstances to query tag values and identify the DB server for a particular SID.
 START_AWS_DBInstanceId | aws:changeInstanceState | Changes the desired state of the instance to "running".
 START_DB_HANA | aws:runCommand | Short bash script that calls the sapcontrol start function and validates that the system is running.

The following code snippet gives an example of the bash script used to run sapcontrol commands. See Note 1763593 – Starting and stopping SAP system instances – startsap/stopsap are deprecated (requires SAP s-user)

#!/bin/bash
# Derive the HANA profile, SID, and instance number from the entry registered in /usr/sap/sapservices
SAPProfile=$(grep -o 'pf=[^[:space:]]*' /usr/sap/sapservices | grep _HDB)
SID=$(echo "${SAPProfile}" | awk -F "/" '{print $4}')
SYSTEMNO=$(echo "${SAPProfile}" | awk -F "/" '{print $7}' | awk -F "_" '{print $2}' | sed -E 's@^[^0-9]*([0-9]+).*@\1@')

echo "Running /usr/sap/hostctrl/exe/sapcontrol -nr ${SYSTEMNO} -function GetSystemInstanceList"

# Check that sapstartsrv responds for this instance before attempting the start
/usr/sap/hostctrl/exe/sapcontrol -nr ${SYSTEMNO} -function GetSystemInstanceList
if [ $? -eq 0 ]
then
    echo "There is a system here, I am going to use start"
    /usr/sap/hostctrl/exe/sapcontrol -nr ${SYSTEMNO} -function StartWait 600 2
else
    echo "There is no service running for instance number ${SYSTEMNO}"
    exit 1
fi

# GetProcessList returns 3 when all processes are running (GREEN)
/usr/sap/hostctrl/exe/sapcontrol -nr ${SYSTEMNO} -function GetProcessList
RC=$?
if [ ${RC} -eq 3 ]
then
    echo "$(date '+%F %T'): The HANA Database Started Successfully"
else
    echo "The return code was ${RC}"
    exit 1
fi

Runbook Execution and Notification Frameworks

As an added feature, consider how you could trigger the Systems Manager Runbook with the appropriate security controls and governance to align with the criticality of your system.

If granting access from an external source, ensure that you follow the best practices for IAM and run the document using roles with the least privilege.

Start by executing the automation manually to gain familiarity with the document. As a next step, explore the options for scheduling using triggers such as Amazon EventBridge patterns that configure runbooks as a target of an EventBridge event rule. The options and details are best covered in the Systems Manager documentation.
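
For a first manual run, an execution could be started from the command line roughly as shown below. The document name and parameter names are placeholders for the names used in your deployment, so adjust them to match the deployed runbook:

# Trigger the start action of the deployed SAP start/stop runbook for one SAP system
aws ssm start-automation-execution \
  --document-name "SAPStartStopAutomation" \
  --parameters '{"Action":["Start"],"SID":["HDB"]}'

# Follow the status of recent executions of that document
aws ssm describe-automation-executions \
  --filters Key=DocumentNamePrefix,Values=SAPStartStop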

If schedules are not fixed, these mechanisms can be extended for on-demand and ad-hoc usage. It is possible to execute the runbook and receive notifications on the status by integrating with other services, including email and AWS Chatbot. More complex frameworks could be designed to work with external approval processes and instant messaging services.

In the CloudFormation template provided, we have adopted a simple solution using Amazon EventBridge and Amazon SNS. EventBridge identifies changes in the runbook status using rules that send emails on completion or failure, including a link to the runbook execution detail in the AWS console.
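
If you prefer to wire this up manually rather than through the template, a minimal sketch with the AWS CLI could look like this. The rule name and SNS topic ARN are placeholders, and the detail-type shown is the one EventBridge uses for Systems Manager Automation status changes; verify it against the EventBridge documentation for your setup:

# Match Systems Manager Automation executions that succeed, fail, or time out
aws events put-rule --name sap-automation-status \
  --event-pattern '{
    "source": ["aws.ssm"],
    "detail-type": ["EC2 Automation Execution Status-change Notification"],
    "detail": {"Status": ["Success", "Failed", "TimedOut"]}
  }'

# Forward matched events to an SNS topic that notifies the operations team by email
aws events put-targets --rule sap-automation-status \
  --targets '[{"Id":"notify-ops","Arn":"arn:aws:sns:eu-central-1:111122223333:sap-automation-alerts"}]'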

A screenshot with the execution detail for the systems manager document

Getting Started

The CloudFormation document to deploy this solution is available for use in GitHub. Take a look at the README for information on how to set up your environment and the services which will be created.

Different deployment options - single systems, semi distributed and distributed

What’s next

Although we hope you can use the document as is, what excites us most is the extensibility of this solution. The options for use in an SAP operational context are endless.

This document could easily be modified to

  • Work with different database types
  • Use PowerShell commands for Windows workloads
  • Incorporate additional steps for clustered highly available central services or database setups
  • Add additional pre or post steps to your start-up and shutdown procedure, such as ensuring backup completion

A similar style of document and framework could be used for

  • Coordinating database and filesystem backups
  • Daily checks and housekeeping with logging capability
  • Resizing instances or changing storage characteristics for performance testing

As you start to explore, you’ll see that most combinations of AWS API calls and Operating System commands are possible. It is also possible to incorporate steps that rely on AWS Lambda. We look forward to seeing what our customers build, and if you need a hand with automating operational activities using native services, consider engaging the AWS Professional Services Global SAP Specialty Practice.

SAP on AWS: Build for availability and reliability

$
0
0

Feed: AWS for SAP.
Author: Ajay Kande.

In the words of Amazon CEO Andy Jassy ‘there is no compression algorithm for experience’. With over 5000 SAP customers on AWS, AWS has become a platform for innovation for SAP workloads. With our working backwards leadership principle, AWS has built several tools and services to help SAP customers build robust, reliable and scalable SAP systems on AWS regions across the world. In this blog, we will discuss various AWS services to build reliable SAP systems on the platform.

A robust backup policy is at the center of an enterprise’s business continuity and disaster recovery (DR) strategies. When migrating to AWS, customers can adopt new tools and services available on AWS, to simplify their SAP applications’ recovery steps from various availability events.

When planning a backup policy for your applications, consider a combination of file backups and storage snapshots to minimize recovery time objective (RTO). Mission critical applications need protection from events that occur within an AWS availability zone (AZ), as well as events that can affect an entire region. Before we go any further, it’s important to understand the system design principles for SAP systems on AWS that are detailed in the technical document, “Architecture Guidance for Availability and Reliability of SAP on AWS”, as we will extend on that topic in this blog post.

Building a backup policy on AWS

Below are some of the key services and features that can be used to build a backup policy for SAP applications and databases.

Backup of HANA databases

AWS Backint Agent for SAP HANA is used to back up SAP HANA databases directly to Amazon Simple Storage Service (Amazon S3) buckets and to restore them using SAP management tools such as SAP HANA Cockpit. Optionally, you can use SAP Backint Agent for Amazon S3, as per SAP Note 2935898. It is also possible to add a storage snapshot based backup and recovery strategy to your backup policy for your SAP HANA databases. When deploying HANA using AWS Launch Wizard for SAP, you can choose to install the Backint agent to integrate backups with Amazon S3.
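
Once the agent is configured, a backup can be triggered through the regular SAP HANA SQL interface, for example with hdbsql. The instance number, database, user, and backup prefix below are placeholders, and the exact statement may differ for your tenant database setup:

# Run as the <sid>adm user on the HANA host; the backup is written through the Backint interface to Amazon S3
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p '<password>' \
  "BACKUP DATA USING BACKINT ('COMPLETE_DATA_BACKUP')"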

Backup of AnyDB databases

AnyDB (non-HANA) databases running SAP can be backed up to files on an Amazon Elastic Block Store (EBS) volume for staging your backups. Once the backups are finished, the files can be uploaded to Amazon S3 using the AWS CLI. The process can be automated using features of AWS Systems Manager (SSM).
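
A simple sketch of that upload step with the AWS CLI could look like the following; the staging path and bucket name are placeholders:

# Copy the staged backup files from the EBS-backed staging directory to Amazon S3
aws s3 sync /backup/ANYDB/ s3://my-sap-db-backups/ANYDB/ --storage-class STANDARD_IA

# Optionally verify what arrived in the bucket
aws s3 ls s3://my-sap-db-backups/ANYDB/ --recursive --summarize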

Alternative approaches to backup non-HANA databases, using database and AWS native features, include:

Oracle Database:

Customers running SAP with Oracle Database on AWS can use the Oracle Secure Backup (OSB) Cloud Module to integrate Oracle backups with Amazon S3. This feature is available with Oracle Database 9i Release 2 or later. When using RMAN, multiple backup channels can be used to improve performance.

Customers running SAP workloads with Oracle can also leverage native Amazon EBS multi-volume crash-consistent snapshots to perform backup and recovery for Oracle databases. This approach saves storage cost, as snapshots are incremental, and Amazon EBS Fast Snapshot Restore improves restore time. Please look at the Kellogg's customer story for the benefits of this approach.

SAP ASE Database:

Customers running SAP workloads with the SAP ASE (Adaptive Server Enterprise) database can use Amazon S3 as their backup storage. This solution needs AWS File Gateway, which is used to transfer data asynchronously to Amazon S3 over an HTTPS connection. SAP has also performed a detailed analysis and shared a solution with configuration steps for this approach.

Similar to other databases, the SAP ASE database also provides the option to leverage the Amazon EBS snapshots option for backup and restore operations. Refer to this blog for detailed steps to perform and automate backup/restore operations on SAP ASE database using Amazon EBS Snapshots.

Microsoft SQL Server Database:

MS SQL Server running on Microsoft Windows can use the VSS (Volume Shadow Copy Service) feature to perform consistent database backups. VSS is also integrated with AWS Backup, which makes administration of backup/restore operations easier. Please refer to the blog for detailed configuration and testing steps.

Backup using third party tools

Many third-party enterprise backup tools are able to read and write backups to Amazon S3 and integrate with the SAP Backint interface for supported backup methods. If you are already using a tool and would like to use it on AWS, please check with the vendor for integration with Amazon S3. We also have solutions offered by our partners, such as Linke Emory Cloud Backup, where SAP HANA, Oracle, and SAP ASE database backups can be stored and recovered directly from Amazon S3. Partners like Commvault, Veritas, N2WS, Actifio, and others are also able to write SAP HANA and AnyDB database backups directly into Amazon S3 buckets. The solutions offered by these ISVs may provide additional features such as de-duplication, encryption, and compression, and may save you money by reducing or removing the need for EBS storage for backup staging.

Amazon S3 Replication

Amazon S3 Replication is a feature which can be used to replicate all or a subset of your SAP database backups stored in an S3 bucket to a separate S3 bucket for alternative recovery purposes. S3 Cross-Region Replication adds the capability to replicate these files to a different AWS region for Disaster Recovery purposes. And with S3 Replication Time Control, you can replicate your S3 objects within a predictable time frame, backed by a Service Level Agreement from AWS. When replicating backup files in S3 to a DR region, you can choose the S3 Standard-Infrequent Access (S3 Standard-IA) storage class in the DR region to store your backups there at a lower cost.

AWS Backup

AWS Backup provides a control plane to manage backups for services such as Amazon EBS, Amazon EFS, Amazon EC2, Amazon DynamoDB, Amazon Aurora, and AWS Storage Gateway. An AWS Backup plan can also copy the backups to another region for Disaster Recovery. SAP applications often use services such as EBS, EFS, and EC2, which can be backed up using AWS Backup.

Database backups via Amazon Elastic Block Storage(EBS) snapshots

Databases can be backed up using snapshots. Snapshots are incremental, meaning only the blocks on the device that have changed after your most recent snapshot are saved. Snapshots are also well suited to back up SAP file systems such as /usr/sap/* and /sapmnt/*. When using EBS snapshots to back up databases, make sure the database is in "backup mode" or shut down before a snapshot is triggered, to ensure consistency.

Database "backup mode" is invoked to pause I/O operations to the storage area for an application-consistent snapshot. Most modern databases provide a "backup mode" option, including HANA. Note that when you run your database on LVM striped volumes, you must make sure snapshots are initiated on all EBS volumes in the volume group. Amazon EBS snapshots can be scheduled using AWS Backup.
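
For a striped volume group, the multi-volume snapshot API is a convenient way to capture all attached data volumes at the same point in time. The instance ID and tags below are placeholders, and the database should be placed in backup mode (or stopped) before the call:

# Crash-consistent, point-in-time snapshots of all EBS volumes attached to the instance
aws ec2 create-snapshots \
  --instance-specification InstanceId=i-0123456789abcdef0,ExcludeBootVolume=true \
  --description "SAP database volume group snapshot" \
  --tag-specifications 'ResourceType=snapshot,Tags=[{Key=Workload,Value=SAP}]'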

Amazon Elastic File System(EFS) backups

Amazon EFS file systems are used for hosting the saptrans and sapglobal file systems. These files are shared across multiple EC2 instances running SAP. Amazon EFS can be backed up using AWS Backup, either on a schedule or on demand. Using AWS Backup you can also replicate your backups across regions to meet your DR requirements. Additionally, SAP customers also replicate their EFS file systems to their DR region using AWS DataSync.

Amazon AMI backups

AMI (Amazon Machine Image) backups provide a fully recoverable copy of your entire EC2 instance, including all EBS volumes. This can be used for a quick rollback of a change made to the entire database or application. For instance, when you are applying an OS, database, or application patch, an AMI backup provides a solid rollback option to recover from failure. You can use AWS Backup to schedule AMI backups periodically, or simply create an on-demand backup.
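
As a sketch, an on-demand AMI before a patch window could be created as shown below; the instance ID and naming convention are placeholders:

# Create an AMI of the instance before patching; --no-reboot avoids downtime,
# but for full consistency consider stopping the SAP workload first
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "sap-app01-pre-patch-$(date +%Y%m%d)" \
  --no-reboot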

CloudEndure Disaster Recovery

CloudEndure continuously replicates your machines (including operating system, system state configuration, databases, applications, and files) into a low-cost staging area in your target AWS account and preferred Region. This enables minimal Recovery Point Objectives (RPO) for all applications and databases running on supported operating systems. CloudEndure is also widely adopted for replicating SAP applications across regions.

EC2 Auto recovery

You can automatically recover an impaired instance using an Amazon CloudWatch alarm, which executes a recovery action. EC2 instances running SAP applications can take advantage of the EC2 auto recovery feature to improve availability within an Availability Zone. AWS recommends enabling this feature to lower the RTO from failures.
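
A minimal sketch of such an alarm is shown below; the region and instance ID are placeholders:

# Recover the instance automatically when the EC2 system status check fails for two consecutive minutes
aws cloudwatch put-metric-alarm \
  --alarm-name sap-app01-auto-recover \
  --namespace AWS/EC2 \
  --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Minimum --period 60 --evaluation-periods 2 \
  --threshold 0 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:automate:eu-central-1:ec2:recover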

Let’s take a look at the architecture below to understands how various AWS services discussed so far can be used for building reliable SAP solutions on AWS.

Architecture diagram showing how AWS services can be used for building reliable SAP solutions on AWS

1. AWS backint agent backing up a SAP HANA database to S3 bucket in Region1
2. Backup bucket A replicated to another region using Amazon S3 cross region replication feature
3. Amazon services EC2(AMI), EBS, EFS snapshots replicated across regions
4. CloudEndure replicating SAP application server to another region
5. Database replicated to DR region using log replication
6. Amazon CloudWatch alarm for auto recover enabled on all hosts for protection from component failures
7. Database high availability cluster across Availability Zones
8. Application high availability cluster across Availability Zones

Using your backup policy to recover from events

Now that we know all the tools to help us build reliable backup policies, let's take a look at service events and how to recover from each.

Scenario 1: Amazon EC2 event

These are events within an AWS Availability Zone arising out of common issues with networking, power, software bugs at the hypervisor layer, etc. EC2 auto recovery provides a robust solution to recover from such events. The SAP HANA database provides an auto-restart function, which simply starts the database upon intended or unintended host reboots. Applications and databases that do not provide this feature may have to rely on bootstrapping with a shell script.

Though auto recovery provides a quick means of recovering from various events, certain applications such as S/4HANA may need to operate at a near-zero recovery time objective (RTO). This can be achieved with an SAP native high availability configuration using pacemaker solutions offered by SUSE or Red Hat, or other third-party clustering software providers.

Scenario 2: Amazon EBS events
Scenario 2a: Independent or single EBS volume event

SAP Databases: an Amazon EBS volume can be recovered from an EBS snapshot of that volume. This recovery is consistent only if the snapshot was invoked after the database was held in backup mode. However, when recovering from a snapshot that was taken without invoking backup mode, you can achieve consistency by applying database log files, as you will be working from a known database checkpoint. One example of this approach is described in the blog post, “How to use snapshots for SAP HANA database to create an automated recovery procedure”, in the AWS for SAP blogs.

SAP Applications: Amazon EBS snapshot is a frequently used method to meet the RPO requirements of SAP application servers. As Application servers do not store data that requires long-term persistence, failed volumes can be recovered from EBS snapshots made from the application server’s EBS volumes.

Scenario 2b: EBS volume in a volume group

Often SAP databases are installed on a volume group, using the operating system's logical volume manager (LVM), which is striped across multiple volumes for performance. During an event with an EBS volume in a volume group, you have to rebuild the volume group and use a roll-forward recovery approach: restore the database from a known good backup, then apply database logs to bring the database back to the desired point in time prior to the failure.

An alternative approach may be used by following the steps in Scenario 2a, for using EBS snapshots to backup and restore the database data.

Scenario 3: Availability Zone events

Production

Availability zone failure can impact SAP applications and databases. Mission critical production SAP applications and databases are deployed with highly available architecture across AZs to mitigate events within an AWS Availability Zone. You can further lower your RTO by clustering SAP applications across AZs using third party products.

Non-production

Non-production SAP instances are often deployed in a single AZ. Such applications rely on Amazon AMI backups, file backups, and EBS snapshots. During an event within an AWS Availability Zone, you can recover the impaired EC2 instances from a recent AMI and recover your database to a known good point in one of the available AZs within the region.

Scenario 4: Regional events

Amazon AMIs, Amazon EBS snapshots, Amazon S3 buckets with database backup files, and Amazon EFS file systems can be replicated across regions for Disaster Recovery. Tools such as CloudEndure can also be used to replicate EC2 instances, or even on-premises servers, across regions, which helps in building reliable DR strategies at low cost. Customers often run tabletop DR tests to exercise how well they are prepared for an event, and constantly update their DR strategies.

Databases

To recover databases from regional events, launch an EC2 instance using the latest AMI of the database instance, and recover the database using its backup. To achieve this, you need to ensure AMIs and database backups are copied across the regions during normal operations. Features such as S3 Replication, AWS DataSync, and AWS Backup can help achieve replication of data across the regions.

For low recovery time and recovery point objectives (RTO/RPO), databases can replicate data across regions using native database replication technologies. During a regional event, the replicated database in the DR region can take over as the primary. This approach provides the lowest RPO possible.

Snapshots provide a second layer of protection for reliability. You can back up the data on your Amazon EBS volumes by taking point-in-time snapshots.

Applications

SAP application servers can be backed up using AMIs. These AMIs can be copied across to the designated DR region by either AWS Backup or the AMI copy feature. During a DR event, use the latest AMI copy to launch your application server EC2 instances. An AMI copy based DR strategy provides a low-cost DR approach.

Alternatively, CloudEndure Disaster Recovery can be used for replicating SAP application servers across regions. This approach can ensure the replicated VMs are current, in terms of operating system patches, configuration, and SAP kernel version, ready to go online during an event.

Conclusion

In this blog, we discussed various tools available to back up your SAP resources on AWS. We also looked at several availability events and how to recover from each of them. I hope this will help you develop resilient business continuity and disaster recovery strategies for your SAP workloads on AWS. Let us know if you have any comments or questions — we value your feedback.

Securing SAP Fiori with AWS WAF (Web Application Firewall)

$
0
0

Feed: AWS for SAP.
Author: Ferry Mulyadi.

Introduction

When SAP customers embark on a journey to implement SAP S/4HANA, many have questions on how to secure SAP Fiori, the User Interface component of SAP applications based on HTML5. As the starting point, we will refer to The OWASP Top 10 (Open Web Application Security Project), which represents a broad consensus about the most critical security risks to web applications. By adopting this, you start the mitigation process of ensuring that the security risks to SAP Fiori are minimized.

Even though SAP provides a secure programming guide to mitigate these security risks, it is preferable that mitigation is in place before web requests reach the SAP Fiori system, to ensure that resources are always available to serve end users executing their business processes. We will introduce AWS WAF, a web application firewall that filters web requests before they reach SAP Fiori.

With WAF, you can allow or block web requests to SAP Fiori by defining customizable web security rules and leverage AWS Managed Rules. AWS Managed Rules for AWS WAF is a managed service that provides protection against common application vulnerabilities or other unwanted traffic, without having to write your own rules. When your security requirement goes beyond AWS managed rules, you can review the Managed Rules being offered in AWS Marketplace, which are created by security experts with extensive and up-to-date knowledge of threats and vulnerabilities. You can find out more here.

In this blog, we will provide a how-to guide for SAP customers on AWS to implement AWS WAF and AWS Managed Rules with SAP Fiori. This is the extension of the related blog series on Improving SAP Fiori performance with Amazon CloudFront and Securing SAP Fiori with Multi Factor Authentication. You can also find out more details of WAF in Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities and AWS Best Practices for DDoS Resiliency.

Solution Overview

SAP Fiori with WAF Architecture

The diagram above describes the traffic flow between end-user with web browser and SAP Fiori. This simplified architecture will be used to describe the deployment process of AWS WAF and SAP Fiori. For productive deployment, further enhancement to this architecture will be required depending on your requirements, such as High Availability setup across two AWS Availability Zones, and connectivity to your own on-premises data centre, etc.

The Application Load Balancer (ALB) will route the traffic through AWS WAF so it can be inspected for security risk mitigation based on web security rules defined. When the traffic is allowed, it will then be returned to ALB and continued to back-end SAP Web Dispatcher and/or SAP Fiori.

When you want to deploy SAP Fiori globally, you can integrate AWS WAF with Amazon CloudFront as described here, as it allows you to reduce the impact to SAP Fiori system availability for end users by delegating the protection of AWS WAF to CloudFront edge locations. AWS WAF comes with pre-configured rules and tutorials that allow you to automate security using AWS Lambda.

Prerequisites

Since we will integrate WAF with ALB, the ALB must be configured to point to either SAP Web Dispatcher or SAP Fiori, through its target group. You can follow the step-by-step documentation.

Solution Implementation

  1. Deploy AWS WAF and AWS Managed Rules by following the AWS Security Automations guide, using AWS CloudFormation.
  2. In the Specify stack details screen, provide the parameter values, and then select Next.
  3. In the Configure stack options screen, select Next, review all the parameter values, and then select Create Stack.
  4. After the CloudFormation stack has completed successfully, navigate to https://console.aws.amazon.com/wafv2/homev2/web-acls to verify that the Web ACLs are deployed.
  5. After verification, associate the AWS WAF Web ACL with the Application Load Balancer (ALB). This effectively routes all traffic through WAF before it is processed by SAP Fiori.

Solution Testing

In this section, we will try a few sample attack scenarios and demonstrate AWS WAF in action protecting SAP Fiori. The purpose of this manual testing approach is to assert that WAF is active for the associated AWS resource. The approach shown below is not a recommended penetration testing approach; it is used only as a quick and handy way, within the limited scope of this blog, to try out a few scenarios without the need to set up elaborate testing tools.

Before you begin

  1. We will use several tools to help with this testing such as curl, Apache ab tool and Chrome browser
  2. Go to the AWS console, navigate to AWS WAF, then Web ACLs, and select AWSWAFSecurityAutomations. Navigate to the Custom response bodies tab and select "Create custom response body".
  3. Enter an appropriate custom response in the provided textbox. Then save it as WAF_Response.
  4. Select the AWSWAFSecurityAutomationsIPReputationListsRule from the Rules tab of the same Web ACL. On this screen, note down one of the IPv4 addresses for testing later, and select Edit.
  5. In the edit screen, change the configuration of this rule to use the X-Forwarded-For header for identifying the request's originating address.
  6. In the Action section on the same page, select the custom response WAF_Response. Then save the rule.
  7. In the same way (by repeating steps 4-6 above), change the Action section of the following rules as well:
    • AWSWAFSecurityAutomationsHttpFloodRateBasedRule
    • AWSWAFSecurityAutomationsSqlInjectionRule
    • AWSWAFSecurityAutomationsXssRule
  8. After saving the rules, you can verify that the changes are reflected.

Execute Testing

Scenario 1: IP Allow / Deny List

A number of organizations maintain reputation lists of IP addresses operated by known attackers, such as spammers, malware distributors, and botnets. This rule leverages these reputation lists to help you block requests from malicious IP addresses. To test this scenario, you use a curl command to make a request to the SAP Fiori server via the AWS ALB. The URL used below is for the SAP Fiori login page, you can change this to any URL of the SAP Fiori application. As part of this request, you specify the X-Forwarded-For header value with one of the IP addresses noted in AWSWAFSecurityAutomationsIPReputationListsRule.

Example request to execute from command line:

curl -v https://<ALB-Host-of-SAP-Fiori>/sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html -H 'X-Forwarded-For: 124.157.0.1'

Example response:

IP Deny Response

You should be able to note the custom response coming from WAF that indicates that the request is blocked.

Scenario 2: SQL Injection

SQL injection is used by attackers to obtain sensitive information, such as user IDs and passwords, which allows them to gain access to a website. AWSWAFSecurityAutomationsSqlInjectionRule will block web requests that contain potentially malicious SQL code. To test this scenario, open a Chrome browser window and point it to an SAP Fiori page with a form submission, for example the SAP Fiori login page. Then open the Chrome Developer Tools (Network tab).

Capturing HTTP Trace

Once you submit the form, then in the Developer tool, you can right-click to copy the request and response in curl format.

Copying to CURL command

Example request to execute from command line:

curl -v https://<ALB-Host-of-SAP-Fiori>/sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html \
-H 'authority: <ALB-Host-of-SAP-Fiori>' \
-H 'cache-control: max-age=0' \
-H 'sec-ch-ua: " Not A;Brand";v="99", "Chromium";v="90", "Google Chrome";v="90"' \
-H 'sec-ch-ua-mobile: ?0' \
-H 'origin: https://<ALB-Host-of-SAP-Fiori>' \
-H 'upgrade-insecure-requests: 1' \
-H 'dnt: 1' \
-H 'content-type: application/x-www-form-urlencoded' \
-H 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36' \
-H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9' \
-H 'sec-fetch-site: same-origin' \
-H 'sec-fetch-mode: navigate' \
-H 'sec-fetch-user: ?1' \
-H 'sec-fetch-dest: document' \
-H 'referer: https://<ALB-Host-of-SAP-Fiori>/sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html' \
-H 'accept-language: en-US,en;q=0.9,bn;q=0.8,hi;q=0.7' \
-H 'cookie: sap-usercontext=sap-client=100; sap-login-<………>' \
--data-raw 'sap-system-login-oninputprocessing=onLogin&sap-urlscheme=&sap-system-login=onLogin&sap-system-login-basic_auth=&sap-client=100&sap-accessibility=&sap-login-XSRF=<………>&sap-system-login-cookie_disabled=&sap-hash=&sap-user=SELECT * FROM USERS --&sap-password=<………>r&sap-language=EN'

This request was blocked by the AWSWAFSecurityAutomationsSqlInjectionRule managed rule in the WAF Web ACL. The custom WAF_Response, as in the previous scenario, was observed.

Scenario 3: XSS Attack

Cross-Site Scripting (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise trusted websites. The malicious script can access any cookies, session tokens, or other sensitive information retained by the browser and used with that website. AWSWAFSecurityAutomationsXssRule will inspect commonly explored elements of incoming requests to identify and block XSS attacks. To test this scenario, we will use a parameter in the web request filled with a JavaScript snippet, as you can see below.

Example request to execute from command line:

curl -v 'https://<ALB-Host-of-SAP-Fiori>/sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html?query=<script>alert("Bad script")</script>'

This request was blocked by the AWSWAFSecurityAutomationsXssRule managed rule in the WAF Web ACL. The custom WAF_Response, as in the previous scenario, was observed.

Scenario 4: HTTP Flooding

HTTP flooding is another attack scenario which can cause a significant slowdown of a website due to the large number of requests being sent in a very short timeframe. In the worst case, this attack can cause a website to stop serving requests from legitimate end users. AWSWAFSecurityAutomationsHttpFloodRateBasedRule is a rate-based rule that triggers when web requests from a client exceed a configurable threshold. To test this scenario, you will use the Apache HTTP server benchmarking tool (ab command), which can generate web requests in a short period of time. Using the ab command, you send 100 valid requests from the same client to the server, and then repeat another set of 100 requests every 3 minutes. In the first few batches, you will see HTTP responses with a status code of 200 (which indicates allowed requests). However, in the subsequent requests, you will notice HTTP responses outside of the 2xx class of status codes, such as 403, which represents "Forbidden", a blocked request.

This is because the AWSWAFSecurityAutomationsHttpFloodRateBasedRule managed rule is configured with a rate limit of 100 requests, which is evaluated over a 5-minute window.
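
As a sketch, a single batch of requests could be generated as follows; the host is a placeholder and the concurrency value is only an example:

# Send 100 requests with 10 concurrent connections against the Fiori launchpad
ab -n 100 -c 10 https://<ALB-Host-of-SAP-Fiori>/sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html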

Below are the ab command line outputs from both allowed and blocked request batches

Allowed requests

Blocked requests, with Non-2xx response

Allowed Requests in AB tool

Blocked Requests in AB tool

Sampled Requests

All of the blocked and allowed observations are captured in the Sampled requests section of the AWS WAF console under Web ACLs. Here you can view the sampled web requests for further monitoring and to dive deeper into the observed traffic and rules.

Web ACLs Sampled Requests

Estimating Cost for WAF

To help with the cost estimation of WAF deployment for SAP Fiori, you can consider the below scenario.

  • For a typical SAP Fiori application, a user will generate of about 50-150 web requests in 1 transaction depending on the complexity of the Fiori app
  • We shall deploy AWS Managed Rules (8 Web ACLs with 1 rule each), and no other Managed Rule Group
  • Assuming 100 users creating about 20 documents per day across 20 working days per month, we will have a maximum of 6,000,000 web requests per month (= 100 users * 20 documents * 150 requests * 20 working days). This number can be made more accurate with information from your SAP system, such as SAP EarlyWatch Alerts and audit reports.
  • For Singapore Region as per the AWS Calculator, the estimated cost will be
    • 8 Web ACLs per month x 5.00 USD = 40.00 USD (WAF Web ACL cost)
    • 0 Rule Groups per Web ACL x 0 Rules per Rule Group = 0.00 total rules in rule groups
    • 1.00 billable rule per month x 1.00 USD = 1.00 USD (WAF rules cost)
    • 6,000,000 requests per month x 0.0000006 USD per request = 3.60 USD (WAF requests cost)
    • 40.00 USD + 1.00 USD + 3.60 USD = 44.60 USD
    • AWS WAF cost (monthly): 44.60 USD

When you are using a subset or all of the functionalities of the WAF Security Automations solution, you also need to consider the cost of other services, such as Amazon Kinesis Data Firehose, Amazon S3, AWS Lambda, and Amazon API Gateway. You can refer here for more detailed pricing information.

Conclusion

We have discussed detailed steps on how to implement AWS WAF for SAP Fiori. We recommend the use of AWS WAF to minimize the security risks related to SAP Fiori applications. AWS WAF can prevent excessive use of SAP Fiori resources caused by cyber attacks, thus improving system availability and performance. You can leverage AWS Managed Rules for AWS WAF, so you do not have to write your own rule set; this speeds up your security mitigation actions as it reduces the manual effort of writing and maintaining rules. If your security needs go beyond AWS Managed Rules for AWS WAF, you can subscribe to alternative solutions provided by AWS partners in AWS Marketplace.

Besides protecting SAP Fiori, AWS WAF can be extended to protect other SAP services, such as SAP Process Orchestration, SAP Enterprise Portal, SAP API Management, SAP Business Technology Platform.

You can find out more about SAP on AWS and AWS WAF from the AWS product documentation.


Integrating SAP Systems with AWS Services using SAP Business Technology Platform

$
0
0

Feed: AWS for SAP.
Author: Arne Knoeller.

While Amazon Web Services (AWS) customers such as Bizzy, Invista, Zalando, and Engie have implemented data and analytics solutions on AWS in support of their SAP workloads, many more are working with AWS to see how they can gain further insights by exploring trends in data. The large amount of data generated by business transactions processed in SAP, when properly harnessed by data and analytics solutions, can enable innovative decision making in many areas, such as customer engagement, cost management, and product roadmaps. One of the first steps in this journey is choosing the right tools to integrate data and analytics solutions with your SAP workloads.

In this blog I’m going to show how to integrate SAP systems with AWS services, using the SAP Integration Suite, available on the SAP Business Technology Platform (BTP). To cover the network and connectivity aspect to various SAP solutions on AWS, please have a look at my blog “How to connect SAP solutions running on AWS with AWS accounts and services“.

We frequently get asked by our customers how to integrate SAP systems with AWS services. SAP BTP is a common platform for SAP customers to build integrations and extension scenarios, and it also fits the need to integrate with AWS services. Popular use cases include building feeds of SAP data into machine learning or analytics services in AWS, or enabling analytics for large volumes of data by using Amazon Simple Storage Service (Amazon S3), the object storage service of choice for data lakes, for performant access to structured and unstructured data.

For extracting data from SAP systems, it is important to keep the application context. While data extraction on database level would lose the application context, unless additional tools are used, extractors using OData, IDocs etc. maintain data relationships and integrate on the application layer. There are multiple solutions available to integrate and extract SAP data. Using native AWS tools like AWS Lambda and AWS Glue are explained in the blog Building data lakes with SAP on AWS.

In this blog, I want to focus on SAP Integration Suite, running on the SAP Business Technology Platform (BTP), to showcase an approach using SAP tools and services to extract data to AWS, without writing code.

SAP provides the Amazon Web Services Adapter for SAP Integration Suite to connect to AWS services, without writing or maintaining code. The AWS Adapter enables data exchange between the SAP Integration Suite and AWS services, where the AWS services can act as sender or receiver. The following AWS services are supported:

Sender Adapter:

Receiver Adapter:

Using SAP BTP and the Integration Suite offers various benefits. Most SAP customers are already using BTP services and have created extensions on top of SAP BTP. The integration is at the application level and multiple communication channels like HTTP/HTTPS, IDoc, OData, etc. are supported. For a full list please refer to SAP documentation – communication channels. In addition to the AWS service integration, integration flow functionality like message conversion, IDoc splitter, encryption, etc. are provided. With that, you can extend existing integration scenarios and applications with AWS services.

Prerequisites

  1. SAP Support User (S-User)
  2. SAP BTP Account – You can create a trial account to test the walkthrough described in this blog post
  3. Integration Suite subscription – or see “setup BTP account” below
  4. Deployment of SAP S/4HANA. The easiest way to deploy this is using AWS Launch Wizard for SAP

SAP Integration Suite architecture with AWS adapter

In this example walk through, I’m going to extract an IDoc from an SAP S/4HANA System, convert it to JSON format and store it in Amazon S3.

1. Setup BTP account

Subscribe and enable SAP Integration Suite in your BTP account

SAP BTP Cockpit

Step-by-step documentation on how to set up the Integration Suite is provided in the tutorial "Set Up SAP Integration Suite Trial". Trial accounts are available in the AWS regions eu-central-1 and us-east-1, and you can select the region during the account setup.

2. Download AWS Adapter

You can download the AWS Adapter from the SAP Software Center by navigating to

SUPPORT PACKAGES & PATCHES –> By Alphabetical Index (A-Z) –> C –> SAP CP IS ADAPTER BASE PACK –> SAP CPIS AWS ADAPTER.

Note: SAP Software Download Center requires your SAP S-User ID.

Download and extract the file with the latest version. The zip file also contains the installation guide, which describes how to implement the adapter in the SAP Integration Suite.

3. Install AWS Adapter

Open SAP Integration Suite and open “Design, Develop, and Operate Integration Scenarios”. Click on “Design” on the Menu and create a new package.

SAP Integration Suite - Design New Package

Navigate to the “Artifacts” tab and click on Add > Integration Adapter

SAP Integration Suite - Add Integration Adapter

Select the integration adapter file (AmazonWebServices.esa) which is part of the files you’ve downloaded in the previous step.

Deploy the adapter, by clicking on the actions button and select “Deploy”.

4. Configure access to AWS services

To access AWS services, you need to store the credentials of an AWS user with programmatic access to your account in the SAP Integration Suite. It's recommended to create a new user in AWS Identity and Access Management (IAM) for access through the SAP Integration Suite and to grant it least-privilege access to the required AWS resources. For this example, it is sufficient to grant access to the Amazon S3 bucket which is used to store the extracted SAP data.
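
A rough sketch of such a least-privilege setup with the AWS CLI is shown below. The user name, policy name, bucket, and account ID are placeholders, and depending on your scenario the adapter may need additional permissions such as s3:ListBucket:

# Policy that only allows writing objects into the target bucket used by the integration flow
aws iam create-policy --policy-name cpi-s3-writer --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:PutObject"],
    "Resource": "arn:aws:s3:::my-sap-integration-bucket/*"
  }]
}'

# Dedicated IAM user with programmatic access only, used by the SAP Integration Suite
aws iam create-user --user-name cpi-s3-writer
aws iam attach-user-policy --user-name cpi-s3-writer \
  --policy-arn arn:aws:iam::111122223333:policy/cpi-s3-writer
aws iam create-access-key --user-name cpi-s3-writer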

You need to store the IAM user's access key and secret access key as secure parameters in the Integration Suite. Select "Monitor" in the menu, navigate to "Manage Security" –> "Security Material", and create a new "Secure Parameter".

SAP Integration Suite - Secure Parameter

Please note that this first secure parameter is not the password or the secret access key of the IAM user, but the access key ID itself. You need to create an additional secure parameter for the secret access key, for example with the name "AWS_IAM_Secret_Access_Key".

5. Create Integration Flow

In the “Design” section of the SAP Integration Suite, select the package you’ve created before and navigate to the “Artifacts” tab. Create a new integration flow.

SAP Integration Suite - Add Integration Flow

Once you’ve clicked on the integration flow you’ve just created, you can enter the graphical designer, where you can define your sender, receiver and the integration process.

Define the sender, by providing a name – “S4HANA” as in my example. Create a connection from the sender to “Start” and select IDoc as Message Flow in the pop-up window. In the connection tab of the IDoc communication insert “/FLIGHTBOOKING_CREATEFROMDAT01” as address and leave authorization and user role with the default values.

SAP Integration Suite - Design Integration Flow - Sender

Just for the demo purposes, I’ve implemented a message conversion from IDoc XML to JSON in the flow.

The receiver in this example is Amazon S3. From the “End Message” to the “Receiver” you can select the AWS Adapter installed in step three. Under the connections tab of the receiver, you can define the Amazon S3 bucket in your AWS account and the access and secret access key, configured in step four.

SAP Integration Suite - Design Integration Flow - Receiver

In the processing tab you need to define the "content type" depending on the MIME type ("application/xml" or "text/plain"). For this example, use "application/xml". The complete flow should look like this:

SAP Integration Suite - Design Integration Flow - Integration Process

Once the flow is defined, you can save and deploy the configuration. Navigate in the monitoring menu to “Manage Integration Content” and wait until your flow has the status “Started”. Please copy the endpoint URL of the integration flow. You’ll need this later for the RFC configuration.

6. Configure S/4HANA System

Define a logical system in transaction BD54 for the AWS target resources. For example, “AWSS3” for the Amazon S3 bucket. You also require a logical system for the sender, which is the client 100 of the S/4HANA System in this example.

SAP Transaction BD54

In transaction SALE execute “Maintain Distribution Model and Distribute Views” and create a new model view:

SAP Transaction SALE

In the next step, create a new BAPI entry for the created model and define the sender and receiver according to the logical systems defined in the previous steps. For this demo, I've selected the FlightBooking object, which is available by default in the S/4HANA system:

SAP Transaction SALE

To enable secure communication, the certificates from the Integration Suite are required in the S/4HANA system. Download the certificates from the Integration Suite by clicking on the lock symbol next to the URL in the browser. Download all three available certificates and upload all of them in transaction STRUST, under SSL client SSL Client (Anonymous), by adding them to the certificate list. After that, you can create a new RFC connection of type HTTP in transaction SM59. Use the endpoint URL of your Integration Suite integration flow as the host and port 443. In the tab "Logon & Security" select Basic Authentication and enter your SAP BTP user. Scroll down and change the status of the secure protocol to SSL Certificate: Anonym SSL Client (Anonymous).

Note: please consider client certificate authentication for your productive workloads.

Now, you need to create a port in transaction WE21. Create a new port for IDoc processing and select the RFC destination created earlier. Define “application/x-sap-idoc” as content type and enable SOAP protocol.

SAP Transaction WE21

In transaction WE20 define a partner profile of the type “LS” (logical system). Select a user under post processing and create the following new outbound parameter:

Message Type: FLIGHTBOOKING_CREATEFROMDAT
Receiver port: Port created in previous step
Basic type: FLIGHTBOOKING_CREATEFROMDAT01
Select “Pass IDoc Immediately”

SAP Transaction WE20

7. Test the integration workflow

Go to transaction WE19 and execute the test using the message type "FLIGHTBOOKING_CREATEFROMDAT".

SAP Transaction WE19

Double-click on EDIDC and define the port and partner number for the receiver, using the parameters created before.

SAP Transaction WE19

Double click on E1BPSBONEW and put in some test data.

Finally, start the outbound processing by clicking on "Standard Outbound Processing". You can monitor the IDoc processing within SAP or in the monitoring section of the Integration Suite.

SAP Integration Suite - Monitor

For a simple validation of the data in Amazon S3, you can list your configured bucket, using AWS Command Line Interface (AWS CLI) command: aws s3 ls s3://<your-S3-bucket> --summarize | sort

This is an example of an IDoc, which was successfully transformed to JSON format:

{
  "FLIGHTBOOKING_CREATEFROMDAT01": {
    "IDOC": {
      "@BEGIN": "1",
      "EDI_DC40": {
        "@SEGMENT": "1",
        "TABNAM": "EDI_DC40",
        "MANDT": "100",
        "DOCNUM": "0000000000200021",
        "DOCREL": "755",
        "STATUS": "30",
        "DIRECT": "1",
        "OUTMOD": "2",
        "IDOCTYP": "FLIGHTBOOKING_CREATEFROMDAT01",
        "MESTYP": "FLIGHTBOOKING_CREATEFROMDAT",
        "STDMES": "FLIGHT",
        "SNDPOR": "SAPS2B",
        "SNDPRT": "LS",
        "SNDPRN": "S2BCLNT100",
        "RCVPOR": "SAPBTPIS",
        "RCVPRT": "LS",
        "RCVPRN": "AWSS3",
        "CREDAT": "20210621",
        "CRETIM": "150054",
        "ARCKEY": "urn:sap.com:msgid=02D45F160CA71EEBB4D316D6C4AA2C47"
      },
      "E1SBO_CRE": {
        "@SEGMENT": "1",
        "E1BPSBONEW": {
          "@SEGMENT": "1",
          "AIRLINEID": "LH",
          "CONNECTID": "345",
          "FLIGHTDATE": "15.07.21",
          "CUSTOMERID": "R44324xxx",
          "CLASS": "1",
          "COUNTER": "34",
          "AGENCYNUM": "3562",
          "PASSNAME": "ARNE KNOELLER",
          "PASSFORM": " Mr",
          "PASSBIRTH": "06.07.1992"
        },
        "E1BPPAREX": {
          "@SEGMENT": "1"
        }
      }
    }
  }
}

SAP Integration Suite with the AWS Adapter provides an easy and efficient integration to AWS services. Customers who are already using SAP BTP and the Integration Suite for other requirements can extend their existing platform to cover AWS service integration for data and analytics use cases. Especially for RISE with SAP and S/4HANA Cloud customers, this is a good way to integrate the SAP solution with AWS services.

SAP Integration Suite keeps the application context when accessing IDocs or OData services. Hence the context is also available in the data stored on Amazon S3, for example, and can be processed further. How to integrate SAP data with AWS services is the customer's choice and depends on the use case. With the native integration, SAP Integration Suite, and third-party tools, we provide flexibility and choice for our customers.
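As an example of such further processing, here is a minimal Python (boto3) sketch that reads one of the JSON files written by the integration flow from Amazon S3 and extracts the booking segment; the bucket name and object key are assumptions for this example.

import json
import boto3

s3 = boto3.client("s3")

BUCKET = "my-sap-extract-bucket"                     # assumption
KEY = "flightbooking/idoc-0000000000200021.json"     # assumption

# Read the JSON document produced by the integration flow
obj = s3.get_object(Bucket=BUCKET, Key=KEY)
idoc = json.loads(obj["Body"].read())

# Navigate the IDoc structure shown in the example above
root = idoc["FLIGHTBOOKING_CREATEFROMDAT01"]["IDOC"]
control = root["EDI_DC40"]
booking = root["E1SBO_CRE"]["E1BPSBONEW"]

print("IDoc number:", control["DOCNUM"])
print("Passenger:", booking["PASSNAME"], "on flight", booking["CONNECTID"])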

To learn why AWS is the platform of choice and innovation for 5000+ SAP customers, visit aws.amazon.com/sap.

SAP HANA sizing considerations for secondary instance with reduced memory footprint

Feed: AWS for SAP.
Author: Pradeep Puliyampatta.

Introduction

SAP systems that are critical to business continuity require a well designed and tested High Availability and Disaster Recovery (HA/DR) framework to maximise application availability during planned and unplanned outages. SAP HANA is an in-memory database that supports mission critical workloads and is a key component of an SAP system that needs to be safeguarded from failures. A primary or active HANA database can be replicated to a secondary instance using SAP HANA System Replication (HSR). SAP HSR continuously replicates data to ensure that in the event of a failure on the primary instance, changes persist in an alternate instance. In Amazon Web Services (AWS), the secondary HANA database instance can either exist within the same region (different Availability Zone) or in a separate region.

To achieve a near zero Recovery Time Objective (RTO), it is necessary for the primary and secondary HANA database instances to have a similar memory capacity. However, if a higher RTO is acceptable, it is possible to operate the secondary HANA database instance with a reduced memory footprint. This can result in considerable cost savings for compute, either by choosing a smaller instance size or leveraging excess memory on the secondary to operate other SAP HANA database workloads.

The reduced memory requirement on the secondary HANA node is achieved by configuring the secondary not to load HANA column data into main memory. A restart is required before promoting it to primary, at which time it is assumed that the full memory requirement is available. The actual memory demand on the secondary host is highly dependent on the HANA System Replication (HSR) configuration and the production data change rate. Even with HANA column table loading disabled, the memory demand on the secondary host can go beyond 60% of the actual production HANA memory usage. If the secondary instance is not sized correctly, this can lead to out-of-memory crashes and reduced resilience.

This blog aims to provide detailed guidance on how to size the secondary HANA database instance when operating with a reduced memory footprint, with examples for multiple deployment scenarios.

Architecture

There are two common architectures for deploying a secondary HANA database with a lean, or reduced memory footprint in AWS. In this blog we differentiate between them with the terms ‘smaller secondary’ and ‘shared secondary’. Smaller secondary is where the infrastructure is sized smaller than the primary, and then re-sized on takeover. This is sometimes referred to as Pilot Light DR as in the blog Rapidly recover mission-critical systems in a disaster. Shared secondary is where the unused memory is utilised by a non-production or sacrificial instance. SLES documentation refers to this as ‘Cost Optimized Scenario‘.

In both these scenarios, preload of column tables is disabled on the secondary HANA database. The specific configuration to implement this change is to set the HANA database parameter preload_column_tables to false. This configuration needs to be changed to ‘true’ before promoting this instance as primary. Given the manual intervention required and the time taken following a takeover to load the column tables before the HANA database is open for SQL connections, these scenarios are more relevant for a Disaster Recovery (DR) than a High Availability (HA) solution.

Smaller secondary

The following diagram illustrates the deployment of a smaller secondary in a separate Amazon Web Services (AWS) Availability Zone within the same region. However, such a deployment is also possible across multiple AWS Regions. When replicating between AWS Regions, the recommended HSR replication mode is async due to the increased latency.

HSR Smaller secondary

Shared secondary

A common use case for a shared secondary scenario is to operate an active quality (QAS) instance along with the secondary HANA database instance on the same host, also called MCOS (Multiple Components One System). This setup requires additional storage to operate the additional instance(s). During a takeover, the instance with lower priority (QAS) can be shutdown to make the underlying host resources available for production workloads.

HSR Shared secondary

For both scenarios, the secondary HANA database instance configuration is set to preload_column_tables = false. The default value of this parameter is 'true' and has to be explicitly changed on the secondary instance to operate with reduced memory. This change is made in the HANA database configuration file (global.ini) located in /hana/shared/<SID>/global/hdb/custom/config.
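For reference, a minimal global.ini excerpt showing this change on the secondary is below; the section placement assumes the parameter sits in the [system_replication] section, as in current SAP HANA 2.0 releases, so check the SAP documentation for your release.

[system_replication]
preload_column_tables = false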

The two operation modes supported by HANA System Replication (HSR) to set up a secondary system are logreplay and delta_datashipping. A high-level comparison between the two operation modes is presented below.

1. Operational behaviour – logreplay: logs are shipped and continuously applied on the secondary system. delta_datashipping: delta data is shipped from primary to secondary periodically (default 10 min); logs are only shipped, not applied until a takeover.
2. Recovery time – logreplay: shorter takeover window, as logs are applied continuously until the failure occurs. delta_datashipping: relatively higher takeover time.
3. Data transfer needs – logreplay: only logs are shipped, hence comparatively lower data transfer needs. delta_datashipping: transfer of both delta data and logs adds up to higher data transfers.
4. Network bandwidth – logreplay: due to reduced data transfers, needs relatively lower bandwidth between sites. delta_datashipping: requires higher bandwidth between sites.
5. Memory footprint on DR instance – logreplay: significantly higher memory required, as the column store needs to be loaded for delta merge speed. delta_datashipping: memory footprint is much smaller, as the column store is not loaded into main memory when column preload is disabled.
6. Multi-tier replication – logreplay: supported. delta_datashipping: supported.
7. Multi-target replication – logreplay: supported. delta_datashipping: not supported.
8. Secondary time travel – logreplay: supported. delta_datashipping: not supported.

It is worth noting that in a 3-Tier replication setup, a mix of operation modes is not supported. For example if the primary and secondary HANA database instances are configured to use logreplay for HSR, then delta_datashipping operation mode cannot be used between the secondary and tertiary systems and vice-versa.

SAP Guidance on disabling column preload

The actual memory usage on the secondary host is dependent on the HANA replication operation mode configuration and the preload setting for column tables. Hence it is important to understand the sizing requirements for the different operation modes in a HANA System Replication setup. The chart below is an excerpt from SAP Note 1999880 – FAQ: SAP HANA System Replication (SAP Support Portal login required).

SAP Note 1999880 - FAQ: SAP HANA System Replication

As per the above guidance, even with preload = off, the minimum memory requirement for the 'logreplay' operation mode also factors in the column store memory size of tables with modifications. As a rule of thumb, consider the size of column store tables with data modified in the 30 days prior to the date of evaluation.

SAP Note 1969700 – SQL Statement Collection for SAP HANA provides a collection of SQL scripts that can be used for analysing various administration and performance aspects of SAP HANA database. With minor changes to one of the scripts from this collection, titled ‘HANA_Tables_ColumnStore_Columns_LastTouchTime’, it is possible to provide a rough estimate for the minimum memory required to accommodate these tables. For exact steps of executing this script, refer to the ‘Additional information’ section at the end of this blog.

In the following sections, the examples provide details on how to evaluate sizing when deployed in a Shared secondary scenario. Although not shown, a similar sizing exercise is applicable to the Smaller secondary scenario, without the additional considerations of running a second HANA database instance (for example QAS) on the same host.

Option 1 – HSR with Operation Mode: logreplay

In the following example, the primary and secondary production HANA database instances are deployed on two 9 TiB High Memory instances. The secondary host also runs a Quality Assurance (QAS) HANA database instance. To be able to deploy these instances, the global_allocation_limit for all three HANA database instances (PRD-Primary, PRD-Secondary, QAS) needs to be evaluated. The following section illustrates how to derive these values.

HSR logreplay

In the subsequent sections, the Global Allocation Limit is referred to as 'GAL'.

  • GAL for Primary instance

On the primary instance, global_allocation_limit is not set in this example (default value = 0, unit is MB). The default value does not limit the amount of physical memory that can be used by the database. However, as per SAP Note 1681092 – Multiple SAP HANA systems (SIDs) on the same underlying server(s), the maximum allocation to the HANA database by the operating system follows the rule of 90% of the first 64 GB of available physical memory on the host plus 97% of the remaining physical memory.

  • GAL for Secondary instance

SAP's guidance for the minimum memory requirement of a secondary HANA database instance with preload=off and the logreplay operation mode is 'Row store size + Column store memory size of tables with modifications + 50 GB'. As an example, the SQL script 'HANA_Tables_ColumnStore_Columns_LastTouchTime' was executed on the primary instance, and the column store size of tables with modifications in the previous 30 days was identified as 3077 GB. For the detailed steps to execute this script, refer to the 'Additional information' section of this blog.

SQL Query output

Similarly, using SQL script ‘HANA_Memory_Overview’ from SAP Note 1969700 – SQL Statement Collection for SAP HANA, the memory requirements for ‘Row store’ can be determined. An example value of 153 GB is considered for the following evaluation.

Memory required (min) = Row store size 
                        + Column store memory size of tables with modifications 
                        + 50 GB
Ex: (153 + 3077 + 50) GB = 3280 GB
GAL for DR = 3280 GB^
  ^Convert to MB before changing GAL in the system 

Note: This value provides the minimum sizing guidance only. It is recommended to allocate additional headroom of 10-20% of this value to accommodate any immediate spike in column store table modifications.

Similarly, global_allocation_limit (GAL) has to be set for the second HANA database instance (QAS) running on the DR site. According to  the SAP Note 1681092 – Multiple SAP HANA systems (SIDs) on the same underlying server(s), the sum total of all HANA database allocation limits on a particular host should not exceed 90% of first 64 GB of available physical memory on the host plus 97% of remaining physical memory.

Σ [(GAL_DR) + (GAL_QAS)] ≤ {(0.9 * 64) + [0.97 * (Phys Mem GB - 64)]}

Applying this guidance to our example of a 9 TiB (9896 GB) bare metal instance is as follows.

Σ [(GAL_DR) + (GAL_QAS)]  ≤  {[0.9 * 64] + [0.97 * (9896 - 64)]} GB
                          ≤  9595 GB

The total available memory for all HANA database instances is 9595 GB, while the total available physical memory on the bare metal instance is 9TiB (9896 GB). As the HANA DR instance memory sizing is derived at 3280 GB, the remaining size can be the maximum allocatable memory for QAS instance.

Σ [ (GAL_DR) + (GAL_QAS)]  ≤ 9595 GB
Σ [ 3280 GB  + (GAL_QAS)]  ≤ 9595 GB
GAL for QAS: The maximum allowed value for GAL_QAS = 6314 GB^
    ^Convert to MB before changing GAL in the system

Option 2 – HSR with Operation Mode: delta_datashipping

This section provides an overview of HANA System Replication deployed with ‘delta_datashipping’ operation mode and the procedure to evaluate the memory sizing for each individual instance.

HSR Delta_datashipping

  • GAL for Secondary instance

SAP's guidance for the minimum memory requirement of a secondary HANA database instance with preload=off and the delta_datashipping operation mode is as follows:

Max [64 GB, (Row store size + 20 GB)]

Applying this guidance to our example scenario to derive GAL for DR is as follows

Max [64 GB, (153 + 20 GB)] 
GAL for DR: The minimum required value for GAL_DR = 173 GB^
    ^Convert to MB before changing GAL in the system 

The total available memory for all HANA database instances is 9595 GB, while the total available physical memory on the bare metal instance is 9 TiB (9896 GB). In the previous step, the secondary HANA database instance memory sizing was derived at 173 GB, so the remaining size on the host can be the maximum allocatable memory for the QAS instance.

Σ [ (GAL_DR) + (GAL_QAS)]   ≤  9595 GB
Σ [ 173 GB   + (GAL_QAS)]   ≤  9595 GB
GAL for QAS: The maximum allowed value for GAL_QAS = 9422 GB^
   ^Convert to MB before changing GAL in the system
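To make it easier to repeat this arithmetic with your own values, here is a minimal Python sketch that reproduces the sizing rules used in both options; the helper names and the headroom note are illustrative, while the formulas are the ones from the SAP guidance quoted above.

def host_allocation_limit_gb(physical_memory_gb):
    # 90% of the first 64 GB plus 97% of the remaining physical memory
    return 0.9 * 64 + 0.97 * (physical_memory_gb - 64)

def secondary_gal_gb(row_store_gb, modified_column_store_gb, operation_mode):
    # Minimum memory for the secondary with preload_column_tables = false
    if operation_mode == "logreplay":
        return row_store_gb + modified_column_store_gb + 50
    if operation_mode == "delta_datashipping":
        return max(64, row_store_gb + 20)
    raise ValueError("unknown operation mode")

# Example values from this blog: 9 TiB host (9896 GB), 153 GB row store,
# 3077 GB of column store tables modified in the last 30 days
host_limit = host_allocation_limit_gb(9896)
gal_dr = secondary_gal_gb(153, 3077, "logreplay")

print(f"Host limit: {host_limit:.0f} GB")                    # ~9595 GB
print(f"GAL for DR (logreplay): {gal_dr} GB")                # 3280 GB, add 10-20% headroom
print(f"Max GAL for QAS: {int(host_limit - gal_dr)} GB")     # ~6314 GB
print(secondary_gal_gb(153, 3077, "delta_datashipping"))     # 173 GB for Option 2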

Conclusion

This blog highlights that the cost savings of deploying a secondary HANA database instance with a reduced memory footprint may not be realised with the logreplay operation mode. This is because of the higher memory demand of the column store being loaded into the main memory of the secondary HANA database, which occurs irrespective of the column table preload setting. This behaviour is by design for continuous log replay. In addition, there is a requirement to constantly monitor column store data growth on the primary and adapt memory allocations on the secondary to avoid out-of-memory (OOM) issues on the secondary. The delta_datashipping operation mode facilitates better cost savings in comparison to logreplay when disabling the preload of column tables on the secondary HANA database instance. However, this choice comes with limitations, which may include a higher recovery time and an increased demand for network bandwidth between the replication sites.

Opting for delta_datashipping is usually a trade-off between cost considerations and the other limitations of this operation mode as explained in this blog. With relaxed RTO requirements and higher network bandwidth between the replication sites, the use of 'delta_datashipping' can be promoted. Also, with larger database instances, the potential for cost savings is higher. This is because the memory footprint on the secondary has a minimum requirement of row store memory and buffer requirements even for smaller database instances. The association between the various factors that influence the choice of a particular operation mode and the potential for cost benefits with the delta_datashipping option is represented in the following diagram. These factors are independent of each other and can individually influence the ability to use 'delta_datashipping' and thereby the cost benefits.

Cost benefits with delta_datashipping

It is also important to understand that the memory requirement calculation for the secondary and the subsequent setting of global_allocation_limit is an iterative process. As the production database size grows, the column store demand for delta merge also grows on the secondary instance. Hence, the memory allocations of the secondary and, optionally, other HANA database instances provisioned on the same hardware should be monitored periodically, and also after mass data loads, go-lives and SAP system specific life-cycle events.

Additional Information

Following are the steps to execute the SAP SQL script.

1. Download the file SQLStatements.zip which is an attachment to SAP Note 1969700 – SQL Statement Collection for SAP HANA

2. Unzip the file and identify the SQL script named  HANA_Tables_ColumnStore_Columns_LastTouchTime_2.00.x+.txt (for HANA 1.0, choose a different version of the same script).

3. Modify the script with the following two changes.

TOUCH_TYPE
  Identify the section with title ' /* Modification section */'
  Identify the parameter 'TOUCH_TYPE' and enter the value as 'MODIFY'
Ex: 'MODIFY' TOUCH_TYPE,          /* ACCESS, SELECT, MODIFY */
BEGIN_TIME
  Identify the section with title ' /* Modification section */'
  Identify the parameter 'BEGIN_TIME' and keyin the value as 'C-D30'
Ex: 'C-D30' BEGIN_TIME, 

4. Execute the script on the HANA production tenant DB using any of the SQL editors in SAP HANA Cockpit, SAP HANA Studio or DBACOCKPIT, or the hdbsql tool.

5. Identify the output under the column ‘MEM_GB’ for the column store size that has been modified in the previous 30 days from the date of script execution.

Extract data from SAP ERP and BW with Amazon AppFlow

Feed: AWS for SAP.
Author: Pavol Masarovic.

Introduction

At AWS, we commonly hear from customers that they are looking to not only move their SAP systems to the cloud, but transform the way they use data and analytics capabilities within the organisation. Furthermore, they want to combine the SAP data with non-SAP data in a single data and analytics solution on AWS. That’s why earlier this week, in response to customer feedback, we announced that you can now extract data from SAP ERP/BW systems for use with AWS services using the Amazon AppFlow SAP OData Connector. This launch makes it super easy to set up a data flow from SAP to Amazon S3 in just a few clicks.

Many SAP customers are already leveraging AWS-based data lakes to combine SAP and non-SAP data and take advantage of AWS's industry-leading storage and analytics capabilities. Some of them have already been doing this for several years. Examples include Bizzy, Invista, Zalando, Burberry, Visy, Delivery Hero and Engie. However, many customers have continued to request additional guidance on the best option to extract the data from their SAP systems.

In the Building data lakes with SAP on AWS blog, we outlined the various data extraction patterns available for extracting data from SAP, and we now offer these as hands-on labs with our SAP Beyond Infrastructure Immersion Days.

This launch expands those options, giving customers even more choices for storing and analyzing their SAP data on AWS. In this blog, I will show you how to run data flows from SAP using the new AppFlow SAP OData Connector.

Operational Data Provisioning–based extraction

The Operational Data Provisioning (ODP) framework enables data replication capabilities between SAP applications and data targets using a provider and subscriber model. ODP supports both full data extraction as well as change data capture using operational delta queues.

Solutions including SAP Data Services and SAP Data Intelligence can be integrated with ODP, using native remote function call (RFC) libraries. Non-SAP solutions including AWS Glue, AWS Lambda or Amazon AppFlow SAP OData Connector can use the OData layer for integration via HTTP or HTTPS.

What are the key benefits of using Amazon AppFlow?

Our customers are looking for quick ways to innovate and deliver value for their customers. To help their developers focus on activities that drive business value, many want to use prebuilt connectors rather than building and maintaining their own.

Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between Software-as-a-Service (SaaS) applications including Salesforce, SAP, Zendesk, Slack, and ServiceNow, and AWS services including Amazon S3 and Amazon Redshift in just a few clicks. With AppFlow, you can run data flows at enterprise scale at the frequency you choose – on a schedule, in response to a business event, or on demand. You can configure data transformation capabilities including filtering and validation to generate rich, ready-to-use data as part of the flow itself, without additional steps. AppFlow automatically encrypts data in motion, and allows users to restrict data from flowing over the public Internet for SaaS applications that are integrated with AWS PrivateLink, reducing exposure to security threats.

About the Amazon AppFlow SAP OData Connector

The Amazon AppFlow SAP OData Connector is integrated with Amazon S3, so data extraction can be easily configured via the simple interface and delivered to your target S3 bucket(s) in multiple file formats. Once in Amazon S3, the data can be further processed using native AWS services or third-party solutions.

The Amazon AppFlow SAP OData Connector supports AWS PrivateLink which adds an extra layer of security and privacy. When the data flows between the SAP application and your target Amazon S3 bucket with AWS PrivateLink, the traffic stays on the AWS network rather than using the public internet (see further reference of Private Amazon AppFlow flows).

This image shows how AWS PrivateLink is used to keep data secure between the source application and the target S3 bucket

With the straightforward user interface in the AWS Console, you can connect and configure your data flows from SAP. All you need to do is create a connection and configure the flow in Amazon AppFlow.

The required configuration inputs to create a new Amazon AppFlow SAP OData connection are as follows:

  • SAP Application URL
  • SAP Web Service Port
  • SAP Client
  • SAP Logon Language
  • SAP Application Catalog Service Path
  • Authentication method (Basic Auth or OAuth2)

You can find further details on how to create the SAP connection in Amazon AppFlow SAP OData Connector documentation.

The configuration steps to create a new Amazon AppFlow SAP OData flow are as follows:

  1. Configure Flow
  2. Discover SAP Services
  3. Select SAP Service Entity Sets
  4. Define Flow trigger (On Demand or On Schedule)
  5. Map Fields, Define Validations and Set Filters
  6. Run Flow

Within the flow configuration you can define which SAP Service Entity you want to export using OData. Additionally, you can map the source table fields to the destination, which enables you to filter the data based on your requirements.
In the last step, you simply activate the flow by specifying whether you want to run it on demand or on a schedule, including how often the flow will run and further details of the incremental transfer options.
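Once a flow has been created, you can also trigger and inspect on-demand runs programmatically. The following Python (boto3) sketch starts a hypothetical flow and lists its most recent execution records; the flow name is an assumption and must match a flow you have already configured.

import boto3

appflow = boto3.client("appflow")

FLOW_NAME = "sap-odata-to-s3"   # assumption: name of a flow created as described above

# Trigger an on-demand run of the flow
run = appflow.start_flow(flowName=FLOW_NAME)
print("Started execution:", run.get("executionId"))

# Check the status of the most recent executions
records = appflow.describe_flow_execution_records(flowName=FLOW_NAME, maxResults=5)
for execution in records["flowExecutions"]:
    print(execution["executionId"], execution["executionStatus"])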

This image shows the process of building your data flow with Amazon Appflow-- including connecting source/ destination, mapping source fields to destination, adding filters/ validation, and finally, activating or running the flow.

Supported file formats for writing data to Amazon S3 are as follows:

  • JSON
  • CSV
  • Parquet

You can also specify a data transfer preference to aggregate all records or split them into multiple files, adding a timestamp. You can also place them in different S3 folders.

Customers running SAP systems on premises can also use the Amazon AppFlow SAP OData Connector by using AWS VPN or AWS Direct Connect connections to configure AWS PrivateLink, as an alternative to exposing a public IP address for your SAP OData endpoint.

The benefits of using the SAP ODP approach include:

  • Because business logic for extractions is supported at application layer, the business context for the extracted data is fully retained.
  • All table relationships, customizations, and package configurations in the SAP application are also retained, resulting in less transformation effort.
  • Change data capture is supported using operation delta queue mechanisms. Full data load with micro batches is also supported using OData query parameters.
  • SAP Data Services and SAP Data Intelligence might have better performance in pulling the data from SAP because they have access to ODP integration using RFC layer. SAP hasn’t opened the native RFC integration capability to ODP for non-SAP applications, so AWS Glue and Lambda have to rely on HTTP-based access to OData. Conversely, this might be an advantage for certain customers who want to standardize on open integration technologies. For more information about ODP capabilities and limitations, see Operational Data Provisioning (ODP) FAQ.
  • Amazon AppFlow removes the requirement for a third-party application and thus reduces the total cost of ownership. With simple configuration and an integrated UI directly in the AWS Management Console, you can quickly connect your SAP system with AppFlow and extract data into Amazon S3. This comes without the need for custom development, apart from the SAP OData configuration for the extraction.

Amazon AppFlow Pricing

Amazon AppFlow offers a significant cost-savings advantage compared to building connectors in-house or using other application integration services. There are no upfront charges or fees to use AppFlow, and customers only pay for the number of flows they run and the volume of data processed.

You pay for every successful SAP flow run (a flow run is a call to the SAP system to transfer data to Amazon S3). Flow runs to check for new data will count towards your flow run costs, even if no new data is available in the SAP system for transfer.

Price per flow run: $0.001
Maximum number of flow runs per AWS account per month: 10 million
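As a quick worked example of the flow-run charge only (data processing, Amazon S3 and AWS KMS costs are separate), a flow scheduled to run every five minutes would cost roughly the following per month:

# Flow scheduled every 5 minutes over a 30-day month
runs_per_month = 30 * 24 * (60 // 5)          # 8,640 flow runs
price_per_run = 0.001                         # USD per flow run, per the table above
print(f"{runs_per_month} runs x ${price_per_run} = ${runs_per_month * price_per_run:.2f}")
# 8640 runs x $0.001 = $8.64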

There are additional charges for Amazon AppFlow data processing, Amazon S3 and AWS Key Management Service (if used).

You can get detailed pricing on Amazon AppFlow pricing page.

Summary

The Amazon AppFlow SAP OData Connector provides an easy and efficient way of extracting SAP data using OData directly to Amazon S3 for use with other AWS services. Customers running SAP workloads on AWS can start using this service within the AWS Management Console. Customers still running SAP systems on-premises can also feed SAP data directly to Amazon S3 and start creating powerful AWS Data Lakes to get more value from their SAP investments and enterprise data.

To get started, visit the Amazon AppFlow page. To learn why AWS is the platform of choice and innovation for more than 5000 active SAP customers, visit the SAP on AWS page.

Amazon M6i instances now available and SAP certified

Feed: AWS for SAP.
Author: Steven Jones.

This post was jointly authored by Steven Jones, General Manager of SAP/VMware at AWS, and Anurag Handa, General Manager of Cloud & Enterprise Solutions Group at Intel

Deliver better performance for mission-critical SAP workloads with Amazon EC2 M6i Instances

AWS and Intel recently announced the launch of our new Amazon EC2 M6i instances, which are certified for SAP NetWeaver workloads in production. These are the first generally available, SAP-certified cloud instances that are built on 3rd Gen Intel® Xeon® Scalable processors, as documented in SAP note 1656099.

These instances offer up to a 20% increase in memory bandwidth and a 10% increase in price/performance compared to previous generation (M5) instances, and give AWS customers access to Intel features exclusive to 3rd Gen Intel Xeon Scalable processors, such as Intel Total Memory Encryption (TME) and AVX-512. They are available in 9 different sizes, with up to 128 vCPUs and up to 512 GiB of memory. Full performance details are as follows:

Instance vCPU Memory (GiB)
m6i.large 2 8
m6i.xlarge 4 16
m6i.2xlarge 8 32
m6i.4xlarge 16 64
m6i.8xlarge 32 128
m6i.12xlarge 48 192
m6i.16xlarge 64 256
m6i.24xlarge 96 384
m6i.32xlarge 128 512

The m6i.32xlarge offers up to 198,080 SAPS per instance, as recently demonstrated using the SAP Sales and Distribution (SD) Standard Application Benchmark. This calculation was reviewed and certified by the SAP benchmark council.

Why SAP customers continue to choose AWS and Intel

This release is the latest milestone in AWS and Intel’s 10+ year partnership co-innovating for SAP customers on the cloud. Today, thousands of customers run SAP on EC2 instances powered by Intel Xeon for a multitude of reasons, including experience, choice, reliability, and security. Let’s talk a bit about each.

First is our unmatched experience supporting SAP and SAP customers in the cloud. In 2008, SAP started using AWS and Intel-based EC2 instances to support their own innovation journey. In 2011, we worked with SAP to certify early EC2 instances (m2.4xlarge) for SAP Business Suite production workloads. We shortly followed with the certification of EC2 instances for SAP HANA in 2013, and together, AWS and Intel have continued to break new ground in this space since. For example, when we announced Amazon EC2 X1 instances in 2016, they were the first multi-terabyte capable cloud instances purpose-built for memory-intensive workloads like large SAP HANA databases. And at the same time, SAP HANA was optimized on Intel architecture. This experience and consistent pace of innovation is a key reason that industry leaders continue to choose AWS and EC2 instances powered by Intel for their mission-critical SAP workloads. For example, BP runs a very large 16 TB production SAP system that underpins its mission-critical, 24×7 fuels business on AWS.

Another reason is the broad choice of instance types, which allows you to optimize costs by aligning your unique workload requirements with the right blend of CPU and memory. For example, Zalora started with 244 GB EC2 instances to support their HANA database, and within two years had expanded their cloud consumption by nearly 900 percent. Now they run on much larger 2 TB Amazon EC2 X1 instances. FAST Retailing, the parent company of Uniqlo, uses 4 and 6 TB Amazon EC2 High Memory instances to support their continued business growth, and can change instance sizes in minutes with minimal downtime.

And of course, customers choose AWS because it is the world’s most secure, reliable, and extensive cloud platform, and Amazon EC2 instances powered by Intel are a cornerstone of the AWS infrastructure. The AWS Global Infrastructure has 81 Availability Zones across 25 geographic regions (with 2x more regions with multiple AZs than the next closest provider) — and we have plans to launch 15 more Availability Zones and five more AWS Regions in the near future. Customers can take advantage of this infrastructure footprint to build highly available SAP systems far more easily and cost-effectively than is possible on-premises.

Get started with M6i today

Customers can start using our M6i instances via EC2 Savings Plans, Reserved, On-Demand, and Spot instances. These instances are generally available today in AWS US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt, Ireland), and Asia Pacific (Singapore) Regions.

To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the M6i instances page, or visit the Intel 3rd Gen Intel Xeon processor page.
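If you want to confirm programmatically which Availability Zones in a region offer the M6i sizes you need before launching, a minimal Python (boto3) sketch along these lines can help; the region and instance sizes queried here are just examples.

import boto3

# Check which Availability Zones in a region offer selected M6i sizes
ec2 = boto3.client("ec2", region_name="eu-central-1")   # example region (Frankfurt)

offerings = ec2.describe_instance_type_offerings(
    LocationType="availability-zone",
    Filters=[{"Name": "instance-type", "Values": ["m6i.2xlarge", "m6i.32xlarge"]}],
)

for offer in offerings["InstanceTypeOfferings"]:
    print(offer["InstanceType"], "is available in", offer["Location"])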

Thanks,
Steve and Anurag

High availability design and solution for SAP NetWeaver installations with Oracle Data Guard (Fast-Start Failover)

Feed: AWS for SAP.
Author: Lalit Mangla.

Introduction

Many SAP customers are still running their mission-critical SAP workloads on Oracle databases with different combinations of operating systems (IBM AIX, HP-UX, Red Hat Enterprise Linux / SUSE Linux Enterprise Server) in their on-premises environment. The challenge in their cloud adoption journey is to migrate Oracle-based workloads as-is using a "lift and shift" approach to get the immediate benefit of reduced TCO (Total Cost of Ownership) from running in the cloud. Customers commonly select this approach as an interim step while deciding on their long-term SAP strategy for beyond 2027.

While preparing for the migration, one of the most frequently asked questions is: how do we make our Oracle database highly available in the AWS Cloud? The answer is that there are multiple options available with Amazon Web Services (AWS), which can be divided into Oracle-native and third-party solutions.

  1. Oracle Data Guard
  2. SIOS LifeKeeper
  3. Veritas Infoscale

On premises, the typical approach for making Oracle workloads highly available depends on a “shared storage solution” together with virtual IP for network failover, all under the orchestration of proprietary hardware vendor tools.

In this blog, we will dive into Oracle’s native database high availability solution Oracle Data Guard with fully automated primary to standby failover using Oracle Data Guard Fast Start Failover (FSFO) on AWS.

Note: This particular scenario has been tested on Oracle Database release 12.1.0.2.0 and Oracle Enterprise Linux OL7.9-x86_64. You can select the required combination of operating system and database by referring to the SAP Product Availability Matrix. SAP NetWeaver (7.0x to 7.5) products requiring Oracle Database 18c (min 18.5.0) or 19c (min 19.5.0) must run on Oracle Enterprise Linux (OEL) 6.4 or higher. This applies to both the database and the application servers that require the Oracle Client, as per SAP Note 105047 (a valid S-user able to connect to SAP ONE Support Launchpad is required to read the mentioned SAP note).

Architecture Diagram

Architecture Diagram of Oracle Data Guard with FSFO

Brief description of Architecture components

  • Primary node – A Data Guard configuration contains one production database, also referred to as the primary database, that functions in the primary role. This is the database that is accessed by all of your application servers.
  • Standby node – A standby database is a transactionally consistent copy of the primary database. If the primary database becomes impaired, the standby database will be promoted to become the primary database.
  • Observer node – The observer is a separate OCI (Oracle Call Interface) client-side component that runs on a different server from the primary and standby databases and monitors the availability of the primary database.

In this blog we will focus on how to build Oracle Data Guard (DG) with Fast-Start Failover and an observer node, without covering the SAP application layer. The same underlying Oracle DG solution will work for all SAP application scenarios, regardless of whether it's a distributed or highly available installation. We'll also cover some of the cost benefits of using Oracle Data Guard with Linux environments.

For the application tier, we recommend you follow the best practice of deploying the application servers across two or more Availability Zones. For the database (DB) tier, to ensure high availability in the event of an AZ failure, you should deploy the primary and standby nodes in two different Availability Zones (AZs) and the observer node in a third AZ. Each Availability Zone is fully isolated and connected through a low-latency network. If one DB instance fails, an instance in another Availability Zone handles the requests of the failed instance after failover.

The Oracle Data Guard solution enables customers to deploy an HA cluster across AWS Availability Zones (AZs) in a region for SAP NetWeaver-based applications. Data Guard maintains standby databases as transactionally consistent copies of the production database. If the production database becomes unavailable because of a planned or an unplanned outage, Oracle Data Guard can switch the standby database to the primary role, minimizing the downtime associated with the outage.

Fast-Start Failover enables an automated failover to the standby database in case the primary database goes down due to an incident or network loss. An observer process is used to monitor the network connectivity and availability of the databases. The observer is a separate OCI client-side component that runs on a different server from the primary and standby databases and monitors the availability of the primary database. It is recommended to place the observer node in another Availability Zone (not with the primary or standby Oracle node) so that it is not affected by an outage of the Availability Zones hosting the database nodes.

Prerequisites

Compute

Three nodes are required to configure Oracle Database HA solution with Fast Start Failover (FSFO)

  1. Two nodes will work as primary and standby nodes for the Oracle database
  2. The third node, deployed in an Auto Scaling group (minimum 1, maximum 1, desired capacity 1, across all AZs in the region), will work as the observer node (build a launch template AMI using the steps from the "Observer node setup" section in this document) to centralize the creation, maintenance, and monitoring of the Oracle Data Guard configuration for the high availability cluster.

Recommended Sizing

Recommended Sizing for Compute

* This is just an example. Instance size is to be adjusted to your Oracle DB size

Storage

Following are recommendations on the Amazon Elastic Block Store (Amazon EBS) allocation for Oracle when moving from on premises to AWS. These need to be adjusted accordingly, whether you are performing a homogeneous or heterogeneous migration of your existing SAP workload from on premises to AWS.

To get the best performance from your database, you must configure the storage tier to provide the IOPS and throughput that the database needs. This is a requirement for Oracle Database running on Amazon Elastic Compute Cloud (Amazon EC2). If the storage system does not provide enough IOPS to support the database workload, you will have sluggish database performance and a transaction backlog. However, if you provision much higher IOPS than your database actually needs, you will have unused capacity.

Since you are migrating the Oracle workload from on premises to AWS, the best way to estimate the IOPS necessary for your database is to query the system tables over a period of time and find the peak IOPS usage of your existing database. To do this, measure IOPS over a period of time and select the highest value. You can get this information from the GV$SYSSTAT dynamic performance view, which is a special view in Oracle Database that provides database performance information.
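A minimal sketch of that measurement is shown below, assuming the cx_Oracle driver is installed and a monitoring user has access to GV$SYSSTAT; it samples the cumulative I/O request counters twice and derives the average IOPS over the interval, which you would repeat over a longer period to find the peak.

import time
import cx_Oracle  # assumption: Oracle client libraries and cx_Oracle are installed

STATS = ("physical read total IO requests", "physical write total IO requests")

def total_io_requests(cursor):
    # GV$SYSSTAT values are cumulative counters since instance startup
    cursor.execute(
        "SELECT SUM(value) FROM gv$sysstat WHERE name IN (:1, :2)", STATS
    )
    return cursor.fetchone()[0]

# assumption: connection details of the existing on-premises database
conn = cx_Oracle.connect("monitoring_user", "password", "dbhost:1521/AB3")
cur = conn.cursor()

interval = 60  # seconds
start = total_io_requests(cur)
time.sleep(interval)
end = total_io_requests(cur)

print(f"Average IOPS over the last {interval}s: {(end - start) / interval:.0f}")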

General Purpose SSD volumes (GP3) are sufficient for most Oracle Database workloads. If you need more IOPS and throughput than GP3 can provide, Provisioned IOPS (PIOPS) is the best choice. PIOPS can provide up to 64,000 IOPS per volume for Nitro-based instances and 32,000 IOPS per volume for other instance families.

Follow the disk layout based on the Oracle recommendations, e.g. using Logical Volume Manager (LVM) with striping to achieve better EBS throughput and the best performance at optimal cost. Oracle Database uses disk storage heavily for read/write operations, so we highly recommend that you use only instances optimized for Amazon Elastic Block Store (Amazon EBS). Amazon EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS. Bandwidth and throughput to the storage subsystem are crucial for good database performance. Distributing redo log files across multiple disks will also help to manage the risk of disk corruption.

Recommended storage layout and sizing

* This is just an example. Instance size is to be adjusted to your Oracle DB size

Elastic File systems (EFS)

Mount two additional Amazon Elastic File System (Amazon EFS) file systems on both the primary and standby database nodes – /sapmnt/<SID> and /usr/sap/trans/. The size of these file systems varies from development to production systems and depends entirely on system usage.

Oracle Data Guard Filesystem details

Solutions Implementation

Infrastructure Setup

We will use multiple AWS Availability Zones to configure high availability for the Oracle database nodes. AWS Availability Zones are physically separated by a meaningful distance, many kilometers, from any other Availability Zone, although all are within 100 km (60 miles) of each other. Availability Zones have low-latency network connectivity and meet customers' high availability requirements for synchronous database replication. A key requirement for setting up Oracle Data Guard is that both the primary and standby database hosts must be identical. This means:

  1. Same Oracle Enterprise Linux patch level and kernel on both hosts
  2. Same parameter settings on database and operating system (e.g. nfiles)
  3. The firewall on Oracle Enterprise Linux 6.4 and above is enabled by default and needs to be disabled on the database and observer nodes for communication.
  4. An identical Oracle version with the same database patch set level is recommended on both nodes.
  5. Identical file system structure, especially for SAP data and Oracle home.
  6. The databases must be operated in ARCHIVELOG mode
  7. Use of server parameter file (SPFILE)
  8. “SAP ONE Support Launchpad” access is required

Refer to SAP note 105047 on Oracle Data Guard for an SAP environment.

Database Installation on Primary and Standby nodes

Download the following software from the SAP Marketplace with the authorized credentials –

  1. Software Provisioning Manager 1.0 (SWPM) with latest patch level
  2. SAP Netweaver 7.5 DVDs (required version)
  3. ORACLE 12.1 64-BIT RDBMS
  4. ORACLE 12.1.0.2 Client
  5. Latest SAP Kernel
  6. SAP Host Agent
  7. SAPCAR

The first step in setting up the SAP system in a distributed or high-availability (HA) scenario is to install the ABAP central services instance (ASCS). Once that is done, proceed with the database installation, followed by the Primary Application Server instance (PAS) installation.

Complete the Oracle database installation (Oracle 12.1 64-bit RDBMS on Linux x86_64) using SWPM on the primary database node. On the standby node, you just need to install the Oracle binaries, as the rest of the database and redo log files will be created during the Oracle standby setup process.

For more details: Refer to SAP Note 1915301 – Database Software 12c Installation on UNIX.

Standby node setup and sync with Primary node

There are many ways of setting up the Standby node. The easiest and recommended way is to use Oracle Recovery Manager (RMAN). The standby database can be created from an OFFLINE or ONLINE backup of the production database. Amazon Simple Storage Service (Amazon S3) can be leveraged to take the database backup for the restoration purpose.

Set up the Oracle standby by setting the environment variables, adding standby log files and enabling flashback by following the instructions in the Oracle Help Center. Make sure that both database nodes can communicate, and validate the connectivity with the tnsping command.

Oracle Data Guard Fast-Start Failover configuration (FSFO) –

At this point we have a primary database and a standby database, so now we need to configure Data Guard Broker to manage them. Data Guard command-line interface (DGMGRL) enables you to manage a Data Guard broker configuration. There are several options when it comes to setting up the protection mode with Oracle Data Guard. See below:

Different Mode types with Data Guard

To set up Fast-Start Failover, we will be using the "Max Availability" protection mode. The steps are:

  1. Configuring Data Guard Broker – Open primary database and mount standby database first. The DMON process (Data Guard Monitor) on both database nodes is set to active by setting the dg_broker_start to TRUE. Make sure that the listener on the standby database is started and that the database can be accessed with SQL*Net.
  2. Start the DG_BROKER PROCESS (on both databases nodes).

    SQL> ALTER SYSTEM SET dg_broker_start=true;
    System altered.

  3. On the primary DB server, issue the following command to login to dgmgrl
  4. dbprnode3:oraab3 44> dgmgrl sys/"Password"@AB3
    DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production
    Copyright (c) 2000, 2013, Oracle. All rights reserved.
    Welcome to DGMGRL, type "help" for information.
    Connected as SYSDBA.

  5. On the primary server, issue the following command to register the primary server with the broker.
  6. DGMGRL> create configuration my_dg_config_1 AS PRIMARY DATABASE IS AB3 CONNECT IDENTIFIER IS AB3;
    Configuration "my_dg_config_1" created with primary database "ab3"
    DGMGRL>

  7. Add the standby database (you can issue these commands from any of the node)
  8. DGMGRL> ADD DATABASE AB3_STBY AS CONNECT IDENTIFIER IS AB3_STBY MAINTAINED AS PHYSICAL;
    Database "ab3_stby" added
    DGMGRL>

  9. Now, we enable the new configuration upon registration of both nodes
  10. DGMGRL> ENABLE CONFIGURATION;
    Enabled.
    DGMGRL>

  11. The following command show how to check the configuration –
  12. DGMGRL> SHOW CONFIGURATION;
    Configuration - my_dg_config_1
    Protection Mode: MaxPerformance
    Members:
    ab3         - Primary database
    ab3_stby    - Physical standby database
    Fast-Start Failover: DISABLED
    Configuration Status:
    SUCCESS (status updated 23 seconds ago)

  13. By default, the MaxPerformance protection mode is enabled; it will be changed to Max Availability as part of the observer node setup. Check the status of the standby database from the broker (these commands can be issued from any node once the configuration is enabled).
  14. DGMGRL> show database AB3_STBY
    Database - ab3_stby

    Role: PHYSICAL STANDBY
    Intended State: APPLY-ON
    Transport Lag: 0 seconds (computed 1 second ago)
    Apply Lag: 0 seconds (computed 1 second ago)
    Average Apply Rate: 1.00 KByte/s
    Real Time Query: OFF
    Instance(s): AB3

    Database Status:
    SUCCESS

  15. Check the status of the Primary database from the broker
  16. DGMGRL> show database AB3

    Database - ab3

    Role: PRIMARY
    Intended State: TRANSPORT-ON
    Instance(s): AB3

    Database Status:
    SUCCESS

Note: The database should only be mounted and never opened on the standby node. The listener must be up and running on both nodes.

Ensure you can manually switch over the database from the primary to the standby node and vice versa to validate the configuration before enabling FSFO and the other parameter settings.

Reconnect SAP instance to database

There are different possible approaches to reconnect the SAP instance to the Oracle database after a failover/disaster. We recommend handling the database-level role change by creating a database trigger that controls the database service at startup. This helps to avoid any changes to the SAP profiles and SQL*Net profiles during a failover/disaster situation.

Create the database trigger and execute as per below commands –

SQL> !more /home/oraab3/cre_trig1.sql
create trigger my_sap_trig1 after startup on database
declare
v_role varchar(30);
begin
select database_role into v_role from v$database;
if v_role = 'PRIMARY' then
DBMS_SERVICE.START_SERVICE('AB3-HA1');
else
DBMS_SERVICE.STOP_SERVICE('AB3-HA1');
end if;
end;
/

SQL> connect /as sysdba;
Connected.

SQL> @/home/oraab3/cre_trig1.sql

Trigger created.

Note: Make sure the StaticConnectIdentifier property is correctly reflected in the listener.ora file on both the primary and standby database nodes, as you will be using a static service descriptor. The property can be set and validated using the following DGMGRL commands.

DGMGRL> edit database 'AB3' set property
staticconnectidentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbprnode3)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=AB3_DGMGRL)(INSTANCE_NAME=AB3)(SERVER=DEDICATED)))';

DGMGRL> edit database 'AB3_STBY' set property
staticconnectidentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbprnode4)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=AB3_STBY_DGMGRL)(INSTANCE_NAME=AB3)(SERVER=DEDICATED)))';

DGMGRL> show database AB3_STBY StaticConnectIdentifier
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbprnode4)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=AB3_STBY_DGMGRL)(INSTANCE_NAME=AB3)(SERVER=DEDICATED)))'

The following SID_DESC entries are recommended for the database and observer in the listener.ora file on both nodes. Make sure you validate these entries after restarting the listener:

Data Guard FSFO Configuration table

Observer Node Setup

In this section, we will setup the observer node and validate the configuration to make sure that both the Primary and Standby nodes can be managed via the observer node as well.

  1. Setup the Observer node using the same Oracle Enterprise Linux image (OL7.9-x86_64-HVM-2020-12-07) and install the Oracle client (Oracle 12.1.0.2 Client)
  2. Add the username “observer” on the observer node
  3. Adapt the sqlnet.ora , tnsnames.ora and environment variables for the observer node from the primary database node
  4. setenv PATH /oracle/AB3/121/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/sbin:/usr/local/sbin
    setenv DB_SID AB3
    setenv dbms_type ORA
    setenv dbs_ora_tnsname AB3
    setenv ORACLE_SID AB3
    setenv ORACLE_HOME /oracle/AB3/121
    setenv ORACLE_BASE /oracle/AB3
    setenv NLS_LANG AMERICAN_AMERICA.UTF8
    setenv LD_LIBRARY_PATH $ORACLE_HOME/lib
    setenv TNS_ADMIN $ORACLE_HOME/network/admin

  5. You should be able to ping both the primary and standby node via tnsping command
  6. [observer@dbprnobs1 ~]$ tnsping AB3
    TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 11-APR-2021 11:11:34
    Copyright (c) 1997, 2014, Oracle.  All rights reserved.
    Used parameter files:
    /oracle/AB3/121/network/admin/sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (HOST = dbprnode3) (PORT = 1521))) (CONNECT_DATA = (SID = AB3) (GLOBAL_NAME = AB3.WORLD)))
    OK (0 msec)
    [observer@dbprnobs1 ~]$ tnsping AB3_STBY
    TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 11-APR-2021 11:11:29
    Copyright (c) 1997, 2014, Oracle.  All rights reserved.
    Used parameter files:
    /oracle/AB3/121/network/admin/sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (HOST = dbprnode4) (PORT = 1521))) (CONNECT_DATA = (SID = AB3) (GLOBAL_NAME = AB3.WORLD)))
    OK (0 msec)

  7. Change the protection mode – Check the protection mode first:
  8. SQL>  SELECT PROTECTION_MODE, PROTECTION_LEVEL FROM V$DATABASE;
    PROTECTION_MODE      PROTECTION_LEVEL
    -------------------- --------------------
    MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE
    DGMGRL> EDIT DATABASE ab3 SET PROPERTY LogXptMode='SYNC';
    Property "logxptmode" updated
    DGMGRL> EDIT DATABASE ab3_stby SET PROPERTY LogXptMode='SYNC';
    Property "logxptmode" updated

    Then change the protection mode of configuration to Max Availability Protection.

    Then check if the protection level has been changed.
    SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL FROM V$DATABASE;
    PROTECTION_MODE      PROTECTION_LEVEL
    -------------------- --------------------
    MAXIMUM AVAILABILITY MAXIMUM AVAILABILITY

  9. Data Guard properties: Set the Delay for applying the redo to 2 mins (you can adjust accordingly as per your database requirements) to apply the logs to the standby node:
  10. DGMGRL> EDIT DATABASE AB3 SET PROPERTY DelayMins='2';
    DGMGRL> EDIT DATABASE AB3_STBY SET PROPERTY DelayMins='2';

  11. Configure Fast-Start Failover: Specify the target standby database with the FastStartFailoverTarget property on the primary, and set the primary as the FSFO target on the standby.
  12. DGMGRL> EDIT DATABASE ab3 SET PROPERTY FastStartFailoverTarget = 'ab3_stby';
    Property "faststartfailovertarget" updated
    DGMGRL> EDIT DATABASE ab3_stby SET PROPERTY FastStartFailoverTarget = 'ab3';
    Property "faststartfailovertarget" updated
    DGMGRL>

  13. Set the FastStartFailoverThreshold property: This property controls how long the observer waits before initiating a failover. The default is 30 seconds; setting it to a higher value overrides the default and gives the DBA a longer period to stop the countdown if needed.
  14. DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 600;
    Property "faststartfailoverthreshold" updated

  15. Start DGMGRL and the observer. After all the properties and setup are in place, you can issue these commands from either the primary or the standby node.
  16. [observer@dbprnobs1 ~]$ dgmgrl sys/PASSWORD@AB3
    DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production
    Copyright (c) 2000, 2013, Oracle. All rights reserved.
    Welcome to DGMGRL, type "help" for information.
    Connected as SYSDBA.
    DGMGRL> connect sys@AB3
    Password:
    Connected as SYSDBA.
    DGMGRL> START OBSERVER;
    Observer started

  17. The following commands are recommended to start (in the background) and stop the observer process:
  18. [observer@dbprnobs1 ~]$ dgmgrl -logfile observer.log sys/Password@AB3 "start observer" &
    5666
    [observer@dbprnobs1 ~]$ dgmgrl -logfile observer.log sys/Password@AB3 "stop observer"

  19. Enable Fast-Start Failover: Make sure flashback is ON on both the primary and standby databases before enabling Fast-Start Failover.
  20. DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
    Succeeded.
    DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 3600;
    Property "faststartfailoverthreshold" updated
    DGMGRL> ENABLE FAST_START FAILOVER;
    Enabled.
    DGMGRL> show fast_start failover
    Fast-Start Failover: ENABLED
    Threshold: 3600 seconds
    Target: ab3_stby
    Observer: dbprnobs1
    Lag Limit: 30 seconds (not in use)
    Shutdown Primary: TRUE
    Auto-reinstate: TRUE
    Observer Reconnect: (none)
    Observer Override: FALSE
    Configurable Failover Conditions
    Health Conditions:
    Corrupted Controlfile YES
    Corrupted Dictionary YES
    Inaccessible Logfile NO
    Stuck Archiver NO
    Datafile Offline YES
    Oracle Error Conditions:
    (none)

    Solution Testing

    Operational readiness is a key aspect before the go-live of your mission-critical SAP application. As a solution architect, you need to ensure the application meets its RTO and RPO requirements before go-live.

    Our recommended approach is to validate the following scenario by failing over from the primary node to the standby database node, along with SAP application-level testing.

    The Data Guard monitor process (DMON) is an Oracle background process that runs for every database instance that is managed by the broker. When you start the Data Guard broker, a DMON process is created; it is responsible for monitoring the health of the broker configuration and for ensuring that every database maintains a consistent description of the configuration.

    If the primary database node goes down due to a disk failure or a network outage, the primary database will not be available for querying and the SAP work processes stay in ‘reconnect’ mode, waiting for the database to come up on the standby node. Once the database has been moved to the standby node, the SAP work processes are able to reconnect to the standby database and resume normal operation. This proves that the observer is able to move the database between the nodes without any manual intervention.

    It is also recommended to manually test the failover first. You can use the following commands to switch over back and forth to validate the failover scenario.

    Data Guard Failover Scenario

    DGMGRL> switchover to ab3_stby;
    Performing switchover NOW, please wait...
    Operation requires a connection to instance "AB3" on database "ab3_stby"
    Connecting to instance "AB3"...
    Connected as SYSDBA.
    New primary database "ab3_stby" is opening...
    Operation requires start up of instance "AB3" on database "ab3"
    Starting instance "AB3"...
    ORACLE instance started.
    Database mounted.
    Switchover succeeded, new primary is "ab3_stby"
    DGMGRL>
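    To switch back to the original primary after validation, you can run the reverse operation, for example from the observer node. This is only a sketch; the connect aliases and password are the placeholders used in this post:

    [observer@dbprnobs1 ~]$ dgmgrl sys/PASSWORD@AB3_STBY "switchover to ab3"
    [observer@dbprnobs1 ~]$ dgmgrl sys/PASSWORD@AB3 "show configuration"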

    Pros & Cons of the different options available for migration to Oracle Enterprise Linux on AWS

    • Cost – Oracle Data Guard: part of the Oracle Enterprise license. 3rd party solutions (e.g. SIOS, Veritas): additional software license cost involved.
    • Implementation efforts – Oracle Data Guard: medium to high. 3rd party solutions: medium.
    • Technical constraints – Oracle Data Guard: source must be Oracle Database 10g or higher; destination has to be on 11.4 or higher. 3rd party solutions: see the SIOS Protection for Linux Support Matrix and the Veritas Infoscale Release Notes – Linux.
    • Special features – Oracle Data Guard: Oracle supported. 3rd party solutions: SAP certified solution and supported by 3rd parties (e.g. SIOS and Veritas).
    • Skills required – Oracle Data Guard: infrastructure specialist, Oracle DBA. 3rd party solutions: infrastructure specialist, SAP Basis.

    Conclusion

    In this blog, we have shown how to enable the Fast-Start Failover option for the Oracle Database native HA solution on AWS by using Oracle Data Guard with an observer node for SAP applications. Please note, if you have an Oracle runtime license acquired from SAP you can set up and use Oracle Data Guard with the Fast-Start Failover option; however, SAP will not provide support for the Fast-Start Failover component. If Oracle support is also required for the Fast-Start Failover component, then an Oracle support contract is required directly from Oracle.

    To learn more about why 5000+ active customers run SAP on AWS, visit aws.amazon.com/sap

Introducing the SAP Lens for the AWS Well-Architected Framework


Feed: AWS for SAP.
Author: John Studdert.

Introduction

SAP applications represent the financial system of record and business process backbone for most of the world’s enterprises. Because SAP workloads are both mission-critical and often resource-intensive, architectural decisions can have large impacts. Understandably, customers want to make sure their architecture supports their unique business and technical requirements.

AWS has been supporting SAP workloads since 2008, when SAP themselves started using AWS to support their own innovation journey, and SAP customers have been running production SAP on AWS since 2011. Over this 13+ year journey, we have learned a lot about how to realize the full benefits of AWS for SAP applications. Customers have increasingly asked that we give them structured guidance to help them apply these learnings to their own SAP migration journeys. Today, we’re happy to announce the SAP Lens for the AWS Well-Architected Framework, which will do just that.

Since 2015, AWS customers have consulted the AWS Well-Architected Framework to build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization — AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures, and implement designs that can scale over time.

The guidance of the Well Architected framework is relevant to all AWS customers, but as we have done with other industry and technology domains, such as analytics, machine learning and financial services, we saw an opportunity to extend and tailor the recommendations to the specific needs of SAP customers.

What is the SAP Lens?

The SAP Lens is a collection of customer-proven design principles and best practices to help you adopt a cloud-native approach to running SAP on AWS. These recommendations are based on insights that AWS has gathered from customers, AWS Partners, and our own SAP technical specialist communities.

The lens highlights some of the most common areas for assessment and improvement. It is designed to align with, and provide insights across, the five pillars of the AWS Well-Architected Framework:

  • Operational excellence focuses on running and monitoring systems to deliver business value, and continually improving processes and procedures. SAP topics include monitoring and automation approaches.
  • Security focuses on protecting information and systems. SAP topics include data protection and system access.
  • Reliability focuses on ensuring a workload performs its intended function correctly and consistently when it’s expected to. SAP topics include maximizing the availability benefits of cloud and reviewing the protection and recovery approaches for various failure scenarios.
  • Performance efficiency focuses on using IT and computing resources efficiently. SAP topics include sizing and ensuring that performance benchmarks for SAP Support are met.
  • Cost optimization focuses on avoiding unnecessary costs. SAP topics include scaling and reservation strategies aligned with the workload requirements.

Across all pillars, we also focus on highlighting the support and compatibility requirements of SAP and the database and operating system vendors which make up the technology stack.

Diagram showing the key pillars of the AWS Well-Architected Framework: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization

Who should use the SAP Lens?

Whether you have been running your SAP workload on AWS for years or are just now migrating to AWS, you will find clear advice tailored to your specific circumstances.

We recommend that SAP technology architects, cloud architects, and team members who build, operate, and maintain SAP systems on AWS review the SAP Lens content and suggestions. With this guidance, the SAP Lens can help you make the appropriate design decisions in line with your business requirements.

Applying the lens to your architecture can validate the resiliency and efficiency of your design or provide recommendations to address the gaps that are identified. We expect customers to use the SAP Lens as a supplement to the AWS Well-Architected Framework.

Next Steps

The SAP Lens is available now in the AWS Documentation and as a PDF for customers to use in a self-service fashion. If you require additional expert guidance, contact your AWS account team to engage an SAP specialist solution architect or the AWS Professional Services SAP Specialty Practice.

AWS is committed to the SAP Lens as a living tool. As SAP’s products evolve and new AWS services become available, we will update the SAP Lens accordingly. Our mission will always be to help you design and deploy well-architected applications so that you can focus on delivering on your business objectives.

Learn more about supported SAP solutions, customer case studies and additional resources on our SAP on AWS website.

Licensing options for Microsoft Windows Server/ SQL Server-based SAP workloads on AWS


Feed: AWS for SAP.
Author: Sreenath Middhi.

Introduction:
Customers have been running Microsoft Workloads on Amazon Web Services (AWS) for over 12 years, longer than any other cloud provider. AWS has been running SAP workloads since 2008, which is also meaningfully longer than any other cloud provider. Today, more than 5000 active customers are running SAP on AWS, and many of them are running their SAP workloads on Microsoft SQL Server.

The objective of this blog is to provide guidance and resources for the planning and migration of SAP workloads running on Microsoft SQL Server databases to AWS. We will cover the licensing options to be considered while migrating these SAP workloads to AWS.

Assess Your License Options:

Customers with SAP workloads running on SQL Server databases should assess their current license model, business strategy, and operational efficiency as part of the migration strategy. The following diagram helps you identify the type of license you have and the mobility options.

SQL Server License Evaluation

*BYOL=Bring Your Own License

As shown in the figure, if the SQL Server licenses are bought from SAP or through an authorized SAP reseller, they are called Runtime licenses. Please refer to SAP Note 2139358 (SAP ONE Support Launchpad) to understand the changes and requirements to stay compliant with the runtime licenses.

As mentioned in the SAP note, SAP customers may continue to run SQL Server versions up to SQL Server 2012 on shared/hosted environments until 12 July 2022. After 12 July 2022, or when upgrading to SQL Server version 2014 or higher, you must run these databases on Dedicated Hosts to stay compliant.

The following section focuses on the migration options available for the SAP applications that are affected by the changes in these license terms.

1)SQL Server with Microsoft License Mobility:

For the SQL Server databases that have Microsoft License Mobility, you can bring these licenses to Amazon Elastic Compute Cloud (EC2) shared tenancy environments. You may refer to the SQL Microsoft License Mobility site to determine the number of cores required for various instance sizes.

To run Microsoft SQL Server under a BYOL model using EC2 shared tenancy instances:

  1. SQL Server licenses should be purchased from Microsoft or an authorized reseller of Microsoft licenses
  2. SQL Server licenses should have active Microsoft Software Assurance or subscription licenses with SA equivalent rights (*).

In the event that your SQL Server licenses are not covered by Software Assurance, there are restrictions on your ability to use your SQL Server licenses on EC2 instances. SQL Server databases running on version 2019 without Software Assurance are not eligible for deployment on EC2 due to Microsoft’s licensing terms that took effect on October 1, 2019.

(*)Note: Any licenses purchased from Cloud Solution provider (CSP) program are not eligible for BYOL

Please consult your specific Microsoft license agreements for information on how your software is licensed. You are solely responsible for complying with all applicable Microsoft licensing requirements, including the Product Terms. AWS recommends that you consult with your own advisors to understand and comply with the applicable Microsoft licensing requirements.

2)Amazon EC2 Dedicated Hosts

As mentioned in SAP Note 2139358 (SAP ONE Support Launchpad login required), SQL Server versions up to 2012 can be run on shared/hosted hardware until 12 July 2022. After this time, you may continue to run them on Dedicated Hosts. For customers that want to stay on SQL Server database, you may consider migrating the SAP runtime licenses to the Dedicated Hosts provided by AWS. Amazon EC2 Dedicated Hosts allow you to use your eligible software licenses from vendors such as Microsoft and Oracle on Amazon EC2, so that you get the flexibility and cost effectiveness of using your own licenses, but with the resiliency, simplicity and elasticity of AWS. An Amazon EC2 Dedicated Host is a physical server fully dedicated for your use, so you can help address corporate compliance requirements.

Amazon EC2 offers Dedicated Hosts with EC2 instance capacity fully dedicated for your use. Dedicated Hosts support different configurations (physical cores, sockets, and vCPUs), which allow you to select and run instances of different families and sizes depending on your business need. You may refer to EC2 Dedicated Hosts configurations here.

You can start using EC2 Dedicated Hosts by allocating a host using the AWS Management Console or the AWS Command Line Interface (AWS CLI) and then launching instances onto it (an example CLI sketch follows the list below). The capabilities available on Dedicated Hosts include:

  1. Multiple instance size support – Amazon EC2 Dedicated Hosts allow you to configure multiple instance sizes from an instance family. You can run different instance sizes within the same instance family on a Dedicated Host. Support for multiple instance types on the same Dedicated Host is available for the instance families listed here.
  2. Instance placement control – EC2 Dedicated Hosts allow you to launch the instances onto a specific Dedicated Host.
  3. Affinity – You have the option to keep instances attached to a host even if you stop and start it, by specifying instance affinity to a particular host.
  4. Monitoring – You may use AWS Config to continuously monitor and record the instances that are launched onto the Dedicated Hosts.
  5. Visibility of sockets and physical cores
  6. Integrated license management – You may use AWS License Manager to automate the tracking and management of your software licenses on EC2 Dedicated Hosts
  7. Automated management and automatic scaling
  8. Cross-account sharing
  9. Host recovery – if there is an unexpected hardware failure, host recovery automatically restarts your instances on a new host.
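For illustration, the following AWS CLI sketch allocates a Dedicated Host and launches an instance onto it. The instance type, Availability Zone, AMI ID, key pair name, and host ID below are placeholders that you would replace with your own values:

# Allocate a Dedicated Host with host recovery enabled (placeholder values)
aws ec2 allocate-hosts \
    --instance-type r5.8xlarge \
    --availability-zone us-east-1a \
    --quantity 1 \
    --auto-placement off \
    --host-recovery on

# Launch an instance onto the allocated host (use the host ID returned above)
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type r5.8xlarge \
    --key-name my-key-pair \
    --placement "Tenancy=host,HostId=h-0123456789abcdef0"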

3)License included Amazon EC2 instances:

One of the other options available is to purchase license included Amazon EC2 instances, which will include licenses for Windows Server and a SQL Server database. Depending on your use case, you may be able to provision the license included Amazon EC2 instances.

By choosing license included Amazon EC2 instances, you benefit from pay as you go model. This model allows you to pay for what you use and save on Windows Server license costs when you stop the Amazon EC2 instances.

For example, consider that you have non-production SAP instances that run on Windows Server, and you run these instances 60 hours a week. In this case, your Windows Server charges will only be for the 60 hours you use. As mentioned in SAP Note 2539944 (SAP ONE Support Launchpad login required), in order to be eligible for SAP support, you’ll have to purchase support for MS SQL Server and Windows Server when using the marketplace Amazon Machine Image (AMI).

As described in SAP Note 1656099 (SAP ONE Support Launchpad login required), Amazon Relational Database Service (RDS) for SQL server is not supported for WebAS ABAP/JAVA. For SAP Data Services specifically, Amazon Relational Database Service (RDS) for SQL Server is supported.

4)Migrate from SQL Server:

In 2020, SAP announced that support for SAP Business Suite 7 software is extended until the end of 2027, with an option to extend support until the end of 2030. You can refer to the announcement here. After this date, customers would need to migrate to SAP S/4HANA. It may be a good idea to evaluate and consider options for moving from SQL Server to the SAP HANA database to align with SAP’s product roadmap plans.

Amazon Web Services (AWS) and SAP have worked together closely to certify the AWS platform so that companies of all sizes can fully realize all the benefits of the SAP HANA in-memory computing platform on AWS. With SAP HANA on AWS you can:

  • Achieve faster time to value – Provision infrastructure for SAP HANA in hours versus weeks or months.
  • Scale infrastructure resources – As your data requirements increase over time so does your AWS environment.
  • Reduce cost – Pay for only the infrastructure resources that you need and use.
  • Bring your own license – Leverage your existing licensing investment with no additional licensing fees.
  • Achieve a higher level of availability – Combine Amazon EC2 Auto Recovery, multiple Availability Zones, and SAP HANA System Replication (HSR).

Assess your Windows Server Licenses:

The Microsoft product terms do not grant Microsoft License Mobility to Windows Server.  As a result, it is generally not compliant with Microsoft licensing terms for customers to bring Windows Server to the shared tenancy of Amazon EC2.

Amazon EC2 Dedicated Hosts are recommended for customers bringing Windows Server licenses to Amazon EC2.

For Windows Server to be eligible for bring your own license (BYOL) on EC2 Dedicated Host:

  1. The version must be Windows Server 2019 or a prior version.
  2. The license must either be purchased from Microsoft before October 1, 2019 or purchased as a true-up under an active Enterprise agreement that was effective before October 1 2019.
  3. If the license does not meet the terms stated above, Microsoft licensing terms do not permit BYOL per this announcement.

In addition, you can also use Windows Server AMIs provided by Amazon to run the latest versions of Windows Server on Dedicated Hosts. This is common for scenarios where you have existing SQL Server licenses eligible to run on Dedicated Hosts but need Windows Server to run the SQL Server workload. The current price for using Windows Server AMIs is $0.046 per hour per vCPU.

You may refer to this page to get the latest pricing information.

Conclusion:

We understand that your SAP applications are business critical and you need reliable global infrastructure and services to support your workloads. By following the guidelines shared in this blog post, you can deploy your SAP on SQL Server workloads on AWS to increase their flexibility and value with the world’s most secure, reliable, and extensive cloud infrastructure. You may choose to bring your own licenses, or modernize your SAP applications by migrating to the SAP HANA database or a Linux-based operating system, depending on your business needs and strategy. Please contact the SAP on AWS team for any assistance and to deep dive into your requirements.

Please refer to below links for further consideration

Microsoft SQL Server HA design for SAP on AWS

AWS License Manager

Migrate an On-premises Microsoft SQL Server database to Amazon EC2 using CloudEndure


Set up observability for SAP HANA databases with Amazon CloudWatch Application Insights


Feed: AWS for SAP.
Author: Balaji Krishna.

Introduction

SAP applications support mission-critical business processes, so customers want to be able to identify and resolve issues impacting their SAP HANA databases quickly. However, SAP workloads have unique requirements and consumption patterns, and getting HANA-specific intelligence can be complex. Further, when issues are identified, finding root cause can also be difficult.

Since 2019, AWS customers have been using Amazon CloudWatch Application Insights to solve these same challenges for Microsoft SQL Server databases and .NET-based Applications. Earlier this week, we announced that CloudWatch Application Insights now supports observability for SAP HANA databases. This will make it easier than ever before to understand the health of your HANA databases using native AWS services.

How CloudWatch Application Insights for SAP HANA works

You can easily onboard SAP HANA databases and set up monitors for relevant resources (e.g. memory, disk usage, etc.) to continuously analyze data, uncover potential problems, and help quickly isolate ongoing issues with your HANA-based applications or underlying infrastructure.

Application Insights analyzes metric patterns using historical data to detect anomalies, and continuously track errors and exceptions from HANA, operating system, and infrastructure logs. It correlates observations using a combination of classification algorithms and built-in rules. Then, it automatically creates dashboards that show the relevant observations and problem severity information to help you prioritize your actions. For common problems in HANA, such as an unresponsive system, failed database backups, memory leaks, or canceled I/O operations, it provides additional insights to help guide towards a root cause and steps for resolution.

Guided, intuitive setup of all HANA instances

CloudWatch Application Insights reduces the time it takes to set up monitoring by scanning the HANA systems in a specific Resource Group, providing a customizable list of recommended metrics and logs, and setting them up on CloudWatch to provide visibility into various resources, such as Amazon EC2 and Elastic Load Balancers (ELB), Operating system, and SAP HANA. It also sets up dynamic alarms on monitored metrics which are automatically updated based on anomalies detected on historical data.

HANA onboarding process in CloudWatch Application Insights

Problem detection and notification

CloudWatch Application Insights detects signs of potential problems with SAP HANA, such as metric anomalies and log errors. It correlates these observations to surface potential problems with the application and generates CloudWatch Events, which can be configured to receive notifications or take actions. This eliminates the need for you to create individual alarms on metrics or log errors. In the example below, you can see how we can debug an out-of-memory issue with the HANA instance.

Dashboard showing HANA level 4 alerts and HANA out of memory events

Finally, CloudWatch Application Insights anomaly detection leverages Amazon SageMaker to apply statistical and machine learning algorithms on logs and metrics data from systems and applications, determine normal baselines, and surface anomalies with minimal user intervention. This is accomplished by using the metric’s past data (e.g. hanadb_cpu_percent) to create a model of the metric’s expected values. The model assesses both trends and hourly, daily, and weekly patterns of the metric.

Anomaly detection for HANA

Get started with Application Insights for SAP HANA today

Minimizing disruption to HANA databases and the business processes they support is critical for SAP customers. Thousands of customers run HANA on AWS largely because running on a secure, reliable, and performant cloud infrastructure like AWS makes that easier than is possible on-premises. This week’s launch of Application Insights for SAP HANA makes it even easier for customers by adding SAP-specific metrics and insights with minimal configuration required.

To set up monitoring for your SAP HANA databases today, refer to the Amazon CloudWatch Application Insights documentation for detailed tutorials on how to get started.

To learn why AWS is the platform of choice and innovation for 5000+ SAP customers and hundreds of partners, visit aws.com/sap.

Automating SAP installation with open-source tools


Feed: AWS for SAP.
Author: Guilherme Sesterheim.

Introduction

We’ve already demonstrated in our first blog post how to provision the infrastructure for SAP applications using Terraform, and in our second blog post we added in automation of SAP software installation using Systems Manager. Now it is time to go deeper with open-source common tools like Jenkins and Ansible to have the SAP installation in a comprehensible single pipeline. This approach brings a few benefits added on top of the other alternatives:

  1. Helps customer teams to be compliant with auditable policies related to configuration as code, since in this blog post we will automate all of SAP software installation.
  2. Turns the SAP installation into a repeatable process, making the quality of the outcome easier to improve, since it can be simulated and run several times using the same source of information.

Another good option to deploy SAP is AWS Launch Wizard. Customer teams can build SAP systems that align with AWS best practices rapidly from the AWS Console following a guided experience designed for SAP administrators.

To help achieve goals such as increasing deployment efficiency and quality, many customers are automating as many repeatable processes as they can. Jenkins is an industry-standard orchestration environment that helps to put together all the required pieces. It runs the same commands we would otherwise run manually using Bash on Linux.

At the end of this article you’ll have a Jenkins pipeline whose outcome is shown in the image below:

Jenkins output with successful result

The above pipeline has capabilities to build all the needed infrastructure and install the actual software for non-HA (1) SAP Primary Application Server (PAS), SAP Hana Database and SAP ABAP SAP Central Services (ASCS).

To help you make use of Jenkins and Ansible to fully automate your SAP software installation, we’ve open sourced code to a GitHub repository for this installation automation. It will be used together with the GitHub repository we created for provisioning your infrastructure, which was explained in our first blog post.

Understanding the pipeline steps

  1. Checkout SCM – this is when Jenkins looks for the code on GitHub
  2. Prepare – Jenkins checks if all the required variables for the run are present (variables are described on section “Preparing Jenkins”)
  3. Check ENV states – checks if there is an S3 bucket available for storing the final Terraform state file, and also if there’s already an environment up using this automation. IMPORTANT: this step is going to create one bucket in your provided account. The bucket name will be “sap-install-bucket-” followed by a random number. Terraform will store its state file in this bucket.
  4. Create ENV – The infrastructure automation based on Terraform creates all the needed infrastructure for this installation. To understand what’s going to be created, review our first blog post.
  5. Install Hana and ASCS – this is a place holder, meaning that the next two steps (6 and 7) run in parallel.
  6. Install Hana – installs Hana on the instance created by Terraform.
  7. Install ASCS – Installs ASCS on its instance.
  8. Install PAS – installs PAS on its dedicated instance after Hana and ASCS are finished.
  9. Notify – a simple terminal notification stating the end of processing.
  10. Post actions – Jenkins auto-generated step stating the end of the whole pipeline.

Why Ansible vs regular Bash script?

Ansible is a declarative language for configuring the operating system. It is robust and brings far more benefits than regular Bash. The main benefit of the way Ansible operates is:

You state the desired outcome in one declaration and Python code runs behind the scenes to achieve that state. Let’s take a look at one example from the main repository:

- name: Create directories if they don't exist
  file:
    path: "{{ item }}"
    state: directory
    mode: '0755'
  loop: "{{ folders_to_create }}"

This uses one single Ansible task to state several things:

  1. path – states the directory path I want to ensure is created.
  2. state – tells the command to create directories instead of files.
  3. mode – the permissions I want those folders to have.
  4. loop – means this will repeat according to the number of values inside the variable “folders_to_create”, which also makes the “item” reference in path work.

The most useful thing in Ansible is this state declaration. You just declare the state you want to reach and Ansible takes care of checking and performing the necessary steps to reach it. Let’s say that one of those three folders in the variable “folders_to_create” already exists. There’s no issue: Ansible will create the remaining two, and also fix the permissions of all three if it has to.

How to run the code

The installation automation repository has several folders, bringing together at least 12 repos that can be reviewed separately for your better understanding. Check the README files in each of the 12 folders mentioned in the main README to understand how each of them works.

1. Setting up the pre requisites

  1. Have access to a terminal on a Linux or Mac computer.
  2. Install Vagrant and VirtualBox on your computer.
  3. Have your SAP installation media files on a bucket in your AWS account to be used. Follow Launch Wizard’s guidelines on how to separate files between the buckets.
  4. For now, only HANA 1909 is fully tested for this scenario. You can use a different version as well, but keep in mind that you might have to tweak the code a bit for it to work.

2. Setting up Jenkins

  1. After cloning the installation automation repo, using a terminal, go to the folder “jenkins-as-code”, and type “sudo vagrant up”. Wait for this to complete. This might take around 10 minutes depending on your internet speed.
  2. When it’s done, open a browser window and type in “localhost:5555” and you will have your own Jenkins. Log in to it using the default user/password: admin/my_secret_pass_from_vault

3. Setting up the parameters

After logging in to Jenkins, go to Manage Jenkins > Manage Credentials. Here you will have to fill in the information for all the REQUIRED parameters. There are also some other optional parameters you can take a look at.

  1. AWS_ACCOUNT_CREDENTIALS – The AWS access key ID and secret access key for the IAM User you will use with Jenkins. Make sure you have a separate account for this demo only and provide administrator privileges to this user to avoid errors due to insufficient permissions.
    1. Example access key ID AKIA3EEGHLDKU6NTJYNZ and secret access key: nSrpAhTsPL81jVmFYjlYRtIVsKTHlFN82wyONh7X
  2. AMI_ID – Look for the AMI ID of the image named “Red Hat Enterprise Linux for SAP with HA and Update Services 8.2” on AWS Marketplace for the region you want to use (AMI IDs are specific in each AWS region). Subscribe to it and find AMI ID by clicking on the button “Launch new Instance”.
    1. Example: ami-0e459d519030c2bd7
  3. KMS_KEY_ARN – Create one customer managed key on your Key Management Service (KMS) and note down the ARN.
    1. Example: arn:aws:kms:us-east-1:764948313645:key/09fb3dfd-e0fa-4a78-aa12-8d69d96fce1e
  4. SSH_KEYPAIR_NAME – the name of the key pair file you use to SSH into AWS instances. You may create a new one if necessary through the AWS CLI, or in the AWS Console, under the EC2 console, select Key Pairs. IMPORTANT! Do not add “.pem” at the end of the name. Use just the first part (before the dot).
    1. mykeypair
  5. SSH_KEYPAIR_FILE – the actual creds.pem file. Upload it to Jenkins
    1. The “mykeypair.pem” file itself
  6. S3_ROOT_FOLDER_INSTALL_FILES – the S3 bucket and folder if applicable containing all your SAP media files. Follow the AWS Launch Wizard’s folder hierarchy for S/4HANA in the Launch Wizard documentation.
    1. Example: s3://my-media-bucket/S4H1909
  7. PRIVATE_DNS_ZONE_NAME – a private DNS zone name from Route53 for your SAP installation.
    1. Example: myprivatecompanyurl.net
  8. VPC_ID – VPC Id where to put the infrastructure to.
    1. Example: vpc-b2fa0ddf
  9. SUBNET_IDS – two PUBLIC subnet IDs have to be provided here (this is for future HA capabilities). Using public subnets is not advised for real scenarios; we’re using them here to make the demo simpler. For your real scenarios you should use private subnets with a bastion host or another strategy for reaching them and increasing security.
    1. Example: subnet-fec01a12,subnet-a615b465
  10. SECURITY_GROUP_ID – an already existing security group. IMPORTANT: make sure you add your own IP as the source CIDR in a rule allowing access on port 22 (SSH) to this security group.
    1. Example: sg-831778bb
  11. You are welcome to take a look at the other possible parameters. You can change the SIDs of the instances, default password, names, tags and some other important information for your installation.

4. Running the installation

Go back to Jenkins home, select “SAP Hana+ASCS+PAS 3 Instances” > “Spin up and install” > and then “Build now”. This process is going to take almost 2 hours to complete, and in the end you will have three EC2 instances with software installed to run the first as PAS, the second as ASCS, and the third as your HANA database, in your AWS account. The final output will be the image you’ve seen on the introduction part of this post.

As a last step of the installation, all three instances (PAS, ASCS and HANA) perform health checks to understand if the installation finished successfully or not. You can also do that by sshing into the instances and running “sapcontrol -nr 00 -function GetProcessList” using the <SID>adm user (ad0adm if you’re using the default SID) from terminal.
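For example, a quick manual check might look like the following. This is only a sketch; the key file name and SID are the defaults from this post, and the instance address is a placeholder:

# Connect to the PAS instance and check the SAP process list (placeholders in angle brackets)
ssh -i mykeypair.pem ec2-user@<PAS-instance-address>
sudo su - ad0adm -c "sapcontrol -nr 00 -function GetProcessList"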

To make it easier for you to test spinning up and down your SAP, there’s also the pipeline “SAP Hana+ASCS+PAS 3 Instances” > “Destroy env”. Once you trigger this one, Jenkins is going to look for the current Terraform state file and delete everything that previous execution has created.

Next steps

Ready to get started? Head straight to the installation automation repo and start testing on your environment.

Once your tests are finished, you are welcome to customize the repo to meet your specific needs. The repo’s folders have READMEs with more instructions about how each of them work to put all the pieces together and have SAP running in the end.

If you are looking for expert guidance and project support as you move your SAP systems to a DevOps model, the AWS Professional Services Global SAP Specialty Practice can help. Increasingly, SAP on AWS customers—including CHS and Phillips 66—are investing in engagements with our team to accelerate their SAP transformation. Please contact our AWS Professional Services team if you would like to learn more about how we can help.

Maintain an SAP landscape inventory with AWS Systems Manager and Amazon Athena


Feed: AWS for SAP.
Author: Noé Hoyos.

Introduction

Effective maintenance and operation of SAP systems rely on access to system information to support decision-making. Inquiries about, for example, SAP kernel version, installed ABAP components, or simply active SAP systems are often part of IT operation activities. Furthermore, these inquiries are typically more elaborate, for example, listing systems matching a particular version of both the SAP kernel and operating system kernel.

It is not uncommon for SAP administrators to keep an inventory of systems to help in the planning of maintenance activities. Typical places to store inventory data are text files or spreadsheets. Although these data sources provide quick access to inventory data, they are difficult to update and share with team members. More elaborate alternatives to keeping an inventory may involve extracting data directly from the SAP database or calling SAP transactions remotely, but these are difficult to scale as the SAP landscape grows. SAP products like Solution Manager keep updated inventory data, but querying the data is rather done through a User Interface (UI) or an Application Programming Interface (API).

Third-party configuration management tools can help capture some of this data, but AWS customers are often looking for cost-effective, scalable and highly available cloud-native solutions, where no additional infrastructure or software needs to be deployed by the customer, with low implementation and maintenance efforts involved.

In this blog we will show you how to use Amazon EventBridge, AWS Systems Manager Inventory, Amazon Athena and SAP Host Agent to maintain an SAP landscape inventory that is automatically updated and can be queried using standard SQL.

Solution overview

The following diagram shows the AWS services and components used to create an SAP landscape inventory that can be queried using Amazon Athena.

Solution architecture

We leverage the instance discovery and inventory features of SAP Host Agent to extract information from each SAP server in the landscape. Amazon EventBridge and AWS Systems Manager Run Command support the automation of calls to SAP Host Agent on a defined schedule. The automation also calls custom scripts to create inventory files in JSON format for AWS Systems Manager. The inventory JSON files are picked up by the AWS Systems Manager Agent (SSM Agent) to create an AWS Systems Manager Inventory.

AWS Systems Manager Resource Data Sync sends inventory data to an Amazon Simple Storage Service (Amazon S3) bucket. Finally, AWS Systems Manager Inventory prepares the inventory data stored in an Amazon S3 bucket and makes it available to Amazon Athena where it can be queried using standard SQL.

To demonstrate how an SAP landscape inventory is created with AWS Systems Manager we used the following systems:

  • An EC2 instance running an SAP (A)SCS instance.
  • An EC2 instance running SAP ERS, SAP gateway and SAP webdispatcher instances.
  • An EC2 instance running SAP PAS.
  • An EC2 instance running Oracle database.

For demonstration purposes, the IAM instance profile for these EC2 instances includes the AWS-managed policies AmazonS3ReadOnlyAccess and AmazonSSMManagedInstanceCore. These allow the EC2 instances to interact with Amazon S3 and use AWS Systems Manager service core functionality.

All systems in this SAP landscape use the Linux operating system and have the following software packages installed and configured:

  • SAP Host Agent 7.2
  • AWS SSM Agent version 3.1
  • AWS CLI
  • jq (to parse output of OS commands into JSON format)
  • dos2unix (to convert plain text files in DOS/MAC format to UNIX format)

To control the scope of the data collection process, each EC2 instance has these tags:

  • sap:inventory = yes
  • sap:sid = <SAP SID>

Replace <SAP SID> with the corresponding SID of your SAP system.
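If you prefer to apply the tags from the command line, a sketch with the AWS CLI is shown below; the instance ID and SID are placeholders:

aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key=sap:inventory,Value=yes Key=sap:sid,Value=SC1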

We used a single Amazon S3 bucket to store the following:

  • Shell scripts
  • SSM Inventory synchronization data
  • Amazon Athena query results

Before moving on to the walk through, verify that your SAP EC2 instances are integrated into AWS Systems Manager. Open AWS Systems Manager, navigate to Node Management, Fleet Manager and look for your EC2 instances. The following image shows our SAP systems being listed in AWS Systems Manager, Fleet Manager:

Systems Manager managed nodes

Walk through

Creating Scripts to Collect Custom Metrics

Create a shell script called SAPInventory.sh to call SAP Host Agent to discover running SAP instances and generate the corresponding inventory file in JSON format.

The following shell script obtains the list of running SAP instances and generates a corresponding JSON inventory file:

#!/usr/bin/sh
SHA=/usr/sap/hostctrl/exe/saphostctrl
SCNTRL=/usr/sap/hostctrl/exe/sapcontrol

# Get my EC2 ID
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
EC2ID=$(curl http://169.254.169.254/latest/meta-data/instance-id -H "X-aws-ec2-metadata-token: $TOKEN")

# Inventory file: SAP Instances
SSMINVSAPINST="/var/lib/amazon/ssm/${EC2ID}/inventory/custom/SAPInstanceList.json"

# Inventory header
echo -n -e "{\"SchemaVersion\": \"1.0\",\"TypeName\": \"Custom:SAPInstanceList\",\"Content\": [" > ${SSMINVSAPINST}

# Get list of running SAP instances
SAPINSTANCELIST=$(${SHA} -function ListInstances -running 2>&1)

# Iterate through the list and append each instance to the inventory file
for I in $(echo ${SAPINSTANCELIST}|sed -E -e 's/\s//g' -e 's/InstInfo:/\n/g')
do
SID=$(echo ${I}|cut -d"-" -f1)
SN=$(echo ${I}|cut -d"-" -f2)
VH=$(echo ${I}|cut -d"-" -f3)
IN=$(${SCNTRL} -nr ${SN} -function GetInstanceProperties |grep INSTANCE_NAME|awk 'BEGIN { FS = "," } ; { print $NF }'|sed -E 's/\s//g')

echo -n -e "{\"SID\": \"${SID}\",\"System Number\": \"${SN}\",\"Virtual hostname\": \"${VH}\",\"Instance Name\": \"${IN}\"}," >> ${SSMINVSAPINST}
done

# Remove the trailing comma and complete the JSON file
sed -i 's/,$//' ${SSMINVSAPINST}
echo -n -e "]}" >> ${SSMINVSAPINST}

A similar approach can be used to get information about SAP kernel version, SAP instance access points, SAP instance processes and SAP ABAP components version.
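As an illustration of the same pattern, the snippet below captures the SAP kernel release of one instance with sapcontrol GetVersionInfo; the instance number and the parsing of the output are assumptions that you would adapt to your own kernel version and output format:

# Assumed example: read the disp+work line of GetVersionInfo and print the kernel release field
KERNEL=$(/usr/sap/hostctrl/exe/sapcontrol -nr 00 -function GetVersionInfo \
  | grep disp+work | head -1 | awk -F', ' '{print $2}')
echo "SAP kernel release: ${KERNEL}"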

This is an example of an inventory file in JSON format generated by script SAPInventory.sh:

{
   "SchemaVersion": "1.0",
   "TypeName": "Custom:SAPInstanceList",
   "Content": [
     {
      "SID": "SC3",
      "System Number":  "02",
      "Virtual hostname":  "sc3gw",
      "Instance Name":  "G02"
     },
     {
      "SID": "SC2",
      "System Number":  "01",
      "Virtual hostname":  "sc2wd",
      "Instance Name":  "W01"
     },
     {
      "SID": "SC1",
      "System Number":  "10",
      "Virtual hostname":  "sc1ers",
      "Instance Name":  "ERS10"
     }
   ]
}

Refer to the documentation about working with custom inventory for additional details about the JSON format used by AWS Systems Manager Inventory.

You could also extend the use-case and capture operating system metrics that may be relevant to your analysis. Suppose that you want to know what SAP systems currently have the most unused file system space in order to prioritize cost optimization efforts. This next sample script (FileSystems.sh) captures the relevant file system metrics. It also uses an EC2 tag value to help aggregate results per SAP System:

#!/usr/bin/sh
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
EC2ID=$(curl http://169.254.169.254/latest/meta-data/instance-id -H "X-aws-ec2-metadata-token: $TOKEN")
REGION=$(curl http://169.254.169.254/latest/dynamic/instance-identity/document | jq .region -r)

# Inventory file: file systems
SSMINVFS="/var/lib/amazon/ssm/${EC2ID}/inventory/custom/FileSystems.json"

# Inventory header
echo -n -e "{\"SchemaVersion\": \"1.0\",\"TypeName\": \"Custom:FileSystems\",\"Content\": " > ${SSMINVFS}

# Capturing a Tag Value (Ex: tag key = SAPSID)
SID=`aws ec2 describe-tags \
--region $REGION \
--filters "Name=resource-id,Values=$EC2ID" \
"Name=key,Values=SAPSID" \
| jq .Tags[0].Value | sed 's/"//g'`

# Capturing the list of file systems, appending the SAP SID
df | tr -s ' ' | sed "s/$/ $SID/" | jq -sR 'split("\n") | .[1:-1] | map(split(" ")) | map({"SID": .[6], "file_system": .[0], "total":.[1], "used": .[2], "available": .[3], "used_percent": .[4], "mounted": .[5]})' >> ${SSMINVFS}

# Complete the JSON file
echo -n -e "}" >> ${SSMINVFS}

Upload these shell scripts to an Amazon S3 bucket. In our example the scripts are stored in an AWS S3 bucket with the prefix /scripts/.
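For example, to upload them with the AWS CLI (the bucket name is a placeholder and should match the one referenced in the Systems Manager document below):

aws s3 cp SAPInventory.sh s3://<bucket name>/scripts/
aws s3 cp FileSystems.sh s3://<bucket name>/scripts/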

Creating AWS Systems Manager Document

Running the custom shell scripts on EC2 instances is done through an AWS Systems Manager document.

1. Open AWS Systems Manager.
2. In the navigation bar go to Shared Resources and choose Documents.
3. Then choose Create document and choose Command or Session.
4. Provide a name for the document and leave other fields unchanged.
5. You can use the following JSON content, but replace the AWS S3 bucket name with one of your own:

{
  "schemaVersion": "2.2",
  "description": "Create SAP SSM Inventory files",
  "mainSteps": [
    {
      "inputs": {
        "timeoutSeconds": "300",
          "runCommand": [
          "mkdir -p /root/tmpscripts",
          "aws s3 cp s3://<bucket name>/scripts/SAPInventory.sh /root/tmpscripts/",
          "aws s3 cp s3://<bucket name>/scripts/FileSystems.sh /root/tmpscripts/",
          "sudo dos2unix /root/tmpscripts/* ",
          "sudo chmod 755 /root/tmpscripts/* ",
          "/root/tmpscripts/SAPInventory.sh",
          "/root/tmpscripts/FileSystems.sh",
          "rm -rf /root/tmpscripts "
        ]
      },
      "name": "runCommands",
      "action": "aws:runShellScript"
    }
  ]
}

This is how the document looks in AWS Systems Manager, Documents:

Systems Manager document content
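If you prefer to create the document with the AWS CLI instead of the console, a sketch is shown below; the document name and the local file holding the JSON content are placeholders:

aws ssm create-document \
    --name "SAP-Inventory-Collection" \
    --document-type "Command" \
    --document-format JSON \
    --content file://sap-inventory-document.json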

Defining the Schedule-based Amazon EventBridge Rule

The Amazon EventBridge Rule will run the AWS Systems Manager Document periodically. The AWS Systems Manager Document, in turn, will run the data collection shell scripts.

1. Open the Amazon EventBridge in the AWS Console.
2. Select Rules, Create rule.
3. Provide a Name and Description for the rule.
4. In the Define pattern section select Schedule and type the Cron expression to invoke targets.

Use the following image as reference to create the schedule for this rule, for example every 30 minutes:

EventBridge rule pattern

5. In the Select targets section select Systems Manager Run Command as the Target.
6. For Document select the AWS Systems Manager document you created in the previous section.
7. As Target key type tag:sap:inventory.
8. As the Target value(s) type yes.
9. Finally choose Create. The rule will be triggered according to the defined schedule.

Use the following image as reference to select the target for this rule:

EventBridge rule targets

Displaying Inventory Data

To look at the custom inventory data:

1. Go to AWS Systems Manager in the AWS Console
2. Navigate to Node Management, Fleet Manager.
3. From the list of Managed nodes, choose one of the instances where SAP inventory was collected.
4. Choose the Inventory tab.
5. Open the drop down list Inventory type and choose Custom:SAPInstanceList.

The following image shows an example of the custom inventory data for one of the EC2 instances in our SAP landscape:

Systems Manager inventory example

Preparing the AWS Systems Manager Inventory data

Before the inventory data can be queried using Amazon Athena, a data source must be prepared. This consists of several steps, but AWS Systems Manager simplifies the process as described next.

1. Open AWS System Manager in the AWS Console.
2. Navigate to Node Management and choose Inventory.
3. Select the Detailed View tab.
4. Choose Create a resource data sync.
5. Provide a name for the data sync, the name of an Amazon S3 bucket to store the inventory data and a prefix to identify the data.

Use the following image as reference to create the Resource data sync:

Create a resource data sync

6. Wait a few minutes and return to AWS Systems Manager, Inventory, Detailed View.
7. The drop-down list under Resource data syncs has the new sync.
8. Select the new sync, in this case SAP-inventory, and choose Run Advanced Queries

SAP-inventory resource data sync

This will take you to Amazon Athena where the Data source and Database corresponding to AWS Systems Manager Inventory are preselected. The following image shows the table corresponding to running SAP instances (for example, custom_sapinstancelist):

Athena table

Note that all the objects present in the Amazon S3 bucket at the time of the Resource data sync creation will be catalogued. This may result in a larger set of tables in addition to those of Systems Manager Inventory.

Querying the Inventory with Amazon Athena

If you are using Amazon Athena for the first time, specify an Amazon S3 bucket to store query results.

1. Choose Settings in the main screen of Amazon Athena.
2. Specify the Amazon S3 bucket (and prefix) to store query results:

Athena query results location

To Preview the data from one of the SAP inventory tables, for example custom_sapinstancelist:

1. Click on the ellipsis menu button next to the table name.
2. Choose Preview table.
3. This will add a new tab with the corresponding SQL and results at the bottom.

The following image shows example results:

query sapinstancelist - results

Creating Custom Athena Queries

Now that the Systems Manager Inventory is available to Amazon Athena, it is possible to run more complex queries. For example, the following query combines data from the standard AWS Systems Manager Inventory with our custom SAP inventory to get the version of the C++ standard library in our SAP systems:

SELECT a.name, a.version, a.packageid, a.publisher, b.sid, b."instance name", a.resourceid
FROM "myxferbucket-us-west-2-database"."aws_application" a, "myxferbucket-us-west-2-database"."custom_sapinstancelist" b
WHERE a.resourceid=b.resourceid
AND a.name='libstdc++';

Custom query example

If you also included file system statistics when you captured your AWS Systems Manager Inventory data you could now run a query like the one shown next to retrieve the top ten SAP systems with the most available space in file systems used for Oracle, DB2 or HANA data. This could reveal potential candidates for storage cost optimization, for example:

SELECT sid as "SAP System", sum(cast(available as bigint))/1024/1024 as "Available Data FS Space (GB)"
FROM   custom_filesystems 
WHERE  (mounted like '/oracle/___/sapdata%') 
OR     (mounted like '/db2/___/sapdata%') 
OR     (mounted = '/hana/data')
GROUP BY sid
ORDER BY 2 desc
LIMIT 10;

Filesystem space query example
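These queries can also be submitted programmatically. As a sketch, the AWS CLI call below runs a simplified version of the file system query; the Glue database name and results bucket are placeholders from this walk through:

aws athena start-query-execution \
    --query-string "SELECT sid, sum(cast(available as bigint))/1024/1024 FROM custom_filesystems GROUP BY sid ORDER BY 2 desc LIMIT 10" \
    --query-execution-context Database=<your-ssm-inventory-database> \
    --result-configuration OutputLocation=s3://<your-results-bucket>/athena/

# Fetch the results once the query has completed (use the QueryExecutionId returned above)
aws athena get-query-results --query-execution-id <query-execution-id>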

Cost

AWS services provide cost-effective solutions to respond to requirements like the ones described in this blog. The following table provides cost estimates for each service used as part of the scenarios presented in this blog. For these estimates, we assumed:

  • AWS Region utilized: us-east-1 (N. Virginia)
  • The SAP landscape was composed of 2000 SAP servers (EC2 instances)
  • The captured metrics were queried 100 times a day, on average
  • Both SAP and file system custom metrics were part of the custom Systems Manager Inventory data. In addition, all standard Systems Manager Inventory data for Linux was also included.
  • Amazon EventBridge – No charges for standard EventBridge events. Estimated cost: $0. (Amazon CloudWatch pricing)
  • AWS Systems Manager – No charges for using AWS Systems Manager Inventory and Run Command. Estimated cost: $0. (AWS Systems Manager pricing)
  • Amazon S3 – $0.023/GB per month; we estimated the size of the inventory data for 2000 SAP servers to be around 2 GB. Estimated cost: $0.05. (Amazon S3 pricing)
  • Amazon Athena – We estimated 100 queries a day, based on the minimum 10 MB of scanned data per query at $5/TB of scanned data (all our queries scanned significantly less than the minimum 10 MB). Estimated cost: $0.16. (Amazon Athena pricing)
  • AWS Glue – Our catalog was well under the 1 million objects free tier. We estimated the cost of hourly crawler runs based on the minimum 10-minute DPU charge, at $0.44 per DPU-hour (the tested crawler runs lasted less than the minimum 10 minutes). Estimated cost: $53.00. (AWS Glue pricing)
  • Total estimated monthly cost: $53.21
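As a rough sanity check of the AWS Glue line above (assuming one DPU billed at the 10-minute minimum for each hourly crawler run over a 30-day month, which is an assumption inferred from the estimate):

# 24 runs/day x 30 days x (10/60) hour x $0.44 per DPU-hour ≈ $52.8 per month
echo "0.44 * (10/60) * 24 * 30" | bc -l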

Cleanup

To remove the configuration of services and objects created in the walk through section, we suggest following these steps:

  1. Delete the AWS Systems Manager inventory resource sync
  2. Delete the AWS Glue Crawler
  3. Delete the AWS Glue Database
  4. Remove the Amazon EventBridge Rule
  5. Remove the AWS Systems Manager document
  6. Delete the Amazon Simple Storage Service (AWS S3) bucket
  7. Delete the JSON inventory files from the OS
  8. Remove the inventory definitions using AWS CLI

Conclusion

This blog presents just a few ideas on how you can leverage AWS services to enhance visibility over your SAP systems inventory. With a few configuration steps you can have Amazon EventBridge and AWS Systems Manager working together to automatically gather, store, and aggregate SAP system information. Then you can use Amazon Athena and standard SQL queries to quickly access this information. Furthermore, this can be achieved without deploying additional infrastructure.

The examples provided in this blog can be easily extended to:

  • Use PowerShell commands to capture custom inventory data for Windows workloads
  • Include database metrics in your custom inventory data, using a combination of shell scripting and database command line tools
  • Enhance your custom inventory data by using additional components of your AWS tagging strategy to enable more advanced query scenarios

Furthermore, observability of your SAP environment, especially those where SAP HANA is a core component, can be enhanced by Amazon CloudWatch Application Insights. To setup monitoring for your SAP HANA databases today, refer to the Amazon CloudWatch Application Insights documentation for detailed tutorials on how to get started.

These ideas can be leveraged to support different aspects of your SAP-on-AWS environment. Operational support, audit and compliance, capacity planning, and cost optimization are just a few examples. We are excited to see our customers build upon these ideas. We encourage you to log on to the AWS Console and start exploring the services we discussed in this blog.

If you are looking for expert guidance and project support as you move your SAP systems to AWS, the AWS Professional Services Global SAP Specialty Practice can help. Increasingly, SAP on AWS customers—including CHS and Phillips 66—are investing in engagements with our team to accelerate their SAP transformation. Please contact our AWS Professional Services team if you would like to learn more about how we can help.

To learn why more than 5,000 active customers run SAP on AWS, visit aws.amazon.com/sap

Automate SAP HANA database restore using AWS Systems Manager


Feed: AWS for SAP.
Author: Ajay Kande.

Introduction:

For many customers, SAP system copies are one of the routine maintenance activities. SAP system copies are a defined sequence of steps to copy SAP production data to non-production environments. In this blog post, we discuss automating the SAP HANA database restore.

Amazon Web Services (AWS) Backint Agent for SAP HANA is an SAP-certified backup and restore solution for SAP HANA workloads running on Amazon Elastic Compute Cloud (Amazon EC2) instances. AWS Backint Agent backs up your SAP HANA database to Amazon Simple Storage Service (Amazon S3) and restores it using SAP management tools, such as SAP HANA Cockpit, SAP HANA Studio, or SQL commands. AWS Backint Agent supports full, incremental, differential, and log backup of SAP HANA databases and catalogs to Amazon S3. There is no cost to use AWS Backint Agent. You only pay for the underlying AWS services that you use.

There are several steps and manual efforts involved in restoring an SAP HANA database. This blog is aimed at reducing the operational overhead for SAP support staff (such as the SAP Basis team and database administrators) by automating the restore of an SAP HANA database using an AWS Systems Manager (SSM) document. You can deploy this SSM document in less than 5 minutes using an AWS CloudFormation template. The SSM document is then run to restore a HANA database (scale-up or scale-out) from a backup stored in an Amazon S3 bucket created by AWS Backint. To help customers, we are open sourcing the capability to restore the HANA database here.

Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and automate operational tasks across your AWS resources. For operators with a system administration background, this should be easy to configure using a combination of predefined automation playbooks, RunCommand modules, which allow writing simple bash scripts, and the occasional decision step.

Overview:

The following diagram explains the high-level steps involved in the automation of SAP HANA database restore using SSM documents. Backups are stored in Amazon S3 using AWS Backint Agent for SAP. The administrator initiates restore using Systems Manager (SSM) document. AWS Systems Manager runs restore activities on the target SAP HANA database system. AWS Systems Manager records the logs in Amazon CloudWatch.

Steps involved in the automation of SAP HANA database restore using SSM documents

Prerequisites:

  • Set up tags to enable and identify the instances as shown in the following tables. Note: using a prefix like “ssmsap:” clearly identifies the purpose of the tags and reduces the likelihood of unrelated changes.

Source HANA Master Node:

Key              Value
ssmsap:enabled   TRUE
ssmsap:role      HANAMASTER
ssmsap:sid       <<Source database SID>>

Target HANA Master Node:

Key              Value
ssmsap:enabled   TRUE
ssmsap:role      HANAMASTER
ssmsap:sid       <<Target database SID>>
  • HANA Keys
    Create an hdbuserstore key on the target HANA instance with the root user as shown. The SSM document will use this key to execute the restore steps. Note: In this example, we used the SYSTEM user. You may create a custom user with restore authorizations and create the key for that user instead of the SYSTEM user.

Creating hdbuserstore key
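
As an illustration only, the key could be created along these lines; the key name, hostname, instance number, and password are placeholders (the key name is the value you will later pass as the TARGETDBSYSTEMKEY parameter), and the SYSTEMDB SQL port of 3<instance number>13 is an assumption you should verify for your landscape:

# Run as root on the target HANA instance; switch to the <sid>adm user and create the key
su - <sid>adm -c "hdbuserstore SET <key-name> <target-hostname>:3<instance-no>13 SYSTEM <password>"

# Verify the stored key
su - <sid>adm -c "hdbuserstore LIST <key-name>"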

The Solution:

The sequence of steps performed as part of this solution is shown in the following diagram. The solution can also be easily customized by removing or replacing specific steps as per your requirements.

Detailed steps that are performed as part of this solution

Each step is designed to perform a single action, allowing the elements to be built, chained together, and reused, while also giving improved visibility and control. (This becomes a key element of the framework later on.) We chose RunCommand and bash scripts because this aligns with the “command line” usage that SAP administrators are familiar with. We also tried to minimize the configuration and input required, using queries on the host to identify what was running and to derive the parameters required for issuing commands. To tie the execution together and identify instances, SSM automation document parameters, outputs, and instance tags were used.

Let’s see what each step in this solution does:

Step 1: Export backup root keys (optional)

This is an optional step, required only if the source system backups are protected by backup root keys. Using the hdbnsutil command, the backup root keys are exported to a local file system, and then copied to the target instance or uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. In this example, we are uploading to an Amazon S3 bucket which is encrypted.
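
The upload portion of this step could look like the following sketch; the key file name follows the {{ SOURCESID }}KEY.rkb convention used in the later steps, the bucket is a placeholder, and using KMS server-side encryption here is an assumption rather than a requirement of the solution:

# Copy the exported root key backup from the local file system to an encrypted S3 bucket
aws s3 cp /hana/shared/<SOURCESID>KEY.rkb s3://<your-root-key-bucket>/ --sse aws:kms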

All of the following steps are performed on the target SAP HANA instance that is being restored.

Step 2: Suspend log backups

In this step, we suspend log backups on the target HANA database using the following command:

/usr/sap/{{ TARGETSID }}/HDB{{ TARGETDBSYSTEMNO }}/exe/hdbsql -U {{ TARGETDBSYSTEMKEY }} -j "ALTER SYSTEM ALTER CONFIGURATION (''global.ini'', ''SYSTEM'') SET (''persistence'', ''enable_auto_log_backup'') = ''no''"

Step 3: Stop target tenant database

Stop the target HANA database before proceeding with restore activities using the following command:

/usr/sap/{{ TARGETSID }}/HDB{{ TARGETDBSYSTEMNO }}/exe/hdbsql -U {{ TARGETDBSYSTEMKEY }} -j "ALTER SYSTEM STOP DATABASE {{ TARGETSID }}"

Step 4: Validate and import backup root keys (optional)

This is an optional step, required only if the source system backups are protected by backup root keys. Copy the backup root keys, which were exported in step 1, from the Amazon S3 bucket, then validate and import them as shown below.

su - ${SIDLower}adm -c "/usr/sap/{{ TARGETSID }}/SYS/exe/hdb/hdbnsutil -validateRootKeysBackup /hana/shared/{{ SOURCESID }}KEY.rkb --password=${rootkeypassword}"

su - ${SIDLower}adm -c "/usr/sap/{{ TARGETSID }}/SYS/exe/hdb/hdbnsutil -recoverRootKeys /hana/shared/{{ SOURCESID }}KEY.rkb --database_name={{ TARGETSID }} --password=${rootkeypassword}"

The root key password mentioned in the above command is stored in AWS Systems Manager Parameter Store and retrieved as shown below:

rootkeypassword=`aws --region={{ AWSREGION }} ssm get-parameter --name "{{ SOURCESID }}-ROOT-KEY-PASSWORD" --with-decryption --output text --query Parameter.Value`
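
For completeness, the corresponding parameter could be created ahead of time with a command like the following sketch; the parameter name follows the naming convention used above, and the value shown is a placeholder:

# Store the source system's root key password as an encrypted SecureString parameter
aws ssm put-parameter --name "<SOURCESID>-ROOT-KEY-PASSWORD" --type SecureString --value "<root key password>"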

Step 5: Restore target tenant database

This is the step that restores the target tenant database using the source system backups stored in the Amazon S3 bucket. Please note that the target database instance should have Backint configured, and the IAM role assigned to the instance must have access to the Amazon S3 bucket where the source system backups are stored.

Option 1: Restore with BACKUP_ID

/usr/sap/{{ TARGETSID }}/HDB{{ TARGETDBSYSTEMNO }}/exe/hdbsql -U {{ TARGETDBSYSTEMKEY }} -j "RECOVER DATA FOR {{ TARGETSID }} USING SOURCE '{{ SOURCESID }}@{{ SOURCESID }}' USING BACKUP_ID {{ BACKUPID}} USING CATALOG BACKINT USING DATA PATH ('/usr/sap/{{ SOURCESID }}/SYS/global/hdb/backint/DB_{{ SOURCESID }}/') CLEAR LOG"

Option 2: Restore with BACKUP_ID and Log Backups

/usr/sap/{{ TARGETSID }}/HDB{{ TARGETDBSYSTEMNO }}/exe/hdbsql -U {{ TARGETDBSYSTEMKEY }} -j " RECOVER DATABASE FOR {{ TARGETSID }} UNTIL TIMESTAMP '{{ DATEANDTIME }}' CLEAR LOG USING SOURCE '{{ SOURCESID }}@{{ SOURCESID }}' USING CATALOG BACKINT USING LOG PATH ('/usr/sap/{{ SOURCESID }}/SYS/global/hdb/backint/DB_{{ SOURCESID }}') USING DATA PATH ('/usr/sap/{{ SOURCESID }}/SYS/global/hdb/backint/DB_{{ SOURCESID }}/') USING BACKUP_ID {{ BACKUPID }} CHECK ACCESS USING BACKINT "

Step 6: Restore backup root key (optional)

This is an optional step, required only if the source system backups are protected by backup root keys. After the restore succeeds, set the password for the backup root keys on the target database using the command below:

/usr/sap/{{ TARGETSID }}/HDB{{ TARGETDBSYSTEMNO }}/exe/hdbsql -U {{ TARGETDBSYSTEMKEY }} -j "ALTER SYSTEM SET ENCRYPTION ROOT KEYS BACKUP PASSWORD "${rootkeypassword}""

The root key password mentioned in the above command is stored in AWS Systems Manager Parameter Store and retrieved as shown below:

rootkeypassword=`aws --region={{ AWSREGION }} ssm get-parameter --name "{{ TARGETSID }}-ROOT-KEY-PASSWORD" --with-decryption --output text --query Parameter.Value`

Step 7: Resume log backups

As the final step, enable log backups on the target tenant database:

/usr/sap/{{ TARGETSID }}/HDB{{ TARGETDBSYSTEMNO }}/exe/hdbsql -U {{ TARGETDBSYSTEMKEY }} -j "ALTER SYSTEM ALTER CONFIGURATION (''global.ini'', ''SYSTEM'') SET (''persistence'', ''enable_auto_log_backup'') = ''yes''"

If there are any additional restore activities, this solution can easily be customized by adding steps as shown above. Once you have deployed the document, you can review the markup text descriptions to understand the steps in more detail.

Execution:

In CloudFormation, select Create Stack and populate the required parameters, or leave them as the defaults, ensuring that they are unique in your account. Select Next, then under Configure stack options select Next, review the inputs, and select Create Stack. Note: If you are redeploying this template, consider deleting old stacks.

Specify stack details and create stack
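
If you prefer the CLI, a deployment along these lines could be used instead; the stack name and template file name here are placeholders, not names defined by the solution:

# Create the stack from the downloaded template (assumed file and stack names)
aws cloudformation create-stack \
  --stack-name <your-stack-name> \
  --template-body file://<template-file>.yaml \
  --capabilities CAPABILITY_NAMED_IAM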

Usage:

Under Systems Manager > Documents > “Owned by me”, select the document with the name you specified and click “Execute automation”. Familiarize yourself with the document by reading through the document and step descriptions.

Provide the following input parameters to execute the restore:

Option 1: Restore with BACKUP_ID

Restore parameters with BACKUP_ID

Option 2: Restore with BACKUP_ID and Log Backups

Restore parameters with BACKUP_ID and Log Backups

Click “Execute” and you can see the execution status on the next screen. The start time and end time of each step are displayed as shown below.

Execution status for Restore with BACKUP_ID

Execution status for Restore with BACKUP_ID and Log Backups

Conclusion:

In this blog post, you learned about automating SAP HANA database restore using an AWS Systems Manager document. You can use this procedure to reduce your system refresh time and manual effort.

For more information, please refer to:
AWS Backint Agent
AWS Systems Manager

We look forward to seeing what our customers build. If you have questions or would like to know more about SAP on AWS innovations, contact the SAP on AWS team or visit aws.amazon.com/sap to learn more. Start building on AWS today and have fun.

SAP Disaster Recovery Solution Using CloudEndure: Part 1 Failover

$
0
0

Feed: AWS for SAP.
Author: Anjani Singh.

Disasters due to natural calamities, application failures, or service failures not only cause downtime for business applications but also cause data loss and revenue impact. To mitigate the impact of such scenarios, Disaster Recovery (DR) planning is critical for organizations running mission-critical and business-critical applications such as SAP.

In this blog, we will walk through how organizations can leverage CloudEndure as a Disaster Recovery solution for SAP applications and review the aspects that are applicable to SAP.

CloudEndure Disaster Recovery solution enables organizations to quickly and easily shift their disaster recovery strategy to AWS from existing physical or virtual data centers, private clouds, or other public clouds, in addition to supporting cross-region / cross-AZ disaster recovery in AWS. CloudEndure Disaster Recovery minimizes downtime and data loss by providing fast, reliable recovery of physical, virtual, and cloud-based servers into AWS Cloud, including public regions, AWS GovCloud (US), and AWS Outposts. You can use CloudEndure Disaster Recovery to protect your most critical databases, including Oracle, MySQL, and Microsoft SQL Server, as well as enterprise applications such as SAP.

CloudEndure Disaster Recovery continuously replicates your machines (including operating system, system state configuration, databases, applications, and files) into a low-cost staging area in your target AWS account and preferred Region. In the case of a disaster, you can instruct CloudEndure Disaster Recovery to automatically launch thousands of your machines in their fully provisioned state in minutes. By replicating your machines into a low-cost staging area while still being able to launch fully provisioned machines within minutes, CloudEndure Disaster Recovery can significantly reduce the cost of your disaster recovery infrastructure. The two key concepts when it comes to DR planning are Recovery Time Objective (RTO) and Recovery Point Objective (RPO). RTO is the maximum period you want your systems to be unavailable due to an outage. RPO refers to the point of data processing you wish to recover to if there is a disaster. The following diagram illustrates the correlation of RTO and RPO:

Chart showing the difference between RPO and RTO

Solution Overview

CloudEndure Disaster Recovery minimizes downtime and data loss by providing fast, reliable recovery of physical, virtual, and cloud-based servers into AWS in the event of IT disruptions.

The CloudEndure Agent is installed on servers running SAP workloads to connect to the CloudEndure console and replicate those servers (including operating system, system state configuration, databases, applications, and files) into a low-cost staging area in the target DR AWS account, in the DR AWS Region or DR Availability Zone. In the case of a disaster, you can launch the replicated servers via the CloudEndure console to a fully provisioned state in minutes, and then register them in DNS to continue operations. The following diagram is a reference production architecture for SAP application DR from a primary AWS Region to a DR AWS Region.

Architecture diagram showing Primary and Disaster Recovery Regions

Pre-Requisites

Customers have the flexibility to implement Disaster Recovery in an AWS Region different from where the production workload runs, or in another Availability Zone, depending on their data sovereignty and compliance requirements. In this blog, we will cover using a different Region for Disaster Recovery, but the steps are similar when using another Availability Zone.

  1. Customer has implemented SAP systems in the primary region and identified the DR AWS region
  2. Customer has CloudEndure license
  3. Customer has identified the RTO and RPO for application servers, and database
  4. Customer has implemented replication for shared file systems (Amazon EFS) using AWS DataSync, rsync, or another appropriate tool from the primary to the DR AWS Region
  5. Customer has identified the process to update the DNS during DR test
  6. Customer handles Database native replication to sync DR Database
  7. Customer has at least 50% of the production level Amazon EC2 instance capacity reserved in DR AWS Region in case of DR failover
  8. Create AWS IAM users and policies

Steps

  1. Register the CloudEndure account for the customer in AWS Marketplace
  2. Create a Disaster Recovery project in the CloudEndure console
  3. On the Setup & Info page, under AWS Credentials, enter the AWS access key ID and secret access key of the IAM user created in the target account, and choose SAVE
  4. Setup the IAM role which enables CloudEndure to create target EC2 instances and copy EBS volumes
  5. Setup the Blueprint and Replication settings
    • Blueprint
      1. Choose the Subnet
      2. Choose to use new private IP for DR instance
      3. Choose the IAM role
      4. Choose the source systems disks to replicate
    • Replication Settings
      1. Choose the replication server instance type
      2. Choose the DR Amazon Virtual Private Cloud (VPC)
      3. Choose the DR security group
      4. Choose the DR staging area disks
  6. Save the settings
  7. CloudEndure Agent installation instructions are in the “How to Add Machines” section of the CloudEndure Console. Download the CloudEndure Agent on one Amazon EC2 instance running SAP and share it with the others for installation, using the following command line:

wget -O ./installer_linux.py https://console.cloudendure.com/installer_linux.py

  8. Install the CloudEndure Agent on each source SAP instance using the project installation token as shown below. The installation token is unique to the project and is available in the CloudEndure console.

sudo python ./installer_linux.py -t <Installation Token> --no-prompt

Screenshot of how to add machines within the CloudEndure console

  9. To check the status of the CloudEndure Agent on the source machines, execute the following command:

ps -ef | grep cloudendure | grep -v grep | grep -v bash | wc -l

If the output of the above command shows 5 CloudEndure processes, the Agent is fully running; fewer than 5 indicates the Agent is not fully operational. The Agent log can be found in the file /var/lib/cloudendure/agent.log.0.

  10. Enable ports 443 and 1500 to establish communication between the CloudEndure Agent and the Replication Server. The table below shows the ports and their purpose for reference.
Port Number  Protocol    Source                      Destination                  Description
443          HTTPS       Source Machines (CE Agent)  CloudEndure Service Manager  Agent download and upgrade; display replication status; capture source machine packages and metrics
443          HTTPS       Replication Server          CloudEndure Service Manager  Display replication status and capture replication server metrics
1500         Custom TCP  Source Machines             Replication Server           Encrypted data transfer
  11. Once the Agent is installed and the source machine is registered in the CloudEndure Console, you’ll see the instance appear in the CloudEndure Console while the initial data replication task starts
  12. Once the CloudEndure replication is complete, the instance is ready to be launched at the DR site.

Launch the SAP instance in the DR site

For illustration purposes, the following steps launch one Amazon EC2 server running an SAP workload on the DR site, replicated via CloudEndure from the primary site. The same procedure applies for launching additional servers.

  1. Select the instance to be launched and click on the “Launch Target Machine”.
  2. Select “Recovery Mode” to fail over the instance from the primary to the DR site. Testing the DR solution is standard practice to make sure the solution works, and should be part of periodic Disaster Recovery tests.
    • To perform a DR test, select the “Test Mode”

Screenshot of the Step where we have the Launch Machines options

Screenshot of the Confirmation on the Launch Machine

  3. Click Next
  4. Choose the recovery point.
    Screenshot showing the option to choose recovery point
  5. Click on “Continue with Launch”
  6. The SAP DR instance is launched in the DR AWS Region
    Screenshot of the Console showing the Instance launched in the DR Site
  7. Connect to the DR instance either by using Session Manager, SSH through a bastion host, or direct access via the corporate network.
    Screenshot showing the SSM Option to login to the EC2 instance
  8. CloudEndure performs the block-level replication of the source EC2 instance EBS volumes, which host the operating system and file systems. The shared file systems such as /sapmnt and /usr/sap/trans, which were created using Amazon EFS, are not part of the CloudEndure replication; they are replicated to the DR AWS Region using AWS DataSync or rsync as stated in prerequisite 4. The replicated DR EFS file systems for /sapmnt and /usr/sap/trans are mounted on the DR system
  9. Start the SAP application on each server in the DR environment using the following commands as the <sid>adm user. Confirm that the system returns “OK” in response to each command.

sapcontrol -nr <Inst No> -function StartService <SID>

sapcontrol -nr <Inst No> -function Start
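
As an optional verification step (a small addition beyond the original runbook), you can also confirm that all SAP processes report GREEN:

sapcontrol -nr <Inst No> -function GetProcessList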

Register the new DR IP in the DNS. Log in to the SAP instance using SAPGUI to validate that SAP is up and running in the DR AWS Region.

Cost Estimation

For cost estimation, CloudEndure Disaster Recovery pricing is $0.028 per hour per server, or an estimated $20 per month per server. As a reference point, the cost for 50 instances replicating to the DR AWS Region would be:

SAP Instances   Estimated Cost (Month)
50              $1,000.00
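
As a rough check of that figure (assuming roughly 730 hours in a month): $0.028 per hour × 730 hours ≈ $20.44 per server per month, and 50 servers × $20 ≈ $1,000 per month. Note that this estimate covers the CloudEndure licensing only; underlying AWS resources used for the staging area are billed separately.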

Conclusion: We saw how CloudEndure can be leveraged as a DR solution for SAP systems to fail over from the Primary to the DR AWS Region. CloudEndure is an effective and cost-optimized solution for critical and non-critical applications. In the next blog, we will see how we can fail back from the DR to the primary Region.

To learn why more than 5,000 customers run SAP on AWS, visit the SAP on AWS page.

CloudEndure Reference Documentation:

https://docs.cloudendure.com

SAP Disaster Recovery Solution Using CloudEndure: Part 2 Failback

$
0
0

Feed: AWS for SAP.
Author: Anjani Singh.

In the previous blog, we covered failover from Amazon Web Services (AWS) Primary region to AWS Disaster Recovery Region with CloudEndure Disaster Recovery. In this blog post, we will walk you through a failback of your productive workload from the DR region to the primary Region.

There are a variety of reasons for failing back to your primary Region as soon as possible. These include making effective use of previously purchased usage commitments (Reserved Instances or Savings Plans), reducing network latency to end users and other linked workloads, and reducing data transfer costs.

CloudEndure allows you to prepare for failback by reversing the direction of data replication from the target machine back to the source machine. This blog post will cover the failback procedure from the AWS DR Region to the AWS Primary Region using CloudEndure Disaster Recovery.

Solution Overview

In this scenario, the failover from Primary Region to AWS DR Region is performed using CloudEndure as documented in SAP DR Solution Part 1.

Now that the SAP instances are running in the AWS DR Region with connections to all the interfaces, SAP and non-SAP systems and services are back up and running as normal.

CloudEndure provides the option to fail back the SAP instances by replicating the instance at block level to Primary region. Using CloudEndure, customers don’t have to perform backup and restore of applications from DR to Primary Region for the failback to Primary. When the CloudEndure replication to Primary Region is complete, the instances are updated in DNS to match the Primary Region IPs to resume services.

Note: One of the criteria to observe when implementing DR is not to share resources between the Primary and DR AWS Regions.

This picture depicts the high-level architecture of the CloudEndure setup and the replication flow from the DR Region to the Primary Region during failback. The CloudEndure Agents on the source will continuously replicate the data to the Primary Region.

Pre-Requisites

CloudEndure provides a fully orchestrated failback method within the CloudEndure DR console. Customers need to ensure that the following prerequisites are met when failing back to the AWS Primary Region; this will help ensure an efficient overall failback.

1. Customer has failed over the SAP instances from the Primary to the AWS DR Region

2. Customer has identified and scheduled a time to perform the failback that will not disrupt the production workload SLAs

3. Customer has implemented replication for the database and shared file systems (Amazon Elastic File System) using AWS DataSync, rsync, or another supported replication solution from the DR to the primary Region

4. Customer has identified the process to update DNS during the DR test

5. Customer has the instances reserved in the Primary Region

Pre-Requisites Steps

First, we will prepare CloudEndure to perform the failback of the failed-over Amazon EC2 instances by configuring the failback replication.

1. Use the Disaster Recovery project in the CloudEndure console that was used to fail over from the primary to the AWS DR Region

2. On the Setup & Info page, check that the AWS Credentials (the AWS access key ID and secret access key of the IAM user) are created in the target account, and choose SAVE

3. Check the Blueprint and Replication settings

  a. Blueprint

    i. Choose the Subnet in the primary Region
    ii. Choose to use a new private IP for the primary instance
    iii. Choose the AWS Identity and Access Management (IAM) role in the primary Region to assign to the Amazon EC2 instances
    iv. Choose the source systems disks to replicate

  b. Replication Settings

    i. Choose the replication server instance type
    ii. Choose the primary Region Amazon VPC
    iii. Choose the primary instance security group
    iv. Choose the staging area disks

4. Save the settings

5. Enable ports 443 and 1500 in the source and target security groups.

You can establish communication between the Source machines and the CloudEndure Service Manager over TCP port 443, either via direct communication between the Source machines and the Service Manager, or via indirect communication using a proxy. For more details, refer to the CloudEndure Documentation.

6. Perform a telnet check on port 1500 from the DR instance to the primary Region (see the connectivity sketch after this list)

7. Use a private connection, such as an IPsec VPN or AWS Direct Connect link, by checking the “Use Private IP” option. For more details, refer to the CloudEndure Documentation.
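
A minimal connectivity check for step 6 could look like the following; the replication server IP is a placeholder, and nc is shown simply as an alternative to telnet:

# From the DR instance, verify that TCP port 1500 on the replication server in the primary Region is reachable
nc -zv <replication-server-private-ip> 1500

# Or, using telnet
telnet <replication-server-private-ip> 1500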

Failback Steps

Now that the failback replication has been configured, we can start the replication and then fail back the failed-over Amazon EC2 instances.

1. Select the instance to be replicated from the AWS DR Region to the Primary Region in the CloudEndure console

2. Start the replication

Screenshot Shows the Replication Progress on CE Console

3. Once the replication is complete, stop any user/batch activity on the source machine and then launch the target machine

Tip: Look at user sessions in SM04, AL08, or SM66 to check for any active user activity. Check for any active batch jobs in SM37.

This picture shows the console prompting to launch the target system at the Primary Site during failback

4. Choose the recovery point and click on “Continue with Launch”.

Tip: This step defines the RPO strategy of the SAP Disaster Recovery

Screenshot shows to Choose the Recovery Point to Launch the instance

5. The machine launch status can be checked under the Job link

The Screenshot shows the Launch Progress

6. Once the machine is launched, the target information will show the Amazon EC2 instance ID

Screenshot Shows the Instance Information Launched on the CE Console

7. The AWS Management Console will show the launched target instance under Amazon EC2. We can connect to the failback instances either by using Session Manager (SSM), SSH through a bastion host, or direct access via the corporate network.

The Screenshot shows the Instance status of EC2 on the AWS Management Console

8. Mount /sapmnt and /usr/sap/trans on the Amazon EC2 instances in the primary Region.

Tip: CloudEndure performs block-level replication of the source Amazon EC2 instance’s Amazon Elastic Block Store volumes, which host the operating system and file systems. The Amazon Elastic File System (Amazon EFS) shared file systems such as /sapmnt and /usr/sap/trans are not part of the CloudEndure replication. The Amazon EFS file systems at the DR site are replicated back to the primary Region using AWS DataSync or rsync, as stated in prerequisite 3.

9. Start the SAP application on each replicated Amazon EC2 instance in the failback environment using the following commands as the <sid>adm user. Confirm that the system returns “OK” in response to each command.

sapcontrol -nr <Instance number> -function StartService

sapcontrol -nr <Instance number> -function Start

10. Check the SAP application services and validate that the application is up and running (message server, enqueue, dispatcher). The following command can be used to check the status:

sapcontrol -nr <Instance number> -function GetProcessList

Conclusion

Using CloudEndure as a Disaster Recovery solution for SAP works both ways: it fails over SAP systems from the Primary to the DR AWS Region, and once the Primary site has been recovered, CloudEndure can also be used to fail back to the Primary site by reversing the direction of data replication from the target machine back to the source machine in a fully orchestrated way.

For additional details, consult the CloudEndure Reference Documentation. 

To learn why 5,000+ SAP customers trust AWS to get more value out of their SAP investments, visit the SAP on AWS page.

Extend your SAP business processes using Amazon AppFlow and AWS native services

$
0
0

Feed: AWS for SAP.
Author: jcurrid.

Our customers increasingly want to combine their SAP and non-SAP data in a single data lake and analytics solution to bridge siloed data and drive deeper business insights. Last year, we launched the Amazon AppFlow SAP OData Connector to make it easier for customers to get value out of SAP data with AWS services. We outlined how to get started with AppFlow and SAP and some of the benefits in a previous blog.

Following this launch, customers told us they also want to use the AWS data platform to enhance their SAP business processes by enriching data using higher-level services such as Artificial Intelligence or Machine Learning, and then feeding the data back to their SAP applications. In January, we delivered against that request by enabling bi-directional data flows between SAP applications and AWS data lake, analytics, and AI/ML services in just a few clicks.

Today, I will show you how to set up a bi-directional data flow in just a few minutes.

The write back feature supports Amazon S3 as a source and writes the data to an SAP system at the OData layer. You can also create deep entities with the SAP OData deep insert feature. Customers also have the option to further protect the data flowing between AWS and SAP systems, with optional AWS PrivateLink security.

The new functionality can enable a variety of use cases for our customers, including integration from data sources such as Amazon Redshift and AWS Lambda, or enriched business data from AWS AI/ML services such as Amazon SageMaker, Amazon Rekognition, Amazon Textract, or Amazon Lookout for Vision.

Amazon Appflow SAP Write Back from S3

In the next section we will show you how to easily get up and running with an example of what you can achieve with Amazon AppFlow, SAP and AWS native services.

First, I will recap some of the basics for the SAP OData Connector before going into a detailed guide on setting up the write back feature and an example end to end use case with AppFlow, SAP and native services.

Amazon AppFlow offers a significant cost saving advantage compared to building connectors in-house or using enterprise integration platforms. There are no upfront charges or fees to use AppFlow, and customers only pay for the number of flows they run and the volume of data processed. The SAP OData Connector provides direct integration of SAP with AWS services without the need to pay for any additional adapters or licenses. This is all configured in the same simple AppFlow interface.

The Amazon AppFlow SAP OData Connector supports AWS PrivateLink which adds an extra layer of security and privacy. When the data flows between the SAP application and your source or destination Amazon S3 bucket with AWS PrivateLink, the traffic stays on the AWS network rather than using the public internet (see Private Amazon AppFlow flows for further details).

Customers running SAP systems on-premises can also use the Amazon AppFlow SAP OData Connector by using AWS PrivateLink in conjunction with an AWS VPN or AWS Direct Connect based connection, as an alternative to using the public IP address for the SAP OData endpoint.

In our previous blog we showed you how to set up a connection and configure an extract flow, the setup steps for configuring a write back connection are the same and you can also refer to our Amazon AppFlow SAP OData Connector documentation.

Once you have your connection in place, the configuration steps to create the update flow in SAP through Amazon AppFlow SAP OData are as follows:

1) Configure Flow. In this configuration screen, you can select the source Amazon S3 bucket, the target SAP connection with Service Entity Sets, as well as the file format for reading data from Amazon S3 (JSON or CSV)

Configure SAP Write Back flow in Appflow

You can also select a destination for the response handling, which will write response data into a destination S3 bucket. In the error handling section, you can define how the flow will behave if AppFlow is unable to write a record to the destination: you can choose to a) stop the current flow run or b) ignore and continue the flow run.

Configure Error and Response Handling in Appflow

2) Data mapping. In the map data fields step, you can select the method for mapping the source to destination fields: manually, using a CSV file, or passthrough without modification (recommended for hierarchically structured input data).

In the destination record preference section, you can select either a) Insert new data records or b) Update existing records. Note: The Upsert operation is not supported for the SAP OData connector.

Configure Field Mapping for Amazon Appflow SAP Write Back flow

3) Create Flow. In this step you will confirm the flow parameters and create the flow

4) Run the Flow. In this step you will trigger the flow execution
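
If you prefer to script this last step, the flow can also be triggered on demand from the AWS CLI; the flow name below is a placeholder for whatever you named your flow:

# Trigger an on-demand run of the flow, then review its recent executions
aws appflow start-flow --flow-name <your-sap-write-back-flow>
aws appflow describe-flow-execution-records --flow-name <your-sap-write-back-flow>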

Now that we are able to set up flows that both extract from and write back to SAP via S3 using the Amazon AppFlow SAP OData Connector, we can extend customer SAP business processes with AWS native services. This example architecture combines AI/ML services and SAP to provide an automated invoice processing workflow.

Example architecture using Amazon Appflow and AWS native services to extend SAP business processes.

1) Scanned invoices are sent from Vendors and stored to Amazon S3

2) Sales order data is extracted from SAP S/4HANA using the AppFlow SAP OData connector

3) Amazon Textract is used to process incoming scanned invoices and extract the text contained in these using machine learning.

4) AWS Step Functions is used to carry out the workflow functionality

5) The first step is to process and store the extracted invoice data to S3.

6) The following step in the AWS Step Function workflow compares the invoice data extracted from SAP with the scanned invoice data.

7) If a match is found an Amazon DynamoDB table is updated with details of the Sales Order and Invoice

8) If no match is found an exception is generated and an email notification is sent to the operations team to investigate using Amazon SNS

9) The final step in the workflow is to generate update files in JSON format based on the matched records in DynamoDB

10) Amazon Appflow writes the updates back to the SAP S/4 HANA system using the SAP OData connector

All of this is done using native AWS services, without provisioning any servers or procuring costly enterprise licenses. The Amazon AppFlow SAP OData flows are all configured through a few clicks in the AWS console and are fully secured through transport level encryption and PrivateLink connectivity.

This was one simple example of using Amazon Textract; you can use it for various other use cases, such as:

  • Document processing with SAP
  • Automate business processes such as ingesting sensitive tax documents
  • Automating search and index of historical records using Textract

The architecture pattern and approach of using AWS native services to extend your processes aligns with both SAP’s and AWS’s strategy to keep the core SAP system clean. Customers who follow this approach will benefit from less customization and associated overhead in the SAP system of record. They will also be able to take advantage of the pace of innovation in AWS services, and the additional agility these provide will enable faster time to market.

The possibilities for our customers are really exciting, and we have multiple AI/ML use cases that can help enrich your SAP business data and build similar patterns.

The Amazon AppFlow SAP OData Connector is an important building block for this approach, and with the new write back feature you can easily extend SAP business processes with AWS services.

The Amazon AppFlow SAP OData Connector write back functionality provides extended capabilities for customers who want to extend their SAP processes and make use of AWS native services to gain business value from their data. This release continues AWS’s 13-year track record of delivering SAP innovations for our customers and builds upon the same easy and efficient pattern that was introduced with the extract function of the SAP OData Connector. Customers will now be able to read and write data directly between SAP and S3, from where they can leverage higher-level AWS services in Analytics, Machine Learning, or Artificial Intelligence.

Today, we have shown you how easy it is to get up and running with this new feature and what an example extension of your SAP process may look like with AWS native services. Customers running SAP workloads on AWS can start using this service within the AWS Management Console. Customers still running SAP systems on-premises can also integrate their data using Amazon AppFlow and benefit from the multiple AWS services.

To get started, visit the Amazon AppFlow page. To learn why AWS is the platform of choice and innovation for more than 5000 active SAP customers, visit the SAP on AWS page.


Proactively detect and prevent manufacturing defects with SAP on AWS

$
0
0

Feed: AWS for SAP.
Author: Ganesh Suryanarayanan.

This post was written by Ganesh Suryanarayanan, Krishnakumar Ramadoss, Joseph Rosing and Manoj Muthukrishnan.

Introduction
Machine learning increasingly enables a lower total cost of quality by detecting defects faster (in some cases predicting them) and augmenting traditional six sigma business process improvements with scalable, low-cost solutions. However, like many industrial machine learning use cases, the machine learning insights are only valuable to the extent that someone acts on them. As a result, it is key to integrate insights into existing quality management workflows or business processes. This blog details how users can automate this integration by applying computer vision-based defect detection with Amazon Lookout for Vision and sending the defect notifications to SAP Quality Management (QM) in order to improve first pass yield (FPY) and rolled throughput yield (RTY), and increase customer satisfaction. Using computer vision for quality inspection enables a larger inspection sample size, which has a direct correlation with yield and defect reduction.

Amazon Lookout for Vision
Customers such as Dafgard and General Electric have used Amazon Lookout for Vision to automate and scale computer vision-based inspections. The managed service uses computer vision to identify misplaced or missing components in electronic products, surface defects or damage to metal structures, irregularities in production lines, and even minuscule defects in silicon wafers. As a result, customers can eliminate more costly, fixed visioning systems or inconsistent manual inspection points with lower-cost cameras and machine learning to catch defects at the source, improving quality control. More details around benefits and pricing can be found here.

SAP Quality Management (QM)
Within SAP, the QM application supports tasks associated with quality planning, quality inspection, and quality control. In addition, it controls the creation of quality certificates and manages problems with the help of corrective action plans.

SAP QM Building blocks

SAP QM is an integral part of several key business processes including procurement, production, and sales. For example, it can be used to validate quality compliance of raw materials when they’re first delivered by vendors, as well as for logging quality records for materials during production and postproduction. Finally, QM is also used to ensure compliance with customers’ quality specifications before finished goods are shipped. The quality plan involves setting up the master data, inspection plans, and the codes/class/characteristics/info records for inspection and calibration. To record quality inspections during the production process, users create an inspection lot, perform the physical inspection, and record the inspection results with assigned corrective actions, if needed. Sample business flows for Sales, Production, and Procurement are as follows:

Sales

Sales Order QM flow

Production

Production order/Process order flow for SAP QM

Procurement

Purchasing process flow in SAP QM

Integrating Amazon Lookout for Vision and SAP QM
The solution is made of two parts:
Setting up the anomaly detection model using Amazon Lookout for Vision. To get started with Lookout for Vision, we create a project, create a dataset, train a model, and run inference on test images. We finally host the model for use as described in the later section of the solution.

Model training for Lookout for Vision

Setting up the workflow and integration with SAP, where we use the Amazon Lookout for Vision anomaly detection API to identify a defect utilizing the Lookout for Vision model as part of the inspection process, and take appropriate actions in SAP. We will dive deep into this in the following sections.

SAP integration with Lookout for Vision

The solution is composed of the following building blocks:
1. Equipment (Camera): The images from a manufacturing facility camera can be ingested either directly by the camera, which supports compute, or via a client application that collects images from the cameras and uploads them to S3. For simplicity, we are going to manually upload the image to the S3 bucket.
2. S3 bucket: This bucket will be used as a landing zone for images captured by the equipment.
3. Lambda for orchestration and API integration: This sample package contains a Lambda layer to connect with SAP and consume OData services through the HTTP layer. When an image lands in the bucket, the S3 object notification invokes the Lambda function, which orchestrates the process of detecting whether the image is anomalous by making a call to Lookout for Vision; the inference result is then passed back to SAP for creating a defect (see the sketch after this list for what such an anomaly detection call looks like).
4. Amazon Lookout for Vision anomaly model: The Amazon Lookout for Vision model, which was trained using the sample data set, tested, and hosted for use as described in the previous section.
5. Metadata data store for SAP integration: We also use a DynamoDB table for storing the metadata.
6. Secrets Manager: SAP credentials for accessing the OData service via Lambda.
7. SAP Gateway: An SAP gateway is required for exposing the OData APIs.
8. SAP S/4 HANA or SAP ECC.
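
For reference, the kind of anomaly detection call made in building block 3 can also be issued directly from the AWS CLI; the project name, model version, and image file below are placeholders for your own values:

# Send an image to the hosted Lookout for Vision model and return the anomaly prediction as JSON
aws lookoutvision detect-anomalies \
  --project-name <your-lookout-project> \
  --model-version 1 \
  --content-type image/jpeg \
  --body <path-to-image>.jpg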

Solution Deployment
There are three key steps in this deployment process. The first part is to set up the model.

Setting up the Anomaly Detection Model using Amazon Lookout for Vision
You can build the anomaly detection model in Lookout for Vision using either the console or the SDK.

To keep it simple, we are going to use the CloudFormation stack to create a SageMaker notebook instance. The instance also comes with the required sample circuit board images and a Jupyter notebook for creating the Lookout for Vision project, training the model, and hosting the model. To access the CloudFormation template and instructions for deploying the vision model, follow the instructions in the samples repository here.

SAP configuration
Prerequisites
1. SAP S/4 HANA system with embedded gateway where VPC traffic is allowed to access the OData services
2. If your SAP gateway is deployed as a hub deployment, the SAP gateway system should be accessible from the VPC.
3. You should have master and configuration data set up for Notification type and other relevant QM configurations as required.

In the SAP backend system, activate the OData services. The following OData services are delivered as part of the SAP S/4HANA appliance, and for this sample integration we rely on them for creating defects and attaching images to the defect.

1. Activate service API_DEFECT_SRV via transaction /IWFND/MAINT_SE

Service activation

2.  Activate service API_CV_ATTACHMENT_SRV

Service activation

3. Create a material in material master for which inspection needs to be performed. We are using the Material CB-FL-001 that was created in the material master.

Material Master

4. Create a service user in SAP with required authorizations. The authorization should include the start authorizations for the OData service in the back-end system and the business authorizations for creating defects.

AWS deployment and configuration of integration solution

Prerequisites

  • AWS Cloud Development Kit (CDK) lets you define your cloud infrastructure as code in one of five supported programming languages. For ease of deployment, the building blocks of this sample integration solution have been packaged as a CDK app for creating the necessary resources. For AWS CDK installation steps, refer to the AWS CDK documentation.
  • Create a Secrets Manager resource for storing the SAP credentials; these credentials will be used by Lambda to authenticate with SAP when accessing the OData services.
  • An AWS Cloud9 environment as the development environment. See the AWS Cloud9 setup instructions here to create an EC2 environment, with t2.micro as the instance type and Amazon Linux 2 as the underlying OS.

Deployment

The entire integration has been packaged and can be cloned from the samples repository here. The stack deployment can be verified in AWS CloudFormation. A successful deployment displays output like the following:

CloudFormation output

1. Maintain the DynamoDB table created through this stack with the required metadata, as shown below, for creating the defect in SAP.

DynamoDB Item

Here is the sample JSON that can be used for creating the DynamoDB item; update the material and plant as required.

{
 "notiftype": "06",
 "equipment": "CAM-01",
 "DefectClass": "92",
 "DefectCode": "1",
 "DefectCodeCatalog": "9",
 "plant": "1710",
 "DefectCategory": "06",
 "DefectCodeGroup": "QM-E",
 "material": "CB-FL-001"
}
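
If you prefer to load this item from the command line instead of the console, a sketch like the following could be used; the table name is a placeholder, and storing every attribute as a string is an assumption based on the item shown above:

aws dynamodb put-item --table-name <your-metadata-table> --item '{
  "notiftype": {"S": "06"},
  "equipment": {"S": "CAM-01"},
  "DefectClass": {"S": "92"},
  "DefectCode": {"S": "1"},
  "DefectCodeCatalog": {"S": "9"},
  "plant": {"S": "1710"},
  "DefectCategory": {"S": "06"},
  "DefectCodeGroup": {"S": "QM-E"},
  "material": {"S": "CB-FL-001"}
}'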

Testing the Solution

Now that we’ve successfully deployed the solution, let’s run a test!

1. From the console, navigate to the S3 bucket that was created by the stack

2. Navigate to the following folder, where the image of the product captured by the equipment will be ingested as part of the inspection. To simulate an anomaly, we will manually upload a defective image to this folder from the additional circuit board data set that is made available as part of the repository we used for model training.

ObjectID bucket

3. To simulate an anomaly, from the cloned git repo navigate to the folder circuitboard > extra_images, and upload the file extra_images-anomaly_5.jpg to the S3 bucket as shown below

image upload to s3
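
The same upload can also be done from the CLI; the bucket name and ingestion prefix below are placeholders specific to your deployment:

# Upload the sample anomalous image to the ingestion folder that triggers the Lambda function
aws s3 cp circuitboard/extra_images/extra_images-anomaly_5.jpg s3://<your-ingestion-bucket>/<ingestion-prefix>/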

The defective image (extra_images-anomaly_5.jpg) is missing an LED bulb component compared to the normal image of a circuit board

Anomaly normal image

4. The S3 object notification triggers the Lambda function, which orchestrates the process of detecting whether the image is anomalous by making a call to Lookout for Vision; the inference result is then passed back to SAP for creating a defect. You can check the Lambda invocations and logs in the Lambda console as shown below

Lambda Invocation Cloud watch logs

The logs written by Lambda to CloudWatch show that the image was sent to the Lookout for Vision model for anomaly detection, that the inference result was passed back to the SAP OData service to create a defect in SAP, and that the defect number was logged.

5. Validate the defect created in SAP. Launch the Fiori launchpad, search for the app “Manage Defects”, which is delivered by SAP as part of QM, and search for the defect that was created through the Lambda function.

FIORI application for SAP defects

Clean up
Don’t forget to clean up your AWS account to avoid ongoing charges for resources you created in this blog. In order to delete all resources created by this CDK app, run the following command

cdk destroy

In order to delete the Lookout for Vision model and data sets, go to AWS CloudFormation, select your stack, and choose Delete stack.

Conclusion
In summary, the benefits of Amazon Lookout for Vision include the ability to scale computer vision-based quality inspection at a lower total cost in order to reduce inspection variation and improve quality yields. However, the output of machine learning in industrial operations only provides valuable results if action is taken on the machine learning insights. To reduce the burden of change management and ensure action is taken on the Lookout for Vision insights, this blog has demonstrated how to automatically record the Lookout for Vision defects in SAP Quality management. As a result, users can integrate Lookout for Vision into their existing SAP QM driven quality management process to accelerate the value of computer vision-based machine learning. Learn more about Amazon Lookout for Vision by going to the Amazon Lookout for Vision Resources page.

New innovations for SAP on AWS customers

$
0
0

Feed: AWS for SAP.
Author: Steven Jones.

Introduction

Another SAP Sapphire is upon us! After a few years off from an in-person Sapphire event, we are super excited to be back in Orlando this year to meet with customers, partners, and colleagues face to face. We’ll have a full team of SAP technical experts and members of the SAP on AWS leadership team on hand, and definitely encourage you to book time with us—we’d love to learn more about your business plans and help you get the most value out of your SAP investments.

As we get ready for the event, I wanted to take some time to recap some new AWS innovations for SAP customers in 2022. Across industries, nearly every organization is modernizing and transforming their business processes and customer experiences using cloud technology. For so many of our customers this means migrating and modernizing their SAP systems. We continue to invest in supporting this journey, working backwards from customer needs across four key areas that they consistently tell us they’re focused on:

  • Simplifying and safeguarding migrations of SAP systems to AWS
  • Providing a secure, reliable, and performant infrastructure for SAP workloads
  • Streamlining SAP system management, reducing operational overhead, and managing risk with automation
  • Helping drive new data insights from all enterprise data, including SAP

In this blog, I will briefly cover our releases this year as they relate to these four important customer needs, and how you can start taking advantage of them to support your own transformation journey.

Simplifying migrations of SAP systems to AWS

Since customers started using AWS to support their production SAP workloads way back in 2011, we have released a number of tools, programs, and services to simplify the migration process. For example, AWS Application Migration Service automatically converts your source servers from physical, virtual, or cloud infrastructure to run natively on AWS. We also built AWS Launch Wizard—a first-of-its-kind service which allows you to deploy any HANA-based SAP system on AWS in a few hours through infrastructure and software automation, versus weeks or months using traditional approaches.

In addition to tools that help them automate and deploy their SAP systems, customers have also told us they want blueprints that help them plan and execute their migrations end-to-end. To meet this need, we recently launched AWS Migration Hub Orchestrator—with support for HANA-based SAP migrations—a few weeks back at the AWS Summit in San Francisco. AWS Migration Hub Orchestrator is designed to automate and simplify the migration of applications to AWS. Orchestrator removes many of the manual tasks involved in migrating large-scale enterprise applications and managing dependencies between different tools. It gives you a set of predefined and customizable workflow templates, tasks, tools, and automation opportunities that orchestrate complex workflows and interdependent tasks to simplify the process of migrating to AWS. Then, it guides you to deploy the target SAP environment using the aforementioned AWS Launch Wizard, extract application info from the newly deployed stack, and then migrate the application using an SAP and HANA database-specific replication mechanism like HANA System Replication (HSR). For a walkthrough of the process, check out the Migration Hub Orchestrator launch blog.

We’ve also partnered with AWS Partner Magnitude and SAP to support the launch of Magnitude SourceConnect, which is built natively on AWS and available only to AWS customers. It significantly simplifies consolidating disparate ERP systems— including SAP ECC and non-SAP systems, on a unified instance of SAP S/4HANA Central Finance. For additional details on how it can support your S/4HANA on AWS transformation, read the SourceConnect launch blog that we co-authored with SAP.

Providing a secure, reliable, and performant infrastructure for SAP workloads

Once customers land on the AWS platform, they count on our infrastructure to provide a secure, reliable, and stable foundation for their mission-critical business processes. To that end, we also continue to invest to ensure AWS is the world’s best infrastructure platform for SAP workloads.

We’ve added additional regional support for our Amazon EC2 R6i, m6i (the first generally available, SAP-certified cloud instances built on 3rd Gen Intel® Xeon® Scalable processors) and R5b instance types (which offer an industry-leading 260,000 IOPS per instance), giving customers more flexibility to match their SAP workloads with the right combination of compute, memory, and storage throughput.

In March, we announced the general availability of Amazon EC2 X2idn and X2iedn instances, which are also powered by 3rd generation Intel Xeon Scalable processors. These instances deliver up to 50% higher compute price performance and up to 45% more SAPS than the previous generation X1 instances.

In April, we certified Amazon EBS Io2 Block Express storage volumes for SAP workloads, which offer up to 256K IOPS & 4000 MBps of throughput and a maximum volume size of 64 TiB, all with sub-millisecond, low-variance I/O latency. With io2 Block Express volumes, customers get SAN-like performance and five 9’s of durability with a cloud-based block store. This enables customers to run even the most IOPS-intensive, business-critical SAP applications on AWS and benefit from the ability to instantly scale, provision, and pay for just the capacity and performance that they need.

And for customers who choose the RISE with SAP construct, the unique design of our Global Infrastructure makes AWS the only cloud provider that can offer a 0 Recovery Point Objective (RPO) for RISE deployments– helping protect customers from costly data loss.

Reducing operational overhead and managing risk with SAP automation

Another solution customers consistently ask for is tooling and automation capabilities to help manage and operate their SAP systems more effectively. This feedback has led us to release a number of features and services over the years, including our Amazon CloudWatch Application Insights for SAP HANA, AWS Backint Agent, and the aforementioned AWS Launch Wizard.

In January, we announced that you can now add nodes to SAP systems deployed with AWS Launch Wizard from within the Launch Wizard console post-deployment if your performance needs increase. This new functionality allows you to scale the infrastructure supporting SAP applications (S/4HANA, BW/4HANA, and NetWeaver) deployed with Launch Wizard using the same guided, best-practice-aligned deployment process.

In February, we announced Launch Wizard support for Red Hat Enterprise Linux (RHEL) versions 7.7 and 7.9. Additionally, with this launch, we made it possible to bring existing RHEL subscriptions to support new SAP deployments with Launch Wizard. This gives you the flexibility to choose between license-included images from AWS Marketplace, or images from the Red Hat Cloud Access program.

In April we also announced that you can clone previous SAP deployments from Launch Wizard for use in future deployments. This eliminates the need to re-enter every parameter manually for subsequent deployments, allowing you to save time and reduce error by instead focusing on the few that make each deployment unique. For instance, when you deploy your production system, you can clone the parameters from your pre-production system also deployed with AWS Launch Wizard. Launch Wizard will pre-populate those parameters, and all you have to do is change the few that are unique to the production system like the SAPSID, instance numbers, and host name, while keeping common components like the SAP software and infrastructure the same.

Customers continue to tell us that using our automation capabilities for SAP is helping accelerate and safeguard their transformation projects. Storengy, a subsidiary of Engie, recently implemented SAP S/4HANA as part of their green energy transition. Leveraging our SAP automation and AWS Launch Wizard, AWS Backint Agent, and the AWS Well-Architected Framework, they’ve been able to dramatically shorten their project timeline, reduce operating costs by 40%, and strengthen their security posture. Check out the Storengy case study to learn more.

Driving new insights with SAP data

Our customers increasingly want to combine their SAP and non-SAP data in a single data lake and analytics solution to bridge siloed data and drive deeper business insights. Last year, we launched the Amazon AppFlow SAP OData Connector to make it easier for customers to get value out of SAP data with AWS services. We outlined how to get started with AppFlow and SAP and some of the benefits in a previous blog.

Following this launch, customers told us they also want to use the AWS data platform to enhance their SAP business processes by enriching data with higher-level services such as artificial intelligence and machine learning, and then feeding the data back to their SAP applications. In January, we delivered against that request by enabling bi-directional data flows between SAP applications and AWS data lake, analytics, and AI/ML services in just a few clicks. For a full breakdown of how you can set up a bi-directional data flow to extend your SAP business processes on AWS, read this blog.

Join us at SAP Sapphire Orlando 2022

As you can see, we continue to have a relentless focus on driving SAP on AWS capabilities to meet your needs. Many of these industry leading capabilities can also be leveraged through RISE with SAP. We’re excited to be partnering with SAP closely on these efforts. To discuss how these and other innovations can support your transformation goals, we’ll have experts on hand at SAP SAPPHIRE in booth PA410— please stop by and chat with us. As I mentioned above, you can also book time with a member of the team for a more in-depth discussion.

On Monday, May 9th, we are also holding an in-person SAP on AWS Customer Forum. If you are a SAP on AWS customer attending the Sapphire Orlando event, please feel free to register here. I will be joined by Stefan Goebel, Head of Strategic Engineering Partnerships at SAP, Rich Gustafson (SVP & Cloud CTO, SAP NS2), along with many of our SAP on AWS customers to discuss how organizations are accelerating their modernization journeys and responding to the business challenges they face. AWS is also hosting a Sapphire Welcome Reception with Accenture later that night and an SAP on AWS Customer Appreciation Event at the Chocolate Kingdom on Tuesday night.

If you’ll be in person at SAPPHIRE this year, we can’t wait to see you. If not, I hope that you will explore some of the new capabilities that we have released for SAP customers this year. As I mentioned before, more than 90% of what we build is based on customer requests— so keep the feedback coming. I can assure you— there is a lot more coming for SAP on AWS in 2022!

–Steve

Accelerate your finance transformation on AWS with Magnitude SourceConnect

$
0
0

Feed: AWS for SAP.
Author: Ulf Liljensten.

This post was co-authored by Ulf Liljensten, Partner Development Specialist at AWS and Carsten Hilker, Product Manager, SAP S/4HANA Central Finance at SAP SE

Introduction: Finance Transformation Challenges

For many enterprises, sophisticated SAP estates combined with highly complex landscapes of non-SAP systems are a reality. These SAP systems support diverse and complex business processes and may contain custom code developed over many years to support differentiated business processes. When considering the journey to SAP S/4HANA, finance departments face a formidable challenge. On the one hand, S/4HANA promises transformative benefits: a universal journal with a single source of financial truth, in-memory technology with instant access to transactional and analytical data, and the agility that comes with new, streamlined financial processes. On the other hand, moving ERP systems to an S/4HANA environment can be a daunting proposition for the broader organization. Business continuity must be ensured. Costs for the migrations must be contained. On top of all this, the complexities associated with any large-scale ERP migration must be managed.

To address this situation, companies are evaluating the pros and cons of different S/4HANA strategies. A new implementation of S/4HANA lets customers start fresh and gives an opportunity to keep the ERP in a clean state without customizations that are costly to maintain.  New innovations in S/4HANA can be freshly adopted. However, with this approach the existing, proven IT capabilities are retired – all of them, and at the same time. Any specialized functionality in these systems must either be replaced with new S/4HANA functionality or – if this is not possible – re-developed in the new system.  Another, less disruptive, approach is the S/4HANA system conversion. With this approach, the existing system is kept and upgraded to SAP S/4HANA. This eliminates the need to start completely from scratch with a new ERP system. However, for companies with highly complex SAP environments this approach also presents a challenge: a significant part of the functionality in S/4HANA is inherently different from that in the classical SAP ECC, and some legacy SAP ECC extensions that the company may be using may not work with SAP S/4HANA.

To give customers an option with the lowest possible disruption, SAP has developed a third option that allows a gradual transformation path starting with Finance: SAP S/4HANA Central Finance.

Enabling finance transformation with SAP S/4HANA Central Finance

With S/4HANA Central Finance, all financial operations are centralized into a single system based on S/4HANA. Customers can replicate financial documents from one or more SAP or non-SAP ERPs into a consolidated general ledger, giving them the benefits of the new best practices and analytics developed in SAP S/4HANA without disruptive changes to the existing ERPs. In the SAP Central Finance system, a customer can leverage the financial capabilities of SAP S/4HANA, such as contextual information at your fingertips, a single centralized, organization-wide financial view, and real-time reporting capabilities, all without the disruption that comes from retiring existing systems.

Harmonize source ERP systems with Magnitude SourceConnect

While the Central Finance approach is desirable for many customers, adopting it effectively requires data integration from various ERPs. Integrating SAP data is relatively straightforward: SAP Landscape Transformation Replication Server (SLT) allows customers to connect directly to SAP ECC and replicate the data into Central Finance. With non-SAP source systems, the story is different. These systems vary in their data formats, metadata definitions, and the technology used to store data. As a result of these difficulties, the integration of non-SAP source systems into a consolidated general ledger is frequently put off into a distant second phase of the SAP Central Finance implementation, effectively kicking the can down the road and reducing the value of a central, single source of financial truth.

To remedy this, AWS partner Magnitude teamed up with SAP to develop a solution based on predefined, software-based extractors to integrate non-SAP systems with SAP Central Finance. These extractors support 14 different non-SAP source systems out of the box and are constantly maintained to stay up to date with source system changes. They support full data replication as well as capturing updates to write relevant financial data back to the source ERPs, and are tailored for SAP S/4HANA Central Finance. With this solution, customers do not need to engage in time-consuming, error-prone development of custom extractors. As a result, non-SAP source systems can be integrated from the very beginning of the SAP Central Finance deployment, and their full value can be harvested from day one.

SourceConnect brings several benefits:

·       Data Harmonization – Before any source system can be integrated with SAP Central Finance, both sides must speak the same language. SourceConnect automatically maps the master data of the non-SAP system to Central Finance and handles the loading of relevant master data from the SAP and non-SAP systems.

·       Transaction replication – brings non-SAP financial postings into the universal journal at a detailed level and in real time. Each source connector supports up to 23 transaction types, depending on the source system. It is also possible to import source data via flat files, or to customize the predefined extractors if there is a need. Depending on the level of complexity, this customization typically takes about 4-6 weeks.

·       Drill down – Finance users often need to see the operational details behind the S/4HANA Universal Journal, even when these originate from a non-SAP system. Drill down presents these transactions in a standard Fiori interface, removing the need for training of end-users in multiple systems.

·       Syncback – when financial transactions, such as the clearing of invoices, are executed in SAP Central Finance, this needs to be reflected in the source system. Syncback automatically keeps customer and vendor balances in sync between the non-SAP source systems and Central Finance.

·       Reconciliation – with this feature, you can track, audit and compare postings between source ERP systems and Central Finance. Any discrepancies can be analyzed and reported on in a familiar Fiori interface.

The result is lower integration cost, lower overall project cost – and vendor provided maintenance and support beyond the implementation.

Accelerating SAP S/4HANA deployment with AWS Launch Wizard

As customers go through the implementation stages of SAP S/4HANA Central Finance, fast and easy deployment of S/4HANA systems is important to maintain project momentum. Customers need to quickly size, configure, and deploy production-ready SAP environments that follow AWS cloud application best practices. Here, AWS Launch Wizard for SAP can help. Based on the application performance requirements, AWS Launch Wizard identifies the appropriate AWS resources to deploy and run your SAP application and provides an estimated cost of deployment. When the settings are finalized, Launch Wizard provisions and configures the selected resources, including installation of the S/4HANA software, high availability configuration, and more. From this point, the SAP environment can be managed from the standard AWS console.

The full picture – SAP S/4HANA Central Finance and SourceConnect running on AWS.

Get started

AWS, SAP, and Magnitude have partnered to make financial transformation as non-disruptive and risk-free as possible for our customers, with the fastest possible time to value. Our customers are benefitting from this partnership in the form of a faster, cheaper, and better journey to SAP S/4HANA. To build your environment based on the best practices we see from our extensive customer base, reach out to us and start the conversation! The S/4HANA journey may be a major challenge facing your business, but AWS, Magnitude, and SAP are here to help and make Central Finance a fast, safe, and rewarding transformation.

To learn more about how Magnitude can help accelerate your S/4HANA Central Finance transformation, visit Magnitude’s Source Connect page or talk to a member of your AWS, SAP or Magnitude account team.

Monitor and Optimize SAP Fiori User Experience on AWS

$
0
0

Feed: AWS for SAP.
Author: Ferry Mulyadi.

Introduction

SAP Fiori is the user interface component of modern SAP applications such as S/4HANA, which enables business users to execute their business-critical processes within SAP. It is based on SAP's own HTML5 implementation, called SAPUI5, and relies on the HTTPS protocol and modern web browsers as the client. As you operate SAP Fiori, it is important that you have monitoring capability for all aspects of your SAP Fiori application.

On the client side, you may want to answer questions such as "Is the application loading quickly for my users?", "Is the application throwing errors?", or "What parts of my application are my users interacting with most?". Amazon CloudWatch Real User Monitoring (RUM) is an AWS managed real user monitoring service that collects and displays client-side data about your web application performance from actual user sessions in near real time. It can help answer these questions by providing visibility into how real users interact with, and experience, web applications such as SAP Fiori, so you can easily monitor and optimize your SAP Fiori application.

What is Amazon CloudWatch RUM?

Amazon CloudWatch RUM is part of CloudWatch’s digital experience monitoring along with Amazon CloudWatch Synthetics and Amazon CloudWatch Evidently. By providing near real-time data on client-side application behavior, CloudWatch RUM helps application developers and DevOps engineers quickly identify and debug a range of potential issues, thereby reducing mean time to resolve (MTTR) and improving how users experience your SAP Fiori instance.
CloudWatch RUM provides a number of curated dashboards that give SAP Fiori operators the ability to continuously monitor applications and explore issues. Using these dashboards you can monitor: performance metrics like page load time and Core Web Vitals, error metrics like JavaScript and HTTP errors, user flows and drop-offs, and user interactions like button clicks.
When you have identified an issue, CloudWatch RUM helps you identify how many user sessions were impacted, helping you prioritize issues.
When you need to fix an issue, CloudWatch RUM helps you diagnose the issue by surfacing debugging data such as error messages, stack traces, and session records. Additionally, CloudWatch RUM enables you to obtain complete distributed traces, from client-side to backend infrastructure nodes, by integrating with CloudWatch ServiceLens and AWS X-Ray.

Figure 1. Interaction between SAP Fiori, the SAP Fiori user, CloudWatch RUM, and the DevOps engineer

Amazon CloudWatch RUM and SAP Fiori

To use CloudWatch RUM with your SAP Fiori instance, you must instrument your SAP Fiori instance with the open source Amazon CloudWatch RUM web client. As of release 1.5.0, the CloudWatch RUM web client supports route change timing in single page applications (SPA) like SAP Fiori. With this capability, the load time for every Fiori app or tile is captured and reported through CloudWatch RUM.
These are the questions that RUM can help to address for SAP Fiori customers:

  • Performance
    • How quickly are the Fiori apps and launchpad loading? Are they slow?
    • Is any slowness related to a specific user location or country?
    • What can I do to improve the performance of the Fiori Launchpad and Fiori tiles?
  • Troubleshooting
    • Which Fiori apps are generating errors?
    • Do these errors occur on specific browsers or devices?
    • Which errors do I need to prioritize fixing?
  • Behavior
    • What are my users' most frequent workflows in Fiori?
    • Which browsers and devices are used to access Fiori?
    • What are the 10 most frequently used Fiori apps?
    • Where are my Fiori users located?

How do I add CloudWatch RUM to SAP Fiori?

  • Navigate to CloudWatch, then under “Application Monitoring”, select RUM.
  • Select “Add App Monitor”, then specify the details below.

Figure 2. How to add CloudWatch RUM

  • Select “Add app monitor”.
  • Copy the JavaScript snippet that is shown to a temporary notepad.
  • Select Done.

Creating SAP Fiori Launchpad PlugIn

In order to install the RUM web client, you create an SAP Fiori Launchpad plugin within your Fiori Launchpad by following these steps:

  • On your PC, install Visual Studio Code and the Fiori App Generator as described in this blog.
  • In the Visual Studio Code Fiori App Generator, create a new SAPUI5 freestyle app, called "zrumplugin" in this example.
  • Within the "zrumplugin" app, replace the Component.js file with the code that you copied to a temporary notepad in the previous section. It should look similar to the sample code below.

// START - This is the JavaScript snippet provided by RUM.
// Required changes:
// 1. ensure the RUM web client used is the latest version (minimum 1.5.0)
// 2. disableAutoPageView: true, because we manually record every change of the semantic navigation (hash)
// 3. routeChangeTimeout: 5000, assuming each click completes its route change within 5 seconds before timing out
// 4. sessionEventLimit: 0, which is useful for very active users who generate more than 200 route changes in a session
(function (n, i, v, r, s, c, x, z) {
    x = window.AwsRumClient = {
        q: [],
        n: n,
        i: i,
        v: v,
        r: r,
        c: c
    };
    window[n] = function (c, p) {
        x.q.push({
            c: c,
            p: p
        });
    };
    z = document.createElement('script');
    z.async = true;
    z.src = s;
    document.head.insertBefore(z, document.head.getElementsByTagName('script')[0]);
})(
    'cwr',
    '99999999-9999-9999-9999-999999999999',
    '1.0.0',
    'us-east-1',
    'https://client.rum.us-east-1.amazonaws.com/1.5.0/cwr.js', {
        sessionSampleRate: 1,
        guestRoleArn: "arn:aws:iam::999999999999:role/RUM-Monitor-us-east-1-999999999999-3650668140561-Unauth",
        identityPoolId: "us-east-1:99999999-9999-9999-9999-9999999999#",
        endpoint: "https://dataplane.rum.us-east-1.amazonaws.com",
        telemetries: ["performance", "errors", "http"],
        allowCookies: true,
        enableXRay: false,
        disableAutoPageView: true,
        routeChangeTimeout: 5000,
        sessionEventLimit: 0
    }
);
// END - This is the javascript provided by RUM.
// START - This is required to record every navigation within the Fiori (Single Page Application)
sap.ui.define([
    "sap/ui/core/UIComponent",
    "sap/ui/Device",
    "zrumplugin/model/models"
], function (UIComponent, Device, models) {
    "use strict";

    return UIComponent.extend("zrumplugin.Component", {

        metadata: {
            manifest: "json"
        },

        /**
         * The component is initialized by UI5 automatically during the startup of the app and calls the init method once.
         * @public
         * @override
         */
        init: function () {
            // call the base component's init function
            UIComponent.prototype.init.apply(this, arguments);

            // enable routing
            this.getRouter().initialize();

            // set the device model
            this.setModel(models.createDeviceModel(), "device");

            //Called after the plugin is loaded
            cwr('recordPageView',
                this.cleanHash(location.hash)
            );

            //Called when the hash is changed
            $(window).hashchange(function () {
                cwr('recordPageView',
                    this.cleanHash(location.hash)
                );
            }.bind(this));
        },

        cleanHash: function (sHash) {
            //Remove Guids and numbers from the hash to provide clean data
            //TODO:Remove everything between single quotes

            //return sHash.replace(/(({){0,1}[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}(}){0,1})|(d)/g,"");
            if (sHash.indexOf("#") > -1) {
                if (sHash.split("#")[1].indexOf("?") > -1) {
                    if (sHash.split("#")[1].split("?")[0].indexOf("&") > -1) {
                        return sHash.split("#")[1].split("?")[0].split("&")[0];
                    } else {
                        return sHash.split("#")[1].split("?")[0];
                    }
                } else {
                    if (sHash.split("#")[1].indexOf("&") > -1) {
                        return sHash.split("#")[1].split("&")[0];
                    } else {
                        return sHash.split("#")[1];
                    }
                }
            } else {
                return sHash;
            }
        }
    });
});
// END - This is required to record every navigation within the Fiori (Single Page Application)
  • Run “npm run build” to check the SAPUI5 code.
  • Run “npm run deploy” to deploy the SAPUI5 code into the SAP Fiori system.

Creating Fiori Catalog and attaching to PFCG Role

Next, you will need to assign the SAP Fiori Launchpad plugin to your users by following these steps using SAPGUI:

  • Create Fiori Catalog “ZCATALOG_FM” using SAP Transaction /UI2/FLPD_CONF (Fiori Launchpad Designer).
  • Create Role "ZNEXUS_RUM" using SAP Transaction PFCG, then add Fiori Catalog "ZCATALOG_FM".
  • Assign Role “ZNEXUS_RUM” to all users within SAP Transaction PFCG.

Figure 3. Fiori Catalog and PFCG Role

Test the RUM Instrumentation

First, navigate to the SAP Fiori Launchpad (https://..hostname../sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html, replacing ..hostname.. with your own Fiori hostname), and then navigate to a few Fiori apps to test. You can use different browsers and devices, and even simulate different locations with a VPN. Once you have exercised the Fiori apps, navigate back to the CloudWatch console to view the results (CloudWatch – Application Monitoring – RUM).
Figure 4 shows the overall number of page loads with load time, users' locations with performance, the number of sessions, sessions with errors, and errors by device within a one-week interval. It helps you identify whether slow performance is related to a user's location or to a particular device.
Figure 4. RUM Overview

Figure 5 shows each page with its load time and error rate when users navigate to a Fiori app, based on the semantic object and action (for example, Customer-clearOpenItems refers to the Clear Incoming Payments Fiori app). You can find the complete list of Fiori apps with their semantic objects and actions in the Fiori Apps Library.
Figure 5. Pages Performance

Figure 6 shows the Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift metrics. These metrics reflect how users experience the application while it is loading. For example, "How long does it take for content to appear?", "How long does it take to respond to input?", and "To what extent does the UI shift?". You can use this to identify user-impacting performance issues in the Fiori Launchpad. The Positive, Tolerable, and Frustrating thresholds are defined based on Core Web Vitals, as a guide to aim for a better user experience.
Figure 6. Web Vitals

Figure 7 shows page load steps over time, which provides detailed timing data for each step associated with fetching and rendering the Fiori Launchpad and App. Page load steps over time helps you diagnose performance issues by telling you which step in the page load process is taking the longest.
Figure 7. Page loads steps over time

Figure 8 shows errors and session metrics, which you can use to understand and prioritize the most frequent errors encountered by users. In the example below, the HTTP 404 Not Found response status code indicates that the server cannot find the requested resource. This points to broken or dead links in the SAP Fiori Launchpad, which can be fixed by a developer or by SAP.
Figure 8. Errors and Sessions

Figure 9 shows a breakdown of the browsers that were used to access the Fiori instance, including the number of errors that occurred on each browser. This can help you understand in which browsers your apps are failing, so you can focus your debugging effort on those combinations.
Figure 9. Browsers and Devices

Figure 10 shows the performance of Fiori apps by browser. This can help you understand performance issues that are unique to certain browsers, so you can focus your debugging effort on those browsers.
Figure 10. Average page load time by browser and throughput

Figure 11 shows the user journey, describing which Fiori apps users have clicked and navigated through to complete their business processes. This can help identify the top business processes to focus on during integration and regression testing of SAP Fiori apps.
Figure 11. User Journey

Conclusion

In this blog post, we saw how SAP customers can benefit from using CloudWatch RUM to monitor and optimize SAP Fiori Launchpad performance. Using CloudWatch RUM instrumentation, you can monitor the performance of SAP Fiori through the following metrics:

  • The number of page loads with load time, users' locations with performance, the number of sessions, sessions with errors, and errors by device within a certain time interval
  • Page load time and error rate when users navigate to a certain Fiori app
  • The Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift metrics, which reflect how users experience the application while it is loading
  • Detailed timing data for each step associated with fetching and rendering the Fiori Launchpad and apps
  • Errors and session metrics, which you can use to understand and prioritize the most frequent errors encountered by users
  • A breakdown of the browsers used, so you can focus on the errors generated on a certain browser
  • The user journey on the Fiori Launchpad as users click and navigate to complete their business processes

By doing this continuously, you will gain end users' trust in your SAP systems, improve the adoption of business processes, and increase user productivity.

Besides SAP Fiori, Amazon CloudWatch RUM can also be integrated with other web applications such as SAP Enterprise Portal, SAP Cloud for Customers, and so on. You can find out more about SAP on AWS and Amazon CloudWatch Real User Monitoring in the AWS product documentation.

Securing SAP with AWS Network Firewall: Part 1 – Architecture design patterns

$
0
0

Feed: AWS for SAP.
Author: Ferry Mulyadi.

Introduction

Cloud security is job zero at AWS. We have a Shared Responsibility Model in which the customer assumes responsibility for and management of the guest operating system (including updates and security patches), other associated application software, and the configuration of the AWS services they use.

A common question that new customers ask is how they can secure their mission-critical SAP workloads. AWS recently introduced the AWS Well-Architected Framework for SAP (SAP Lens), which describes best practices for implementing security controls based on your requirements. One of these best practices is to ensure that security and auditing are built into the SAP network design.

Aligning with this best practice, AWS added a new option with the launch of AWS Network Firewall. Network Firewall simplifies the deployment of essential network protections for Amazon Virtual Private Clouds (VPCs), through a managed service that can scale automatically with the network traffic, so customers don’t have to worry about deploying and managing any infrastructure.

In this first blog post in the series, we will introduce the value propositions for AWS Network Firewall for SAP workloads on AWS, and share recommended architecture patterns for use when securing SAP on AWS workloads.

Value for SAP on AWS customers

AWS Network Firewall has the following features that are beneficial for SAP customers:

  • High availability and automated scaling – mission critical workloads like SAP can’t afford to fail due to service failures or variations in network traffic. Network Firewall provides an availability Service Level Agreement (SLA), with a monthly uptime percentage of at least 99.99%.
  • Stateful firewall – secures the protocols that SAP systems use.
  • Web filtering – further secures SAP Fiori when it runs in a centralized deployment model.
  • Intrusion prevention – an intrusion prevention system (IPS) provides active traffic flow inspection with real-time network and application layer protections.
  • Alert and flow logs – integrated with AWS monitoring tools such as Amazon CloudWatch, Amazon CloudWatch Logs, AWS Config and AWS CloudTrail to provide traceability and auditability when it comes to SAP System security.
  • Rule management and customization – supports AWS Managed Rules, which are groups of rules based on threat intelligence data, so you can stay up to date on the latest security threats without writing and maintaining your own rules. This benefits customers immediately, because it provides a base layer of security for SAP systems. While this can simplify the customer's responsibilities under the Shared Responsibility Model, it is still important to evaluate whether all of your security requirements are being met. An illustrative rule snippet follows this list.
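
As a purely illustrative sketch of what custom stateful rules can look like, the Suricata-compatible snippet below allows the classic SAP ports for instance number 00 (3200 for SAP GUI/DIAG, 3300 for RFC/Gateway) and 8443 for Fiori HTTPS. The ports, messages, and sid values are assumptions that you would adapt to your own landscape and combine with your firewall policy's default actions.

# Illustrative Suricata-compatible stateful rules for an AWS Network Firewall rule group.
# Ports assume SAP instance number 00 -- adjust to your instance numbers and subnets.
pass tcp $EXTERNAL_NET any -> $HOME_NET 3200 (msg:"Allow SAP GUI (DIAG) to instance 00"; sid:1000001; rev:1;)
pass tcp $EXTERNAL_NET any -> $HOME_NET 3300 (msg:"Allow SAP RFC/Gateway to instance 00"; sid:1000002; rev:1;)
pass tcp $EXTERNAL_NET any -> $HOME_NET 8443 (msg:"Allow HTTPS to SAP Fiori"; sid:1000003; rev:1;)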

Architecture Pattern Overview

The centralized security inspection architecture shown in Figure 1 is a high-level view of how traffic between VPCs and the Internet is inspected by AWS Network Firewall. It forms the basis of the architecture patterns for VPCs containing SAP workloads, which may be customer managed, partner managed, or SAP managed under a RISE with SAP model. This concept also allows the use of a multi-account approach based on the AWS Security Reference Architecture (SRA) and Inspection Deployment Models with AWS Network Firewall. You will need a firewall endpoint in its own VPC and subnet, connected to your Transit Gateway. You then route the traffic that you want to inspect, block, or filter through that endpoint.
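
Routing through the firewall endpoint is simply a route table entry whose target is the VPC endpoint that Network Firewall creates in the inspection subnet. As a minimal sketch using the AWS SDK for JavaScript v3 (the route table ID, endpoint ID, and CIDR are placeholders), the following call adds such a route:

// Minimal sketch: send traffic from a workload route table through a Network Firewall endpoint.
// RouteTableId, VpcEndpointId, and the CIDR are placeholder values.
const { EC2Client, CreateRouteCommand } = require("@aws-sdk/client-ec2");

const ec2 = new EC2Client({ region: "us-east-1" });

async function routeThroughFirewall() {
  await ec2.send(new CreateRouteCommand({
    RouteTableId: "rtb-0123456789abcdef0",   // route table associated with the SAP workload subnet
    DestinationCidrBlock: "0.0.0.0/0",       // or the specific CIDR range you want inspected
    VpcEndpointId: "vpce-0123456789abcdef0", // the firewall endpoint in the inspection subnet
  }));
  console.log("Traffic matching the destination CIDR now traverses the firewall endpoint.");
}

routeThroughFirewall().catch(console.error);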

Figure 1 – Centralized security inspection architecture for AWS Network Firewall.

North South Inspection Flow:

  • NS1 – A user from the Internet accesses the environment through the Inbound VPC.
  • NS2 – The packet is routed to the firewall endpoint.
  • NS3 – SAP receives and answers the query.
  • NS4 – The packet is routed to the firewall endpoint.
  • NS5 – The response is sent through the Outbound VPC.

East West Inspection Flow:

  • EW1 – SAP accesses an S3 bucket for backup or an interface.
  • EW2 – The packet is routed to the firewall endpoint.
  • EW3 – The S3 bucket receives and answers the query.
  • EW4 – The packet is routed to the firewall endpoint.
  • EW5 – The response is received by SAP.

To apply this concept, we describe the AWS Network Firewall architecture in three patterns for SAP, based on the use cases and on how the traffic of the various users and systems flows to and/or from the SAP systems.

  • Internal Access.
  • Internet egress access.
  • Internet ingress access.

We will describe in detail the patterns in the subsequent sections.

A1. Architecture design pattern for internal access

In this scenario, the SAP applications are accessed through the corporate network, where users or applications connect through AWS Direct Connect or AWS Virtual Private Network.

The incoming traffic (from on premises to the workload VPC) and outgoing traffic (from the workload VPC to on premises) are inspected by AWS Network Firewall. Examples of this type of traffic, which flows through secured networks only, are shown in Figure 2. These include:

  • SAP GUI to SAP S/4HANA or other related NetWeaver based solutions.
  • Web Browser access from internal user to SAP Fiori system.

Figure 2 – Architecture design pattern for internal access.

Traffic flow description:

  1. A user accesses SAP S/4HANA from SAP GUI.
  2. Network Firewall inspects the packet based on the route table.
  3. The packet is routed to the workload VPC.
  4. SAP S/4HANA answers the query and responds.

A2. Architecture design pattern for Internet egress access

In this scenario, the SAP applications access the Internet directly, but we only want to support network sessions that originate from our secured networks.
The outgoing traffic (from the workload VPC to the Internet) is inspected by AWS Network Firewall. Examples of this traffic are shown in Figure 3. These include:

  • Regular patch updates, such as YaST online update for SUSE Linux Enterprise Server.
  • Interfaces that require invocation of web services from on-premises SAP workloads such as SAP S/4HANA to external cloud-based solutions such as SAP Ariba, SuccessFactors, Cloud for Customer, Concur, and other SAP or non-SAP services.

Figure 3 – Architecture design pattern for Internet egress access

Traffic flow description:

  1. SAP S/4HANA sends data out to the Internet.
  2. Network Firewall inspects the packet based on the route table.
  3. The packet is routed to the Outbound VPC and on to the Internet.

A3. Architecture design pattern for Internet ingress access

In this scenario, network connections originate from the Internet to our SAP applications in secured networks. The incoming traffic (from the Internet to the workload VPC) and outgoing traffic (from the workload VPC to the Internet) are inspected by AWS Network Firewall and AWS WAF (Web Application Firewall). Examples of this traffic are shown in Figure 4. These include:

  • SAP Support access through SAP Router.
  • SAP Business Technology Platform (BTP) interfaces that go through SAP Cloud Connector.
  • Internet/remote users that access SAP Fiori through web browsers.

Figure 4 – Architecture design pattern for Internet ingress

Traffic flow description:

  1. Users, SAP BTP, SAP Support, or third parties access SAP S/4HANA from the Internet.
  2. Network Firewall inspects the packet based on the route table.
  3. The packet is routed to the workload VPC, then SAP S/4HANA answers the query and responds.

Overall Architecture Design Patterns

The pattern in Figure 5 captures the complete picture of all the architecture patterns above. This architecture also includes possible extensions to another Region through Transit Gateway, should there be a need for further expansion (for example, for multi-Region disaster recovery).

Figure 5 – Overall architecture design patterns

Cost Consideration

For all elements of your workload, it is important to understand the potential cost impact upfront. We recommend you capture metrics on your current network traffic volumes and use these to calculate the total cost of ownership of implementing the architecture patterns described in this blog post.

You can leverage CloudWatch, VPC Flow Logs, and third-party solutions such as Cisco ThousandEyes and SolarWinds to capture these network traffic volume metrics across AWS and/or your on-premises network.

With AWS Network Firewall, you pay an hourly rate for each firewall endpoint. You also pay for the amount of traffic, billed by the gigabyte, processed by your firewall endpoint. Data processing charges apply for each gigabyte processed through the firewall endpoint, regardless of the traffic's source or destination. For more detailed pricing, refer to the Network Firewall pricing page.
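
As a purely illustrative calculation (the rates below are placeholder assumptions, not published prices; always check the pricing pages for current rates in your Region): suppose the endpoint charge were $0.40 per hour and data processing $0.07 per GB. A single firewall endpoint running for a 730-hour month would then cost about $292 (730 × $0.40), and inspecting 5 TB (5,000 GB) of SAP traffic would add about $350 (5,000 × $0.07), for a total of roughly $642 per month before any Transit Gateway charges.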

With AWS Transit Gateway, you are charged per hour for the number of connections that you make to the Transit Gateway, and for the amount of traffic that flows through it. For more detailed pricing, refer to the Transit Gateway pricing page.

Conclusion

We have discussed the architecture patterns when implementing AWS Network Firewall for SAP workloads running on AWS. These patterns cover both internal and external (Internet) access scenarios, and include common use cases for each pattern, along with a description of how the network traffic will flow through the pattern. The patterns are derived from AWS Security Reference Architecture (SRA) prescriptive guidance that leverages AWS Organizations. This architecture can be further enhanced depending on customer requirements such as high availability setup across two Availability Zones, etc.

More information on working with AWS Network Firewall patterns on AWS may be found through the blog posts: Deployment models for AWS Network Firewall, Deployment models for AWS Network Firewall with VPC routing enhancements and Design your firewall deployment for Internet ingress traffic flows.

In the following blog posts in this series, we will dive deeper into using AWS Managed Rules and/or your own rules for Network Firewall with SAP workloads on AWS, as well as the operational aspects of monitoring Network Firewall.

In summary, AWS Network Firewall allows us to secure mission-critical SAP workloads against malicious traffic. It is a managed AWS service that scales automatically with traffic and offers the high availability and high performance needed to support critical business applications such as SAP.

You can find out more about SAP on AWS and AWS Network Firewall from the AWS product documentation.

Integrating SAP Systems with AWS Services using SAP Open Connectors

$
0
0

Feed: AWS for SAP.
Author: Adren D Souza.

Introduction

SAP customers are accelerating innovation and reforming business processes by using AWS services. Customers such as Zalando, Invista, and Bizzy have modernized their SAP landscapes and streamlined operations by integrating SAP with AWS technologies. SAP's RISE with SAP solution provides consumption credits for SAP Business Technology Platform (SAP BTP), which customers can use for integration and extension scenarios. Customers frequently ask how SAP systems can be integrated with AWS services using SAP BTP to cover a wide range of use cases such as analytics, machine learning, video and image recognition, and many more.

This blog post shows how you can integrate SAP systems with AWS services using SAP Open Connectors, which is a component of SAP Integration Suite available on the SAP Business Technology Platform. SAP Open Connectors has prebuilt connectors for Amazon Simple Storage Service (Amazon S3) and Amazon SQS. In addition, you can accelerate integration by creating custom connectors to integrate with other AWS services. This blog is an extension of the AWS Adapter described in the blog Integrating SAP Systems with AWS Services using SAP Business Technology Platform.

Overview

I will show how you can create a custom SAP Open Connector to connect to Amazon Rekognition. Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Amazon Rekognition API, and the service can identify objects, people, text, scenes, and activities. It can detect any inappropriate content as well. Amazon Rekognition is based on the same proven, highly scalable, deep learning technology developed by Amazon’s computer vision scientists to analyze billions of images and videos daily. It requires no machine learning expertise to use. Amazon Rekognition includes a simple, easy-to-use API that can quickly analyze any image or video file that’s stored in Amazon S3. Amazon Rekognition is always learning from new data, and we’re continually adding new labels and facial comparison features to the service.
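
For context, the custom connector built below wraps the Rekognition DetectLabels operation. If you wanted to call the same operation directly, outside of SAP Open Connectors, a minimal AWS SDK for JavaScript v3 sketch would look like the following (the bucket and object key are placeholders):

// Minimal sketch of a direct DetectLabels call -- the same operation the custom connector wraps.
// Bucket name and object key are placeholders.
const { RekognitionClient, DetectLabelsCommand } = require("@aws-sdk/client-rekognition");

const rekognition = new RekognitionClient({ region: "us-east-1" });

async function detectProductLabels() {
  const response = await rekognition.send(new DetectLabelsCommand({
    Image: { S3Object: { Bucket: "your-S3-bucket-name", Name: "your-S3-image-key-name" } },
    MaxLabels: 5,
    MinConfidence: 80,
  }));
  for (const label of response.Labels) {
    console.log(`${label.Name}: ${label.Confidence.toFixed(1)}%`);
  }
}

detectProductLabels().catch(console.error);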

The custom SAP Open Connector that you will create makes API calls to Amazon Rekognition Image to analyze images. The custom connector is then used in an SAP Integration Suite integration flow to identify the product image in an S3 bucket and retrieve products matching the image from the SAP system.

SAP Open Connector architecture with AWS services

Walkthrough

Below are the steps that will be performed in this blog.

  1. Configure access to AWS services
  2. Create custom SAP Open Connector to connect to Amazon Rekognition
  3. Test SAP OData Service
  4. Create credentials in Security Material of the SAP Integration Suite
  5. Create Integration Flow in the SAP Integration Suite
  6. Test Integration Flow with Amazon Rekognition

Prerequisites

For this walkthrough, you should have the following prerequisites:

1. Configure access to AWS services

Create an IAM user in your AWS account with programmatic access. Attach the AmazonRekognitionReadOnlyAccess permission to this user and read-only permission to the S3 bucket where the product image is uploaded. Download the access key ID and secret access key, which will be used later in the SAP Open Connector configuration.
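
As a minimal sketch of the permissions described above (the bucket name is a placeholder, and you can keep using the AmazonRekognitionReadOnlyAccess managed policy instead of the inline Rekognition statement shown here), an identity policy for this user could look like the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDetectLabels",
      "Effect": "Allow",
      "Action": "rekognition:DetectLabels",
      "Resource": "*"
    },
    {
      "Sid": "ReadProductImages",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-S3-bucket-name/*"
    }
  ]
}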

2. Create custom SAP Open Connector to connect to Amazon Rekognition

2.1. In the SAP Integration Suite home page, select the tile Extend Non-SAP Connectivity to open SAP Open Connectors page.

Select Connectors from the left navigation menu, and choose Build New Connector. Provide connector details as shown in the below screenshot. You can upload a connector logo (optional). Choose Save & Next

SAP Open Connector - Create a new custom connector

2.2. In the "Setup" tab, for Base URL, provide the Amazon Rekognition endpoint URL. Refer to Amazon Rekognition endpoints and quotas to get the endpoint information for your AWS Region. The us-east-1 Region is used in this example.

Amazon Rekognition SAP Open Connector set up - Properties

2.3. To add authentication information to the Amazon Rekognition API request, you need to use the Signature Version 4 signing process. In the "Authentication" section, select awsv4 as the authentication type. Enter your AWS Region name (us-east-1 in this example) in "AWS Region Name". Enter rekognition in "AWS Service Name".

Amazon Rekognition SAP Open Connector set up - Authentication

2.4. The headers Content-Type and X-Amz-Target must be included in the request to Amazon Rekognition. This can be achieved by using a PreRequest Hook. In the "Hooks" section, create a PreRequest Hook with the following code, then choose Save.

let vendorHeaders = request_vendor_headers;
vendorHeaders['Content-Type'] = 'application/x-amz-json-1.1';
vendorHeaders['X-Amz-Target'] = 'RekognitionService.DetectLabels';
done({
	'request_vendor_headers': vendorHeaders
});

2.5. Navigate to “Resources” tab. Here you will add a resource which will be used for the request.

Choose “ADD RESOURCES” -> “Blank”. In the pop up, enter detectlabels in “Cloud Connector Resource Name”. Check only the POST radio button and add the resource.

Choose pencil icon (edit) in detectlabels resource. Enter /detectlabels in “Maps to” field. Under “Configuration”, select Execution Type as Function REST and enter Description.

Amazon Rekognition SAP Open Connector set up - Resource properties

You have to add a request model for the resource. The request model provides the structure of the JSON body for the POST request. In the detectlabels resource, under the "Models" section, choose Request Model. Enter detectlabelsPostReq in "Model Display Name".

Enter the following JSON and choose Save

{
  "detectlabelsPostReqImageS3Object": {
    "properties": {
      "Bucket": {
        "type": "string",
        "x-samplevalue": "bucket"
      },
      "Name": {
        "type": "string",
        "x-samplevalue": "input.jpg"
      }
    },
    "title": "S3Object",
    "type": "object"
  },
  "detectlabelsPostReq": {
    "properties": {
      "Image": {
        "type": "detectlabelsPostReqImage"
      },
      "MaxLabels": {
        "format": "int32",
        "type": "integer",
        "x-samplevalue": 10
      },
      "MinConfidence": {
        "format": "int32",
        "type": "integer",
        "x-samplevalue": 75
      }
    },
    "title": "detectlabelsPostReq"
  },
  "detectlabelsPostReqImage": {
    "properties": {
      "S3Object": {
        "type": "detectlabelsPostReqImageS3Object"
      }
    },
    "title": "Image",
    "type": "object"
  }
}

You have to add a response model for the resource. The response model provides the structure of the response. In the detectlabels resource, under the "Models" section, choose Response Model. Enter detectlabelsPostRes in "Model Display Name". Enter the following JSON and choose Save.

{
  "detectlabelsPostResLabelsParents": {
    "properties": {
      "Name": {
        "type": "string",
        "x-samplevalue": "string"
      }
    },
    "title": "Parents"
  },
  "detectlabelsPostResLabelsInstances": {
    "properties": {
      "BoundingBox": {
        "type": "detectlabelsPostResLabelsInstancesBoundingBox"
      },
      "Confidence": {
        "format": "double",
        "type": "number",
        "x-samplevalue": 99.99885559082031
      }
    },
    "title": "Instances"
  },
  "detectlabelsPostResLabelsInstancesBoundingBox": {
    "properties": {
      "Height": {
        "format": "double",
        "type": "number",
        "x-samplevalue": 0.863124430179596
      },
      "Left": {
        "format": "double",
        "type": "number",
        "x-samplevalue": 0.06545531749725342
      },
      "Top": {
        "format": "double",
        "type": "number",
        "x-samplevalue": 0.13158991932868958
      },
      "Width": {
        "format": "double",
        "type": "number",
        "x-samplevalue": 0.8849014043807983
      }
    },
    "title": "BoundingBox",
    "type": "object"
  },
  "detectlabelsPostRes": {
    "properties": {
      "LabelModelVersion": {
        "type": "string",
        "x-samplevalue": "2.0"
      },
      "Labels": {
        "items": {
          "type": "detectlabelsPostResLabels"
        },
        "type": "array"
      },
      "OrientationCorrection": {
        "type": "string",
        "x-samplevalue": "string"
      }
    },
    "title": "detectlabelsPostRes"
  },
  "detectlabelsPostResLabels": {
    "properties": {
      "Confidence": {
        "format": "double",
        "type": "number",
        "x-samplevalue": 99.99885559082031
      },
      "Instances": {
        "items": {
          "type": "detectlabelsPostResLabelsInstances"
        },
        "type": "array"
      },
      "Name": {
        "type": "string",
        "x-samplevalue": "string"
      },
      "Parents": {
        "items": {
          "type": "detectlabelsPostResLabelsParents"
        },
        "type": "array"
      }
    },
    "title": "Labels"
  }
}

Choose Save on the upper right. Navigate to “API Docs” section. The detectlabels resource is visible.

2.6. To authenticate with Amazon Rekognition, you need to create a connector instance. Select the Authenticate Instance option on the left and enter a Name for the instance, along with the AWS access key ID and secret access key of the IAM user created in Step 1. Choose Create Instance.

2.7. Select Test in the API docs tile to navigate to the API Docs. You can test the connector by making a request to Amazon Rekognition.

Select the instance created in the step above and select the /detectlabels resource. Choose Try it out. Enter the following payload in the body section. Replace your-S3-bucket-name and your-S3-image-key-name with your own information.

{
   "Image": { 
      "S3Object": { 
         "Bucket": "your-S3-bucket-name",
         "Name": "your-S3-image-key-name"
      }
   },
   "MaxLabels": 5,
   "MinConfidence": 80
}

MaxLabels is the maximum number of labels to return in the response.

MinConfidence is the minimum confidence that Amazon Rekognition Image must have in the accuracy of the detected label for it to be returned in the response.

The request should resemble the example in the following screenshot. Choose Execute.

Amazon Rekognition SAP Open Connector set up - Executing the instance - Request body

If the request is sent successfully to Amazon Rekognition, you will see a server response code of 200. You will see a response that resembles the example in the following screenshot. The response shows an array of labels detected in the image and the level of confidence by which they were detected.

Amazon Rekognition SAP Open Connector set up - Executing the instance - Response body

2.8. Make a note of the authorization token from the Curl field. The format of the authorization token is as follows.

Authorization: User <value>, Organization <value>, Element <value>

Also make a note of the Request URL field. These values will be used later in the integration flow.

Amazon Rekognition SAP Open Connector set up - Executing the instance - Response

You’ve now created a custom SAP Open Connector which can connect to the Amazon Rekognition service. This connector will be used in the Integration Flow.

3. Test SAP OData Service

I will use the Enterprise Procurement Model (EPM) OData service (EPM_REF_APPS_SHOP_SRV) in this demo. If you want to know more about what this OData service provides, see OData Exploration with EPM. This OData service has an entity set called "Products", which returns the list of products. This OData service and entity set will be used in the integration flow.

For instructions on how to activate the OData service, see Activate Available OData in SAP Gateway. To test OData service, you can use the SAP Gateway Client (Transaction Code /IWFND/GW_CLIENT), with the following Request URI.

/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products?$format=json&$select=Id,Description,SubCategoryId

The following screenshot shows the JSON output of the "EPM_REF_APPS_SHOP_SRV" OData service from the SAP Gateway Client. The HTTP Response section shows the Description, Id, and SubCategoryId of each product.

SAP Gateway Client showing output of OData Service
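
For readers following along without the screenshot, an OData V2 JSON response for this query has roughly the following shape; the product values shown here are illustrative rather than a guaranteed reflection of your EPM data.

{
  "d": {
    "results": [
      {
        "Id": "HT-1000",
        "Description": "Notebook Basic 15",
        "SubCategoryId": "Notebooks"
      },
      {
        "Id": "HT-1010",
        "Description": "Notebook Professional 15",
        "SubCategoryId": "Notebooks"
      }
    ]
  }
}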

4. Create credentials in Security Material of the SAP Integration Suite

The security material is used to store the credentials of the Amazon Rekognition Open Connector and of the SAP system, which will be referenced in the integration flow.

In SAP Integration Suite home page, select Design, Develop, and Operate Integration Scenarios. Select Monitor on the left menu and choose Security Material tile under Manage Security section.

4.1. To create the user credential for the Amazon Rekognition Open Connector, choose Create -> User Credentials. Enter a Name for the security material and select Open Connectors as the Type. Enter the values for User, Organization, and Element from step 2.8. Choose Deploy to deploy the Amazon Rekognition Open Connector credentials.

SAP Integration Suite - Create Security Material for Open Connector

4.2. To create the user credential for the SAP system, choose Create -> User Credentials. Enter a Name for the security material and select User Credentials as the Type. Enter the User and Password of the SAP system that will be used in the integration flow. Choose Deploy to deploy the SAP credentials.

SAP Integration Suite - Create Security Material for SAP System

5. Create Integration Flow in the SAP Integration Suite

In this step, you will create an integration flow that connects to the Amazon Rekognition SAP Open Connector you created in Step 2 and to the OData service of the SAP system from Step 3.

5.1. In SAP Integration Suite home page, select Design, Develop, and Operate Integration Scenarios

Select Design on the left menu and choose Create on the upper right. Enter Name, Technical Name, and Short Description fields. Choose Save on the upper right to create a new integration package.

5.2. Navigate to Artifacts tab and choose Add -> Integration Flow. Enter Name, ID and Description to create an Integration Flow.

SAP Integration Suite - Create Integration Flow

5.3. Select the Integration Flow. You will see the graphical designer where you can define the integration process. Choose Edit on the upper right.

In the graphical editor, choose Sender step. Click the arrow icon on Sender and drag it to Start step. In Adapter Type dialog window, select HTTPS adapter.

In the property sheet below, select Connection tab, and enter /product/details in Address field. A sender can call the Integration Flow through this endpoint address. Uncheck “CSRF Protected” field.

SAP Integration Suite - Design Integration Flow - Sender

5.4. From the palette (the grey bar on the top containing integration flow steps), choose Call -> External Call -> Request Reply and add it upon the arrow in integration process canvas. Request Reply is used to call the custom Amazon Rekognition SAP Open Connector defined in next step and get back a response.

5.5. In this step, I will show how you can connect to the custom Amazon Rekognition SAP Open Connector in the Integration Process.

From the palette, choose Participant -> Receiver. Add the Receiver below Integration Process canvas and name it as “OpenConnectors”. Click the arrow icon on Request Reply 1 and drag it to the Receiver (Open Connectors). Select OpenConnectors in Adapter Type dialog window. In the property sheet, enter Request URL from Step 2.8 in Base URI field of Open Connector.

In Credential Name field, select Security Material that you created in Step 4.1, and select the resource /detectlabels of Amazon Rekognition SAP Open Connector. Change method to POST. Leave all other parameter values as default.

SAP Integration Suite - Design Integration Flow - Open Connector

5.6. In this step, I will add a Groovy script to parse the output of the Amazon Rekognition SAP Open Connector. Groovy is a scripting language for the Java platform with Java-like syntax, supported by the Apache Software Foundation.

From the palette, choose Transformation -> Script -> Groovy Script and add it upon the arrow in Integration Process canvas.  Click on the + sign next to Groovy Script step to create the script. Replace the contents of the file with the following code and choose OK on the upper right.

This code parses the output from the Amazon Rekognition SAP Open Connector and creates a filter query that will be used in the OData query to the SAP system. The filter query is set in a property named oFilter.

import com.sap.gateway.ip.core.customdev.util.Message
import groovy.json.JsonSlurper

def Message processData(Message message) {
    def filterQuery = '';
    def labels;

	def body = message.getBody(java.lang.String) as String
    if (body) {
      def json = new JsonSlurper().parseText(body);
      labels = json.get('Labels');
    }
	
    labels.each {val ->
        if (filterQuery.trim().length() > 0) 
            filterQuery += ' or ';
                
        filterQuery += "substringof('" + val.Name + "', SubCategoryId)";      
    }
	 
    message.setProperty("oFilter", filterQuery);
	return message
}
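
To make the script's behavior concrete, suppose Amazon Rekognition returned the labels Computer and Electronics (hypothetical values for illustration). The script would then set the oFilter property to:

substringof('Computer', SubCategoryId) or substringof('Electronics', SubCategoryId)

which the OData receiver configured in Step 5.7 expands into a query along the lines of /Products?$filter=substringof('Computer', SubCategoryId) or substringof('Electronics', SubCategoryId)&$select=Id,Description,SubCategoryId.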

From the palette, choose Call -> External Call -> Request Reply and add it upon the arrow in integration process canvas. Request Reply is used to call the OData service in SAP system defined in next step and get back a response. At this step, the integration flow will look as below.

SAP Integration Suite - Design Integration Flow - Request Reply for SAP System

5.7. Add Receiver to Integration process, as in step 5.5 and name it as SAP. Click the arrow icon on Request Reply 2 and drag it to the Receiver (SAP). Select OData -> OData V2 in Adapter Type dialog window.

In the property sheet of OData, select the Connection tab and enter the OData URL (https://<sap-host>:<sap-port>/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV) of the SAP system in the Address field. Choose Basic in the Authentication drop-down, and enter the name of the security material that you created in Step 4.2.

SAP Integration Suite - Design Integration Flow - Receiver - SAP system

Select the "Processing" tab and choose the Select button in the Resource Path field to open a new dialog window with details of the SAP OData service. Choose Step 2.

Select the Products OData entity set in Select Entity. Select only the fields Id, Description, and SubCategoryId. These fields will be displayed in the output when the integration flow is executed.

Choose Finish to close the dialog box. In the "Processing" tab of the OData property sheet, enter the following string in the Query Options field. The oFilter property was created in the Groovy script in Step 5.6. Leave all other fields as default.

$filter=${property.oFilter}&$select=Id,Description,SubCategoryId

5.8. To convert the output to JSON format, add XML to JSON converter. From the palette, choose Transformation -> Converter -> XML to JSON Converter and add it upon the arrow in Integration Process canvas.

The following screenshot shows the complete Integration Flow. Choose Save on the upper right and then Deploy to deploy the Integration Flow.

SAP Integration Suite Integration Flow

6. Test Integration Flow with Amazon Rekognition

I will use Postman to test Integration Flow.

To get the URL of the SAP integration flow, navigate to "Monitor" -> "Manage Integration Content" and select the All tile in SAP Integration Suite. Select your integration flow. The URL is shown in the Endpoints tab. Enter this URL in Postman and select the POST method.

To get the username and password, navigate to "Instances and Subscriptions" on the BTP home page and select default_it-rt_integration-flow under "Instances". Choose View Credentials. The clientid is the username and the clientsecret is the password. Enter these credentials in the "Authorization" tab of Postman.

In the Postman "Body" tab, enter the following payload. Replace your-S3-bucket-name and your-S3-image-key-name with your own information. Choose Send.

{
   "Image": { 
      "S3Object": { 
         "Bucket": "your-S3-bucket-name",
         "Name": "your-S3-image-key-name"
      }
   },
   "MaxLabels": 5,
   "MinConfidence": 80
}

Test Integration Flow using Postman - Request Body

If the request is sent successfully to the SAP Integration Suite integration flow, the response pane in Postman shows a status of 200 OK. You will see a response that resembles the example in the following image. The response shows a list of all products in the SAP system that match the product image in the S3 bucket.

Test Integration Flow using Postman - Response

Conclusion

In this blog, I've shown how you can build a custom SAP Open Connector to connect to the Amazon Rekognition service, and how this connector can then be leveraged in an SAP Integration Suite integration flow.

In addition to the prebuilt SAP Open Connectors for Amazon S3 and Amazon SQS, you can build feature-rich custom connectors to other AWS services, such as Amazon Translate, a neural machine translation service for translating text between supported languages, and Amazon Simple Email Service (Amazon SES), which provides an easy, cost-effective way for you to send and receive email using your own email addresses and domains. Customers who are leveraging SAP BTP can connect to AWS services using SAP Open Connectors and simplify integration.

Visit SAP on AWS to learn why more than 5000 active SAP customers are using AWS as their platform of choice and innovation.

Run your most IOPS-intensive SAP workloads on AWS with Amazon EBS io2 Block Express


Feed: AWS for SAP.
Author: Sreenath Middhi.

Introduction

AWS has been supporting SAP workloads in the cloud since 2008, and SAP customers in the cloud since 2011. Over these years, our customers running SAP workloads have witnessed unprecedented growth in the data generated by their enterprises. As a consequence, customers have been looking for low latency and enhanced performance for their databases to provide a better customer experience. To address these challenges, we have been delivering continuous innovations for our customers.

For example, in 2014 we introduced the Amazon Elastic Block Store (Amazon EBS) General Purpose (SSD) volume type, which provided the ability to burst to 3,000 I/O operations per second (IOPS) per volume, independent of volume size. Fast forward to 2020, when we worked with SAP to certify Amazon EBS io2 volumes for SAP. io2 volumes deliver 100x better volume durability and a 10x higher IOPS-to-storage ratio compared to the Amazon EBS io1 volume type. In 2021, we launched io2 Block Express volumes, the next-generation storage server architecture that delivers the first Storage Area Network (SAN) designed for the cloud. io2 Block Express volumes deliver up to 4x higher throughput, IOPS, and capacity than io2 volumes, and are designed to deliver sub-millisecond latency and 99.999% durability.

We certified Amazon EBS io2 Block Express volumes for SAP workloads in March 2022. Customers running SAP workloads on AWS can now leverage io2 Block Express.

Graph depicting AWS innovation and the improvement in EBS performance over the years

Performance Innovations

Performance Improvements with io2 Block Express
io2 Block Express delivers the following performance improvements compared to io2 volumes, at no additional cost:

  • Sub-millisecond average latency
  • Storage capacity up to 64 TiB (65,536 GiB)
  • Provisioned IOPS up to 256,000, with an IOPS:GiB ratio of 1,000:1. Maximum IOPS can be provisioned with volumes 256 GiB in size and larger (1,000 IOPS × 256 GiB = 256,000 IOPS).
  • Volume throughput up to 4,000 MiB/s. Throughput scales proportionally up to 0.256 MiB/s per provisioned IOPS. Maximum throughput can be achieved at 16,000 IOPS or higher.

Because of the higher throughput, IOPS, and capacity delivered by io2 Block Express volumes, customers do not need to stripe multiple volumes together in order to go beyond single-volume performance. This helps customers to reduce the management overhead associated with striping the volumes.

IOPS yield for different throughput levels

io2 Block Express volumes are currently supported with Amazon EC2 R5b, X2idn, and X2iedn instances. R5b instances are powered by the AWS Nitro System and offer up to 60 Gbps of Amazon EBS bandwidth and 260,000 IOPS. Similarly, X2idn and X2iedn instances are next-generation memory-optimized instances delivering 45 percent higher SAP Application Performance Standard (SAPS) performance than comparable X1 instances. X2idn and X2iedn instances support 100 Gbps of network performance with hardware-enabled VPC encryption, and support 80 Gbps of Amazon EBS bandwidth and 260,000 IOPS with Amazon EBS encrypted volumes.

io2 Block Express volumes with Amazon EC2 R5b, X2idn, and X2iedn instances allow customers to meet even the most extreme storage performance demands of SAP HANA systems and other IOPS-intensive database workloads. With io2 Block Express and Amazon EC2 R5b, X2idn, and X2iedn instances, customers can load SAP HANA tables into memory faster after a system start or on demand.

In our internal tests, we were able to load 1.7 TB of SAP HANA data into memory in about 5 minutes with an x2iedn.32xlarge instance and io2 Block Express storage.

Similarly, faster backup and restore times are possible with io2 Block Express. In our internal tests with the AWS Backint agent for SAP HANA, we were able to back up a 1 TB SAP HANA database in about 8 minutes with an X2idn instance and io2 Block Express.

To provision io2 Block Express volumes, create them via the Amazon EC2 console, the AWS Command Line Interface (AWS CLI), or an SDK with the Amazon EC2 API when you create R5b, X2idn, or X2iedn instances. When you choose Provisioned IOPS SSD (io2) for storage while provisioning one of these instances, the volumes are created in Block Express format.
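If you prefer to script the provisioning, the following is a minimal boto3 sketch. The Availability Zone, size, IOPS, and instance ID are example values, and the volume is provisioned in Block Express format when the requested size/IOPS and the attached instance type support it.

# Minimal sketch: create a high-IOPS io2 volume and attach it to a supported
# instance. All identifiers and sizing values are examples.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="io2",
    Size=8192,        # GiB; io2 Block Express supports up to 64 TiB
    Iops=100000,      # io2 Block Express supports up to 256,000 IOPS
    TagSpecifications=[
        {"ResourceType": "volume",
         "Tags": [{"Key": "Name", "Value": "sap-hana-data"}]}
    ],
)
volume_id = volume["VolumeId"]

# Wait until the volume is available, then attach it to an R5b/X2idn/X2iedn instance
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)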

Summary
io2 Block Express is the next generation of Amazon EBS storage server architecture and is now certified for SAP workloads on AWS. io2 Block Express volumes are supported with R5b, X2idn, and X2iedn instances. By leveraging io2 Block Express, you can significantly improve SAP HANA database start times, achieve higher IOPS, and shorten backup and restore times.

To learn more about Amazon EBS io2 Block Express Volumes, visit the Amazon EBS Provisioned IOPS page, EBS Provisioned IOPS documentation, and Amazon EBS io2 Block Express Announcement.


Blue/Green deployments for SAP Commerce applications on Amazon EKS


Feed: AWS for SAP.
Author: Francesco Bersani.

This blog describes the most common issues that customers experience in on-premises SAP Commerce deployments and provides a concrete way to achieve blue/green deployments on Amazon Elastic Kubernetes Service (Amazon EKS) for faster and more secure implementations of SAP Commerce applications.

The deployment procedure is based on the SAP official documentation for performing a rolling update on the cluster.

The SAP Commerce application is deployed with a Helm chart that supports the blue/green deployment strategy.
The deployment is then automated with a simple Continuous Integration and Continuous Delivery/Continuous Deployment (CI/CD) pipeline using AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild.

This blog assumes that the required infrastructure to run SAP Commerce has already been provisioned.
It is expected that the following components have been already deployed and configured:

  • An Amazon Virtual Private Cloud (VPC) with public, private, and isolated subnets has been created.
  • Amazon EKS cluster has been provisioned in the defined Amazon VPC.
  • The AWS Load Balancer Controller has been deployed in the Kubernetes cluster
  • An Amazon Route53 public-hosted zone has been created.
  • The ExternalDNS configured with the created Amazon Route53 hosted zone has been deployed in the Kubernetes cluster.
  • An Amazon RDS for MySQL instance has been provisioned in the Amazon VPC subnets and is reachable from the private subnets.
  • A database schema in the MySQL database has been created for SAP Commerce.

SAP Commerce is a popular e-commerce platform provided by SAP and consists of an enterprise Java web application.

Traditionally, in on-premises environments, SAP Commerce is deployed in a mutable infrastructure of servers or VMs. An SAP Commerce production environment consists of several application server nodes to scale to customer demands.

The automated CI/CD pipeline performs a rolling update at the cluster node level for every new release.
Each node is sequentially removed from the load balancer so that it stops serving customer requests, and is then upgraded to the new release.
After the upgrade, the node is added back to the load balancer and is ready to serve customer requests using the codebase of the new release.

SAP Commerce deployment in on-premises environment

Since the cluster consists of several application nodes, the deployment can take several hours to complete before all the cluster nodes are upgraded to the new version.

Long-running deployments are a significant obstacle for e-commerce business and marketing teams that need to deliver innovative solutions quickly to remain competitive and thrive in the market.

By running SAP Commerce workloads on Amazon EKS, it is possible to implement multiple deployment strategies, including blue/green deployments.

The blue/green deployment technique enables you to release applications by shifting traffic between two identical environments that are running different versions of the application.
Blue/green deployments mitigate common risks associated with deploying software, such as downtime, and provide a straightforward rollback path.

Blue/Green deployment approach of a SAP Commerce application

The deployment procedure

Before a new release (v2) is deployed, only one release (v1) is deployed in the cluster.

Blue/Green deployment step one

At the database level, SAP Commerce type system definitions allow multiple codebases of different releases to coexist on the same database schema by handling their respective schema modifications.

When the new release (v2) is deployed, two Kubernetes jobs are created in sequence, as shown below:

Blue/Green deployment step two

With the blue/green deployment strategy, the new release (v2) is then deployed in the green slot alongside the current version (v1) in the Amazon EKS cluster.

Both versions are then available simultaneously in different slots (blue and green) in the same production environment.
At this moment, end customers continue to browse the current production release (v1) on the blue slot, while the testing team can access the new release (v2) on the green slot to run their test strategies.

Blue/Green deployment step three

When the new release (v2) can be promoted to production, a switch is performed at the load balancer to direct end-customer requests to the green slot, while the old release (v1) remains on the blue slot.
End customers can immediately access the application on the new release (v2) at full scale.

Blue/Green deployment step four
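One way to implement this traffic switch is to patch the label selector of the production Kubernetes Service. The sketch below uses the Kubernetes Python client; the Service name, namespace, and slot label are assumptions about how the Helm chart marks the blue and green slots, and in practice the chart scripts or the pipeline perform this step.

# Minimal sketch: route production traffic from the blue slot to the green slot
# by patching the Service's label selector. Names and labels are assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

patch = {"spec": {"selector": {"app": "sap-commerce", "slot": "green"}}}
v1.patch_namespaced_service(
    name="sap-commerce",        # hypothetical Service name
    namespace="sap-commerce",   # hypothetical namespace
    body=patch,
)
print("Production traffic is now routed to the green slot")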

The old release (v1) is still available in the cluster and can be switched back to production in case of problems with release v2. Otherwise, the blue slot can be decommissioned to free up resources and allow deployments of new releases.

Blue/Green deployment step five

The artifact consists of a Helm chart to handle the application deployment and a CI/CD pipeline definition to automate the deployment process.

The Helm chart

A Helm chart has been implemented to support the blue/green deployment strategy with SAP Commerce on Kubernetes (Amazon EKS).
The Helm chart comes with a set of scripts to install and upgrade the chart within the CI/CD pipeline deployment automation, as sketched below.
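As an illustration, a pipeline step might invoke the chart along the following lines; the release name, chart path, and values are hypothetical, and the chart's own scripts wrap a similar command.

# Minimal sketch: how a pipeline step might deploy a new release into the green
# slot with Helm. Release, chart path, and values are hypothetical.
import subprocess

subprocess.run(
    [
        "helm", "upgrade", "--install",
        "sap-commerce-green",       # hypothetical release name for the green slot
        "./charts/sap-commerce",    # hypothetical chart path
        "--namespace", "sap-commerce",
        "--set", "slot=green",      # hypothetical value selecting the slot
        "--set", "image.tag=v2",    # hypothetical application image tag
        "--set", "typeSystem=v2",   # hypothetical type system name for the release
    ],
    check=True,
)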

The Helm chart

The deployment pipeline

A pipeline has been defined using AWS CodePipeline and AWS CodeBuild to install and upgrade the Helm chart that deploys the SAP Commerce application.

The pipeline is created and started when a release-* branch is created in AWS CodeCommit.
To support a multi-branch strategy with AWS CodePipeline, this solution has been adopted.

The deployment pipeline

The Helm chart supports the creation of a Kubernetes Namespace and a Kubernetes Service Account.
This blog assumes that these resources are created as prerequisites of the Helm chart installation.
Additional logic can be implemented in the pipeline to handle the creation of the Namespace and Service Account when deploying the Helm chart.

When deploying the Helm chart for the first time, it is assumed that the Type System of the first release is DEFAULT. This allows deployment without the need to create a dedicated type system for that release. Subsequent releases are deployed into dedicated Type Systems that are created, with the same name as the release, before the release is deployed.

It is assumed that the Type System of the new release does not already exist in the database, which means that when deploying a new release, a new Type System is always created.
When designing your own pipeline, implement a mechanism that ensures the Type System for the new release is created only when necessary (for example, using an AWS Lambda function).

To set up the SAP Commerce deployment pipeline for blue/green deployments, follow the installation guide available in the GitHub repository.
Make sure to follow the cleanup instructions to remove all the AWS resources and avoid incurring additional costs.

A blue/green deployment strategy can be a game changer for speeding up the deployment of new releases in e-commerce contexts.
I hope this blog gives you some insights to modernize and improve your SAP Commerce deployment pipeline.

Next Steps

This blog describes a simple proof of concept to deploy an SAP Commerce application with Helm and a simple deployment pipeline.
To manage a production SAP Commerce workload, additional steps, including monitoring, observability, security, and performance tuning, are required.

Share SAP OData Services securely through AWS PrivateLink and the Amazon AppFlow SAP Connector


Feed: AWS for SAP.
Author: Krishnakumar Ramadoss.

Introduction

Amazon AppFlow is a fully managed service that enables customers to securely transfer data between Software-as-a-Service (SaaS) and enterprise applications, including SAP, Salesforce, Zendesk, Slack, and ServiceNow, and AWS services, including Amazon S3 and Amazon Redshift, in just a few clicks.

Often, customers first move their SAP workloads onto AWS to reduce costs, improve agility, and strengthen security. However, that is just the first step to fully harnessing the power of SAP on AWS. Customers can then begin to build capabilities to extract value from their SAP data by combining it with non-SAP data in AWS data lakes. This allows you to enrich and augment SAP systems by leveraging native AWS services to optimize manufacturing outcomes, track business performance, accelerate product lifecycle management, and so on.

Based on the feedback we received from our customers, last year we announced a new Amazon AppFlow connector for SAP OData. This helps AWS customers securely transfer their SAP contextual data to AWS and vice versa by connecting to the OData APIs that are exposed through their SAP Gateway. With this enhancement, customers can create secure bi-directional data integration flows between SAP enterprise applications (SAP ECC, S/4HANA, SAP BW, BW/4HANA) and Amazon S3 object storage with just a few clicks.

Amazon AppFlow SAP OData Connector supports AWS PrivateLink, which adds an extra layer of security and privacy. Customers can use AppFlow’s private data transfer option to ensure that data is transferred securely between Amazon AppFlow and SAP. As part of private flows, Amazon AppFlow automatically creates AWS PrivateLink endpoints. The lifecycle of interface endpoints is completely managed by Amazon AppFlow under the hood.

In this blog, we will show how to expose SAP resources in a private and secure manner over AWS PrivateLink for setting up private data flows in Amazon AppFlow using the SAP OData connector. The blog provides architectural guidance and instructions on how to set up AWS PrivateLink for the Amazon AppFlow SAP OData connector.

Architecture

AWS PrivateLink uses Network Load Balancers (NLB) to connect interface endpoints to the VPC endpoint service. The NLB functions at the network transport layer (layer 4 of the OSI model) and can handle millions of requests per second. In the case of AWS PrivateLink, the NLB-fronted service is represented inside the consumer VPC (in this case, the Amazon AppFlow managed VPC) as an endpoint network interface.

With AWS PrivateLink, customers can create an endpoint service by placing their SAP instances behind an NLB, enabling consumers like Amazon AppFlow to create an interface VPC endpoint in an Amazon AppFlow managed VPC that is associated with the endpoint service. As a result, customers can use Amazon AppFlow private flows to securely transfer data.

The following steps explain how to expose the SAP OData service in the customer's AWS account for consumption in Amazon AppFlow private data flows.

  • The SAP Gateway is a component of NetWeaver Application Server ABAP 7.40 running on an Amazon Elastic Compute Cloud (EC2) instance optimized for SAP workloads.
  • The instance is placed within a virtual private cloud (VPC) consisting of a private subnet per Availability Zone (AZ).
  • An internal Network Load Balancer (NLB) is provisioned in front of the SAP application instance, and an endpoint service is connected to the NLB.
  • For establishing a TLS connection, a TLS listener is created on the Network Load Balancer, and an SSL certificate is obtained from AWS Certificate Manager for the registered domain.
  • A private DNS name is enabled for the endpoint service by verifying the domain using a public DNS provider, in this case Amazon Route 53.
  • Create an Amazon AppFlow data flow using the SAP OData connector to extract data from SAP by creating a private connection profile with the endpoint service details.
  • When a private connection profile is created, Amazon AppFlow creates VPC interface endpoints in the Amazon AppFlow managed VPC in the background, establishing connectivity to the SAP OData services over a private link for secure data transfer during flow execution.

Configurations that are required for setting up private data flows using the SAP OData connector:

Prerequisites:

  • As part of the prerequisites, you need an internal-facing NLB configured with your SAP system as the backend target group, mapped to more than 50% of the subnets (in separate AZs) in a given region. This provides lower latency and better fault tolerance.
  • VPC Endpoint services are available in the AWS region in which they are created and can be accessed from the same region. Amazon AppFlow flows need to be created in the same region where endpoint services are made available.
  • A private DNS name must be enabled for the endpoint service. For the endpoint service to verify domain ownership, you must own the domain and manage it with a public DNS provider like Amazon Route 53.
  • The SAP instance must have a certificate installed for end-to-end TLS communication. You can use Amazon private CA for obtaining server certificates and installing them on the SAP system via transaction code STRUST. If you want to terminate the SSL at the load balancer, then you can skip this step and just forward the traffic to the target group on a port where SAP does not require a certificate for SSL communication.

Building a VPC endpoint service for exposing SAP Gateway over PrivateLink involves the following steps:

Step 1:  Create VPC and Private Subnets

Start by determining the network you will need to serve the SAP Gateway resources. Keep in mind that you will need to serve the application out of more than 50% of the AZs within any region you choose. Amazon AppFlow expects to consume your service in multiple AZs because, per AWS recommendations, you should architect your applications to span multiple AZs for fault tolerance.

In this example, the SAP Gateway instance is placed in a private subnet in one of the AZs in the us-east-1 region. To serve the SAP Gateway resources in that region over a private link, create subnets in more than 50% of the AZs in that region and provision the Network Load Balancer across those subnets for low latency and fault tolerance.

Subnets

These subnets are private, and the route tables are configured to allow only the VPC traffic.

Step 2: Request a certificate for your domain using ACM

Request a certificate for your domain name using AWS Certificate Manager (ACM). In this example, we extended an existing domain by creating a subdomain in an Amazon Route 53 private hosted zone and obtained a certificate from ACM as shown below. ACM's domain verification process requires a domain ownership record in a public DNS provider.

Domain

Note: To prevent certificate mismatch issues, it’s best practice to provision a wildcard certificate for your domain. See Wildcard Names under ACM Certificate Characteristics for more information.


To verify domain ownership, add the CNAME records provided by ACM to the domain you own in a public DNS provider like Amazon Route 53.

DNS verification records in the hosted zone

You can also import your own public certificate into ACM if you have obtained the certificate for your domain from an external Trusted Certificate Authority.

Step 3: Create a target group for the SAP instance, an NLB, and a TLS listener

  • Create a target group with the protocol TLS and port 443, register the SAP Gateway instance on the private subnet as the target, and forward traffic to the instance on TLS port 44300. In this example, we have configured the SAP instance to listen to secure HTTPS traffic on port 44300.
  • Create an internal-facing Network Load Balancer and make it highly available in more than 50% of the AZs in the region where the SAP resources will be served, for low latency and fault tolerance. Enable cross-zone load balancing on the NLB as well.

  • Create a TLS listener for your network load balancer and forward the traffic to the target group that was created earlier.


  • During configuration, choose ACM as the source of the default SSL certificate, and then select the certificate that you requested in Step 2.

After the NLB is provisioned and active, create a CNAME record in DNS (Route 53 in our case) for a friendly URL (used in Amazon AppFlow as the application host URL, e.g. privatelink.beyond.sap.aws.dev) pointing to the NLB internal DNS name (<name>.elb.<region>.amazonaws.com).
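If you script this step, a minimal boto3 sketch is shown below; the hosted zone ID, record name, and NLB DNS name are placeholders.

# Minimal sketch: create the friendly CNAME record pointing to the internal
# NLB DNS name. Hosted zone ID and names are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "Friendly name for the SAP Gateway NLB",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "privatelink.beyond.sap.aws.dev",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": "my-sap-nlb-0123456789.elb.us-east-1.amazonaws.com"}
                ],
            },
        }],
    },
)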

Step 4: Create a VPC endpoint service and connect with NLB

Create a VPC endpoint service by selecting the Network Load Balancer that was created in the earlier step and, in the additional settings, associate the private DNS name hosted in the private hosted zone.


Clear the "Acceptance required" check box. This allows Amazon AppFlow to connect to your service without requiring acceptance from your side.
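The same configuration can be scripted. A minimal boto3 sketch is shown below, where the NLB ARN and the private DNS name are placeholders for the resources created in the earlier steps.

# Minimal sketch: create the VPC endpoint service fronted by the NLB, with
# acceptance disabled and a private DNS name. ARN and domain are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-sap-nlb/0123456789abcdef"
    ],
    AcceptanceRequired=False,                       # let Amazon AppFlow connect without manual acceptance
    PrivateDnsName="privatelink.beyond.sap.aws.dev",
)
print(service["ServiceConfiguration"]["ServiceName"])  # e.g. com.amazonaws.vpce.us-east-1.vpce-svc-...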


You must perform a domain ownership verification check for each VPC endpoint service with a private DNS name, and you must manage the domain with a public DNS provider. To perform domain verification, create a DNS record of type TXT with the verification name and value through your DNS provider. You can verify the parent domain of a subdomain; for example, in this case you can verify beyond.sap.aws.dev instead of privatelink.beyond.sap.aws.dev.


Once the domain is verified, you will notice the domain verification status changes from “pending verification” to “verified.”


Step 5: Allow principals to access VPC Endpoint Services.

To restrict the VPC endpoint service to a specific consumer, you must maintain an allow list of principals on the endpoint service. In this case, we allowed Amazon AppFlow connection requests by adding the principal appflow.amazonaws.com to the endpoint service.
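This permission can also be added programmatically; a minimal sketch, assuming the service ID created in Step 4:

# Minimal sketch: allow Amazon AppFlow to connect to the endpoint service.
# The service ID is a placeholder for the one created in Step 4.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_vpc_endpoint_service_permissions(
    ServiceId="vpce-svc-0123456789abcdef0",
    AddAllowedPrincipals=["appflow.amazonaws.com"],
)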


This concludes the setup of the endpoint service. Now let’s go and create a private connection in Amazon AppFlow using the endpoint service name.

To create a secure, private connection in Amazon AppFlow

The required inputs to create a new Amazon AppFlow SAP OData private connection are as shown below.

Enable PrivateLink connectivity and specify the AWS PrivateLink endpoint service name that was created in the earlier step.
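For teams that automate connection setup, the same private connection can be created through the Amazon AppFlow API. The sketch below uses boto3; the host URL, service path, client number, endpoint service name, and credentials are placeholders, and the console steps described above achieve the same result.

# Minimal sketch: create a private SAP OData connection with boto3. All values
# are placeholders.
import boto3

appflow = boto3.client("appflow", region_name="us-east-1")

appflow.create_connector_profile(
    connectorProfileName="sap-odata-private",
    connectorType="SAPOData",
    connectionMode="Private",                      # routes traffic over AWS PrivateLink
    connectorProfileConfig={
        "connectorProfileProperties": {
            "SAPOData": {
                "applicationHostUrl": "https://privatelink.beyond.sap.aws.dev",
                "applicationServicePath": "/sap/opu/odata/iwfnd/catalogservice;v=2",
                "portNumber": 443,
                "clientNumber": "100",
                "privateLinkServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
            }
        },
        "connectorProfileCredentials": {
            "SAPOData": {
                "basicAuthCredentials": {
                    "username": "<sap-user>",
                    "password": "<sap-password>",
                }
            }
        },
    },
)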

Note: When using OAuth 2.0 with a private connection, the authorization URL must be reachable from the network where the connection is being set up. This is because the OAuth flow involves browser interaction with the SAP login page, which cannot happen over AWS PrivateLink.

Create a private flow in Amazon AppFlow.

The configuration steps to create a new Amazon AppFlow SAP OData flow are as follows.


  • Configure Flow and connect to the source by picking the SAP private connection.
  • Discover SAP OData Services
  • Choose an SAP Service Entity
  • Define the flow trigger (on-demand or scheduled)
  • Map Fields, Define Validations, and Set Filters
  • Run Flow

You can find further details on how to create a flow in the Amazon AppFlow SAP OData Connector documentation.

Conclusion

In this post, we showed you how to use AWS PrivateLink to expose SAP NetWeaver Gateway in your AWS account and establish private connectivity between Amazon AppFlow and SAP OData resources in your AWS account for secure data transfer. Customers running SAP systems on premises can also use the Amazon AppFlow SAP OData connector by configuring AWS PrivateLink over AWS Virtual Private Network or AWS Direct Connect connections, as an alternative to using the public IP address of the SAP OData endpoint.

To get started, visit the Amazon AppFlow page. To learn why AWS is the platform of choice and innovation for more than 5000 active SAP customers, visit the SAP on AWS page.

SAP ODP based Change Data Capture with Amazon AppFlow SAP OData Connector


Feed: AWS for SAP.
Author: Krishnakumar Ramadoss.

Authors: Krishnakumar Ramadoss, Rozal Singh, Manoj Muthukrishnan, Ram Borhade, Rajendra  Narikimelli, Damian Gonazalez, Ganesh Suryanarayanan

Introduction

5,000+  customers are running their SAP workloads on AWS. Besides saving costs, customers are reaping significant benefits from the depth and breadth of AWS services. Tens of thousands of data lakes are already deployed on AWS. Customers are benefiting from storing their data in Amazon S3 and analyzing that data with the broadest set of analytics and machine learning services to increase their pace of innovation. AWS offers the broadest and most complete portfolio of native ingest and data transfer services, plus more partner ecosystem integrations with S3.

Since we launched the Amazon AppFlow SAP OData connector in 2021, the most common request from customers was to provide them with a capability to extract data in increments using the built-in Change Data Capture (CDC) capabilities offered by the SAP Operational Data Provisioning (ODP) framework.

Based on customer feedback, today we are announcing the launch of a new Amazon AppFlow feature that supports data transfers from SAP applications to AWS services using the SAP ODP framework in just a few clicks and comes with built-in Change Data Capture capabilities. With this launch, customers can use the Amazon AppFlow SAP OData connector to seamlessly perform full and incremental data transfers, including delta transfers via the SAP Operational Delta Queue (ODQ), from SAP ERP/BW applications like SAP ECC, SAP BW, S/4HANA, and BW/4HANA.

Amazon AppFlow

Amazon AppFlow is a fully managed data integration service that helps customers build data flows to securely transfer data between AWS services, SaaS applications, and SAP ERP applications. These data flows help accelerate the creation of data marts and data lakes on AWS for analytical purposes or for combining the data from various source systems.

In this blog, we will walk through how you can set up an Amazon AppFlow data flow to extract data using the newly released feature of the SAP OData connector to connect to ODP providers that are exposed as OData services in your source SAP system.

The high-level architecture of ODP-based extraction with Amazon AppFlow data flows


With ODP-based data transfers, business context is preserved. Retaining this business context from SAP data sources reduces business logic–mapping efforts to integrate data with business objects from other SAP and non-SAP data sources.

The ODP framework works on “provider” and “subscriber” models to enable data transfers between SAP systems and SAP to non-SAP data targets. Amazon AppFlow uses this ODP framework to support full-data extraction and change data capture through the Operational Delta Queues (ODQ) mechanism.

ODP Provider – The source object in the SAP system that provides the data is called the ODP provider. The framework supports a variety of ODP providers.

ODQ (Operational Delta Queue) – In the case of full or delta extractions, the data from the source system is written as data packages to an ODQ by the ODP provider using an update process.

ODP Consumers/ODQ Subscribers – The target applications that retrieve the data from the delta queue and continue processing it are referred to as "ODQ subscribers" or, more generally, "ODP consumers". In this case, Amazon AppFlow plays the role of a consumer or subscriber.

ODP providers, in turn, can act as a data source for OData services, enabling REST-based integrations with external consumers like Amazon AppFlow. The ODP-Based Data Extraction via OData document details this approach on how customers can generate a service for consumers to extract ODP data via OData.

In Amazon AppFlow data flows, customers can use the SAP OData connector to connect to ODP providers that are exposed as OData services. The connector supports full extraction, which would allow customers to extract data in micro-batches from the ODQ.

The connector also supports incremental data transfers using the built-in CDC capabilities of the ODP framework, allowing customers to retrieve only the data that changed at the source, using a delta token together with the change operation, e.g., Inserted/Deleted/Updated. This built-in CDC support lets customers pull data from supported ODP source providers in micro-batches using the intrinsic delta tokens provided by the ODP framework, making the data transfers more efficient.

Key Benefits of using ODP-based data extraction with Amazon AppFlow

  • Amazon AppFlow is a fully managed service that allows customers to build data flows in just a few clicks. It’s a low-code, no-code service.
  • Since the data extraction works at the SAP application layer, the business context of the data is retained.
  • Seamless integration into the well-established SAP ODP/OData framework to minimize the ramp-up or setup efforts.
  • Amazon AppFlow APIs can be used to integrate with other applications seamlessly.

Prerequisites for creating an ODP-based OData flow

  • The provider of your data source must be ODP enabled.
  • To generate an OData service based on ODP data sources, SAP Gateway Foundation must be installed locally in your ERP/BW stack or in a hub configuration. For your ERP/BW applications such as SAP ECC, SAP BW, S/4 HANA, and BW/4 HANA, the SAP NetWeaver AS ABAP stack must be at 7.50 SP00. The NetWeaver AS ABAP of the hub system (SAP Gateway) must be 7.50 SP00 or higher for remote hub setup. When the SAP ERP or SAP BW applications run on a lower version of NetWeaver that is less than 7.50 SP00, the hub scenario is recommended.
  • You must create an OData service from the SAP ODP source and register it for consumption in your service gateway. Refer to the SAP documentation for more details on generating and registering an OData service for an ODP provider. Additionally, you can refer to our workshop where we showcased the setup of OData services based on ODP data sources
  • The Amazon AppFlow SAP OData connector supports only secure connections; thus, you must enable a secure setup for connecting over HTTPS. Note: Amazon AppFlow supports private connectivity using an AWS PrivateLink for secure data transfers. Please see the Amazon AppFlow with AWS PrivateLink blog here for more information on how to set up a private flow for SAP OData connector.
  • You must implement SAP Note 1931427 if you are running your ERP/BW application on a lower version of NetWeaver, such as 7.40 SP04 or lower.
  • It is recommended to implement the following SAP Notes in your local gateway or in the hub system: 2854759, 2878969, 3062232, 3023446, 2888122.

Creating an Amazon AppFlow data flow utilizing the SAP ODP based OData service

Use the identified ODP provider and its generated OData service; the service is registered either locally or in an SAP Gateway hub system.

Create an SAP OData connection from the Amazon AppFlow initial screen. Expand the sidebar and select Connections. Then, choose SAP OData from the connectors drop-down. Select the "Create Connection" tab and provide the required information. Refer to the setup instructions for more details about the input values.

Then, select Flows from the sidebar and choose "Create flow".

  • Configure Flow and connect to the source by picking the SAP connection
  • Discover SAP OData Services, including the ODP based OData services
  • Select SAP Service Entity
  • Define Flow trigger (On Demand or On Schedule)
  • Map Fields, Define Validations and Set Filters
  • Run Flow

You can find further details on how to create a flow in Amazon AppFlow SAP OData Connector documentation.
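If you prefer the API over the console, an existing flow can also be triggered and monitored programmatically; a minimal boto3 sketch, assuming a hypothetical flow name:

# Minimal sketch: trigger an on-demand run of an existing flow and inspect its
# execution records. The flow name is a placeholder.
import boto3

appflow = boto3.client("appflow", region_name="us-east-1")

appflow.start_flow(flowName="sap-odp-material-master")

runs = appflow.describe_flow_execution_records(
    flowName="sap-odp-material-master",
    maxResults=5,
)
for record in runs["flowExecutions"]:
    print(record["executionId"], record["executionStatus"])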

Scheduled flow vs. on-demand flow in the context of SAP ODP-based OData services

Scheduled Flow

By choosing this option, Amazon AppFlow initiates a full data transfer, and subsequent runs at the defined frequency perform incremental transfers using the SAP ODQ mechanism. For the incremental flows, Amazon AppFlow makes use of the delta tokens provided by the ODP framework for the subsequent delta data transfers.

Note: The initial execution will also reset this entity’s previous delta queue subscriptions. Make sure that no active scheduled flows exist for the same ODP data source.

Amazon AppFlow automatically detects OData services that expose ODP-based data sources and creates a subscription (Initial Data with Delta Init) to ODQ during the initial flow run. You can also monitor the ODQ subscriptions in the SAP provider system via transaction ODQMON after a successful flow execution.

On-demand flow:

Running an on-demand flow using an ODP-enabled OData service will not create a delta queue subscription in ODQ of the SAP provider source system; instead, the data is retrieved in full.

After a successful on-demand flow run, check transaction code ODQMON in the SAP provider source system. You will not see any active subscriptions for on-demand flows.

Things to consider

The ODQ subscription termination does not automatically happen when you delete the SAP ODP-based OData flow in Amazon AppFlow. This leftover subscription has to be handled in the SAP provider system. Use the transaction code ODQMON to manage the subscriptions and end them if they are no longer required. You can schedule re-organization jobs in the SAP provider system to clean up the queue.

Summary

The Amazon AppFlow SAP OData Connector is a serverless managed service that extracts SAP data directly into Amazon S3 via OData. This capability paves the way for integrating SAP data into cloud-native AWS services. This launch further enhances the Amazon AppFlow SAP OData connector to use the SAP ODP framework. This feature simplifies extraction of data from multiple SAP ERP/BW data sources, including Transactional Data, Master Data, and Presentation Data, with built-in Change Data Capture capabilities.

To get started, visit the Amazon AppFlow page. To learn why AWS is the platform of choice and innovation for more than 5000 active SAP customers, visit the SAP on AWS page.

Architecture Options for Extracting SAP Data with AWS Services


Feed: AWS for SAP.
Author: Ferry Mulyadi.

Introduction

Gartner found that nearly 97% of data sits unused by organisations, and more than 87% of organisations are classified as having low maturity levels in terms of business intelligence and analytics capability. This capability deficit could severely restrict a company's growth and introduce risk to its existence if it cannot reinvent itself. Every company must move quickly to assess its data analytics capabilities and chart a course for transformation to a data-driven enterprise. It is a crucial part of becoming more responsive to customers and to market opportunities, and more agile given the rapidly changing nature of technology and the marketplace.

Here are a few AWS customers that benefited from being data-driven enterprises:

  • Moderna is a biotechnology company pioneering a new class of messenger RNA (mRNA) medicines. Leveraging its mRNA platform and manufacturing facility with the AWS-powered research engine, Moderna delivered the first clinical batch of its vaccine candidate (mRNA-1273) against COVID-19 to the National Institute of Health (NIH) for the Phase 1 trial 42 days after the initial sequencing of the virus. By building and scaling its operations on AWS which includes SAP S/4HANA, Amazon Redshift and Amazon Simple Storage Service (S3), Moderna is able to quickly design research experiments and uncover new insights, automate its laboratory and manufacturing processes to enhance its drug discovery pipeline, and more easily comply with applicable laws and regulations during production and testing of vaccine and therapeutic candidates.
  • Zalando (Europe’s largest online fashion platform) started migrating its SAP systems to AWS to increase agility, simplify IT maintenance, and build a future-ready data architecture as part of its digital transformation. With a hybrid data lake on AWS that is tightly integrated with one of the world’s largest SAP S/4HANA systems, Zalando has reduced its cost of insight by 30% while improving customer satisfaction. Zalando built its data lake with services like Amazon Redshift, AWS Glue, Amazon S3.

The first step to getting more out of your SAP data is getting it to your AWS data lake. This enables you to uncover new opportunities and solve business challenges. In this blog, we will discuss Architecture options to extract SAP Data to AWS based on your SAP ERP or S/4HANA versions.

We will focus on AWS services such as Amazon AppFlow, AWS Glue, AWS Lambda, and Amazon API Gateway, as well as SAP solutions such as SAP Data Services and SAP Data Intelligence, in order to provide baseline scenarios.

There are a number of AWS Partner solutions that can help with extraction, processing, and analytics of SAP data, such as Qlik, Bryteflow, HVR, Linke, Boomi, and others. They will not be discussed in this blog, but you can visit AWS Marketplace or contact your AWS contact point to find out more. If you need assistance implementing these AWS services, you can contact AWS Professional Services or the AWS Partners listed in the AWS Partner Discovery Portal.

Solution Considerations

The key considerations when extracting data from SAP systems fall into two major categories: 1/ commercial and 2/ technical.

Commercial Considerations

Buy vs Build

To integrate AWS with SAP, developers can implement a minimal amount of custom code. While running custom code can be cost effective at first, it typically requires ongoing maintenance. On the other hand, there are a number of SAP solutions (such as SAP Data Services), AWS managed services (such as Amazon AppFlow), and other commercial off-the-shelf (COTS) solutions that are highly specialized and come with a large set of pre-built capabilities for ease of use. It is important to consider the full total cost of ownership (TCO).

Middleware Software vs. Cloud Native

Leveraging middleware software for integration between SAP and AWS means additional administrative effort (installation, patching, and upgrades) as well as runtime costs (software licenses). To address this, AWS introduced a managed service that eliminates the administrative effort and runtime costs of integrating SAP and AWS: Amazon AppFlow provides a no-code, serverless option to extract SAP data, as well as write this data back to SAP.

SAP License Impact

When extracting data from SAP and writing back data to SAP, you will need to consider your SAP Licensing requirements.
Note : Before implementing data extraction or write back to/from SAP systems please verify your licensing agreement.

Price vs Value

When you buy off-the-shelf software such as SAP Data Services, you can procure a perpetual license, which allows you to use the software for an indefinite period of time by paying a single fee. With a perpetual license, it can be difficult to determine cost versus business value for a given initiative. When you use cloud-native services such as AppFlow, you pay per use based on the number of flows and the data volume that are required. This pay-per-use, or utility, model enables you to understand the true cost versus achieved business value of a given initiative.

Technical Considerations

Pull vs. Push the Data

At a high level, there are two types of mechanisms to extract SAP data:

  • To pull data from SAP and then push it to AWS services like Amazon S3. This method is usually executed as a batch and requires the SAP system to be accessible by the extraction tools. Some customers may have security concerns around this approach, and therefore it may be less preferred for them.
  • To push data from SAP to AWS Services. This method is good for near real time extraction using available methods such as SAP Intermediate Documents (IDOCs).

Delta Handling

For relatively small tables, for example master data tables, repetitive full loads may be acceptable when extracting SAP data. For large tables, for example transactional data tables, transferring deltas may be preferred for performance and cost reasons. With delta extraction, only the data that has changed since the last extraction is identified. Common SAP delta mechanisms are Application Link Enabling (ALE) change pointers, Operational Data Provisioning (ODP) delta queues, Change Data Capture (CDC), and timestamp fields queried on the last changed date and time.
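As an illustration of the timestamp-based approach, the sketch below builds an OData query that pulls only the records changed since the last run; the service URL, entity set, and the LastChangeDateTime property are hypothetical and depend on the OData service you expose.

# Minimal sketch of timestamp-based delta extraction via OData. The entity set
# and the "LastChangeDateTime" property are hypothetical; real services expose
# different change-timestamp fields.
from datetime import datetime, timedelta

import requests

BASE_URL = "https://<sap-gateway-host>:443/sap/opu/odata/sap/<SERVICE>"  # placeholder
ENTITY_SET = "Materials"                                                 # placeholder

last_run = datetime.utcnow() - timedelta(hours=1)  # normally persisted between runs
odata_filter = f"LastChangeDateTime gt datetime'{last_run.strftime('%Y-%m-%dT%H:%M:%S')}'"

response = requests.get(
    f"{BASE_URL}/{ENTITY_SET}",
    params={"$filter": odata_filter, "$format": "json"},
    auth=("<sap-user>", "<sap-password>"),
    timeout=60,
)
response.raise_for_status()
print(len(response.json()["d"]["results"]), "changed records")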

SAP Upgrade Impact

For SAP customers who are running SAP ECC 6.0 or prior (SAP Business Suite), a concern would be the upgrade impact on the SAP data extraction mechanism that is being established. This challenge may lead to a solution that avoids database-level extraction, because major changes to the database schema can be expected when upgrading to S/4HANA.

Decision Tree

Taking into account the solution considerations above and considering practical aspects of SAP systems, we have created a decision tree (below) to help guide customers to choose which method is appropriate to extract your SAP Data.

An important practical consideration is SAP Gateway availability. SAP Gateway allows you to leverage the OData protocol to consume SAP data via RESTful APIs. OData (Open Data Protocol) is an OASIS standard that is ISO/IEC approved and runs over HTTPS. It supports secure connectivity over the internet as well as a hybrid multi-cloud construct, with the capability to scale with data volume. SAP Gateway provides you with a broad range of options for extracting SAP data without restricting you to a legacy protocol such as RFC or IDOC.

  • If you have SAP Gateway, the next consideration is the SAP ERP version that you are currently running:
    • If you are running the latest SAP S/4HANA, you will have many prebuilt OData services that you can leverage for extraction; the latest S/4HANA releases ship with more than 2,000 prebuilt OData services. Most of these OData services are built with the Fiori user interface in mind. For large data extractions you may still want to leverage the SAP BW extractors through ODP because they include delta, monitoring, and troubleshooting mechanisms. SAP BW extractors provide application context, thus reducing transformation work at the target system or data lake.
    • If you are running on ECC 6.0 EHP7/8, you will have limited prebuilt OData services, but you can still leverage SAP BW extractors through ODP for most of the extraction.
  • If you do not have SAP Gateway, you are most likely running SAP ECC 6.0 EHP8 or prior. You may be concerned about the impact on your extraction mechanisms after you perform an SAP upgrade. To minimize this impact, we recommend using the standard SAP BW extractors through ODP, standard BAPIs, or standard IDOCs.
    • Custom BW extractors, BAPIs, IDOCs, database, and file extraction methods are viable; however, they may increase your total cost of ownership (TCO) because you will have to build, operate, and maintain the custom code yourself.
    • You can still use RFCs/BAPIs and IDOCs in S/4HANA; however, since these are legacy protocols built for LAN and WAN environments, your extraction tool choices and network connectivity options may be restricted. There will be challenges traversing the internet, and these protocols may not perform optimally in a hybrid cloud environment. The recommendation is therefore to consider OData as the first choice because it is an open protocol that is flexible to implement and is supported in a hybrid multi-cloud environment.

Decision Tree of SAP Data Extraction

Figure 1. Guideline Decision Tree for Extracting SAP Data with AWS Services.

Architecture Design Pattern Characteristics

Below is a summary of the Architecture Design Patterns and their characteristics that are tagged in the decision tree above. This will help you decide on an extraction method for your SAP Data.

S/4HANA or ECC 6.0 EHP7/8, OData, with SAP Gateway (middleware/services: Amazon AppFlow, AWS Glue/Lambda, SAP Data Intelligence, SAP Data Services)

A1. S/4HANA or ECC 6.0 EHP7/8 with pre-built OData services. Extraction method: pre-built standard OData services. Delta handling: consider a timestamp field.
  • Amazon AppFlow is a serverless, no-code managed AWS service that can extract from and write back to SAP.
  • AWS Glue/Lambda require you to deploy code, and to maintain and upgrade it when necessary.
  • The SAP Data Intelligence subscription is part of SAP BTP (Business Technology Platform) with a pay-per-use model; SAP Data Services requires a perpetual license.

A2. S/4HANA or ECC 6.0 EHP7/8 with data extractors (BW extractors) through OData. Extraction method: standard BW extractors (ODP based). Delta handling: delta is handled within ODP.
  • Low upgrade impact because the BW extractors are standard.

A3. S/4HANA or ECC 6.0 EHP7/8 with custom OData services. Extraction method: custom OData (ABAP CDS view). Delta handling: consider a timestamp field.
  • Custom ABAP CDS views and custom OData services will require maintenance fixes, especially during upgrades.

ECC 6.0 EHP8 or prior, RFC, no SAP Gateway (middleware/services: Amazon AppFlow, AWS Glue/Lambda, SAP Data Services)

A4. ECC 6.0 EHP7/8 or earlier with data extractors (BW extractors) through RFC. Extraction method: standard BW extractors (ODP based, with delta handled within ODP) or custom BW extractors (delta to be built within the extractors).

A5. ECC 6.0 EHP7/8 or earlier with BAPI through RFC. Extraction method: standard BAPI or custom BAPI. Delta handling: consider a timestamp field.
  • AWS Glue/Lambda require you to deploy code, and to maintain and upgrade it when necessary.
  • SAP Data Services requires a perpetual license.
  • Any custom BW extractors and BAPIs require you to develop the code, and to maintain and modify/upgrade it when necessary.

ECC 6.0 EHP8 or prior, HTTP-XML, no SAP Gateway (middleware/services: Amazon API Gateway/AWS Lambda)

A6. Any version of ECC or S/4HANA with IDOCs. Extraction method: standard IDOCs or custom IDOCs. Delta handling: delta is handled within IDOCs.
  • Maintenance of IDOCs requires SAP knowledge. If your system is S/4HANA, we recommend using OData, which provides better options and limits upgrade impact for further enhancements.
  • IDOCs can process near real-time pushes of delta changes as well as batches.
  • AWS Lambda functions require you to develop code, and to maintain and upgrade it when necessary.
  • Custom IDOCs require you to develop code, and to maintain and upgrade it when necessary.

ECC 6.0 EHP8 or prior, JDBC, no SAP Gateway (middleware/services: AWS Glue/Lambda)

A7. Any version of ECC or S/4HANA with database-level extraction. Extraction method: database. Delta handling: consider a timestamp field.
  • Data structures at the database level have limited or no application context. SAP application knowledge is required to transform the extracted data at the target.
  • The SAP ECC or S/4HANA database license is a runtime license, which limits direct access to the database. This mechanism may require additional database enterprise licenses.
  • Major changes to the database schema should be expected when ECC systems are upgraded to S/4HANA.

ECC 6.0 EHP8 or prior, Files, no SAP Gateway (middleware/services: AWS Glue/Lambda)

A8. ECC 6.0 EHP7/8 or earlier with BAPI through files. Extraction method: flat files. Delta handling: consider a timestamp field.
  • This method can be used in many versions, such as SAP ERP 6.0 and S/4HANA, but it requires custom development and maintenance effort and may be costly to upgrade.
  • If your system is S/4HANA or has been upgraded to S/4HANA, OData-based extraction is recommended instead.

Conclusion

In this blog, we have discussed the Architecture Patterns for extracting SAP Data to AWS. Each of the patterns is described along with their pros and cons based on key considerations such as delta handling, licensing, running costs and upgrade impact. With the decision tree provided you can assess and decide on which pattern is suitable for your scenario.

Here are some further references that you may find useful. They outline more end to end scenarios that become possible once your SAP data has been extracted to AWS.

You can find out more about SAP on AWS, Amazon AppFlow, AWS Glue, AWS Lambda, from the AWS product documentation.

AWS and SAP BTP: driving more value from your SAP ERP journey to the cloud


Feed: AWS for SAP.
Author: Amr Elmeleegy.

This blog is co-authored by Dan Kearnan, Sr. Director SAP Business Technology Platform, SAP and Amr Elmeleegy, World Wide SAP Business Development Manager, AWS.

Why customers are moving ERP to the cloud

In a bid to transition from legacy to modern data platforms, organizations of all stripes and sizes are moving their on-premises ERP systems to the cloud. The rationale is irrefutable: moving to the cloud offers significant IT TCO reduction benefits. However, enterprises moving to the cloud merely for TCO reduction purposes are missing significant untapped business and ROI potential.

Customers like Zalando Payments GmbH (ZPS), Georgia Pacific, Lion, and HPE successfully leveraged the SAP Business Technology Platform (SAP BTP) and AWS cloud services as part of their ERP cloud transformations. The combination of BTP and AWS resulted in benefits that exceeded cost reductions delivered as a result of sharing IT infrastructure using cloud virtualization capabilities. When looking at these key business benefits in detail, there are 3 that stand out that drove significant business value from the move to the cloud. Let’s look at each one in more detail.

Extend your S/4HANA system while keeping the core clean

Enterprises need to stay agile and adapt rapidly to new business conditions and changing customer demands. Extension allows companies to build and enhance all their application investments to meet their customer’s dynamic needs and provide continual value. Customers tell us they want to keep their ERP extensions with them as they move to the cloud. Contrary to conventional wisdom, moving your SAP systems to the cloud doesn’t mean sacrificing your ability to customize and extend your ERP business processes in return for cloud TCO reductions. With SAP BTP and AWS, you can get both cloud economic advantages and flexibility to extend and customize your ERP.

Customers moving their SAP environments to the AWS cloud can leverage a wide variety of SAP BTP and AWS native services to build extensions and customizations while keeping their S/4HANA and ERP digital core clean for faster and cheaper future upgrades. Specifically, customers that have invested heavily in acquiring and training in-house ABAP developers can continue to leverage these resources by directing them to the SAP BTP, ABAP Environment services to build multi-tenant SaaS extensions directly in ABAP language and have them deployed in the AWS cloud as side-by-side extensions. This BTP service comes with an improved ABAP language optimized for the Cloud, more efficient support for developers, better tools for administrators, and migration tools for ERP custom code. The SAP BTP ABAP environment service is only available on the AWS cloud and can be found in AWS Data Centers in Europe (Frankfurt), North America (US East) and Japan (Tokyo).

Customers that do not have in-house ABAP development expertise still have a wide variety of options to extend their SAP environment while moving to the cloud. SAP BTP offers low code/no code services like SAP Appgyver that allow business users to build extensions without code. SAP BTP also offers pro-code services like the SAP Business Application Studio, and packaged business extensions like the SAP S/4HANA Cloud for intelligent intercompany reconciliation.  Today, the SAP Appgyver and S/4HANA Cloud for intelligent intercompany reconciliation are only available on the AWS cloud and the SAP Business Application Studio is available in 8 AWS regions – more than any other RISE cloud provider.

Additionally, SAP on AWS customers can build extensions using AWS native services like AWS Lambda, a serverless, event-driven compute service that lets customers run code for virtually any type of application or backend service without provisioning or managing servers. With AWS Lambda, there are no new programming languages, tools, or frameworks to learn, allowing organizations to take advantage of their existing in-house IT development resources. Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby code, and provides a Runtime API allowing enterprises to use additional programming languages to author their functions.

SAP on AWS customer Bristol Myers Squibb, a global biopharmaceutical company, moved from ECC to S/4HANA on AWS and built half a dozen side-by-side applications using SAP BTP, including apps that create trails for Bill of Material changes for auditing and compliance purposes, and apps that provide MRP controllers with self-service capabilities to update configuration files without the need to reach out to IT.

Embed intelligence into business processes with AI, ML, IoT, and RPA services

Customers tell us that their move to SAP S/4HANA (RISE with SAP) is only the start of their digital transformation journey and not the end. Once customers are on SAP S/4HANA, they immediately start looking for other opportunities to embed intelligence into their finance, supply chain, and order-to-cash business processes. This is where SAP BTP's AI, ML, IoT, and RPA powered services come into play, including services like SAP Conversational AI, which allows customers to build powerful AI chatbots and connect them to SAP S/4HANA (e.g. SAP Fiori Co-Pilot); SAP IoT, which powers SAP's predictive maintenance use case and allows customers to connect IoT sensor data with business objects and processes; and SAP Document Classification, which helps customers apply ML to automate the management and processing of business documents such as contracts and invoices. 11 of the 12 BTP AI, ML, IoT, and RPA powered services are available only on the AWS cloud (see Table 1).

List of SAP BTP AI, ML, IoT and RPA services supported by RISE Cloud Providers

Table 1: List of SAP BTP AI, ML, IoT and RPA services supported by RISE Cloud Providers. For full list of BTP services visit https://discovery-center.cloud.sap

SAP on AWS customers like HPE leveraged SAP BTP Conversational AI to integrate chatbots with SAP Governance, Risk, and Compliance (GRC) solutions, making it more convenient for users to request authorizations and reset passwords while adhering to the HPE audit policy. Lion – a global beverage company based in Sydney, Australia, and an SAP and AWS customer – leveraged SAP BTP and native AWS IoT services to help its dairy farmers reduce milk spoilage due to temperature fluctuations by capturing IoT sensor data from their milk containers and feeding it back into SAP systems, saving hundreds of thousands of dollars a year in milk spoilage.

Delivering real-time insights to the fingertips of business users

It’s essential that organizations have a consolidated view across all their data assets and are able to achieve insight and make real time decisions, especially during times of rapid change. SAP customers moving their ERP environments to cloud are able to take advantage of over 100 prebuilt Analytics content packages offered by SAP BTP as well as the SAP Analytics Cloud. SAP Analytics Cloud offers SAP customers the unique ability to combine analytics and planning together to fully unlock the value of their SAP operational data. SAP customers use SAP Analytics Cloud to help business users make decisions rooted in real time operational and transactional data coming out of SAP ERP and S/4HANA. SAP Business users rely on SAP Analytics Cloud every day to answer foundational business questions such as “Who are my top revenue generating customers?”, “What were my fastest growing products last year?”, “What are my top blocked customer invoices?” to name a few. Today, AWS offers the SAP Analytics Cloud BTP service in 8 AWS regions – 4 times more regions than all other RISE cloud providers combined.

In 2019, SAP launched the SAP Analytics Cloud Embedded Edition to allow SAP business users to answer these questions without punching out into a separate standalone analytics solution or moving their mission-critical SAP data between different cloud providers, which exposes it to cyberattacks. This service allows business users to consume SAP Analytics Cloud dashboards and reports directly in their SAP systems, where the transactional data is generated. Today, AWS is the only RISE cloud provider that offers the SAP Analytics Cloud Embedded Edition. The service is available in AWS’s Sydney, São Paulo, Frankfurt, Tokyo, Singapore, and US East regions.

For customers with more advanced analytics requirements, such as leveraging geospatial data for mobile asset tracking or accessing SAP data federated across different SAP cloud solutions, SAP BTP has you covered. Customers can use the SAP HANA Spatial BTP service to embed GPS data into their SAP analytics, and the SAP Graph BTP service to get a single, connected, and unified view of all their business data. Today, AWS is the only cloud provider that offers the SAP Graph and SAP HANA Spatial BTP services.

SAP customers that want to combine SAP data with non-SAP data, or that have prohibitively large data sets, can leverage a wide range of native AWS analytics services, including Amazon Simple Storage Service (Amazon S3) and Amazon Redshift, to build complementary SAP data lakes. SAP on AWS customers can take advantage of SAP Data Warehouse Cloud data federation capabilities to query non-SAP data sitting in an AWS data lake, Amazon S3, or Amazon Redshift environment directly from their SAP Data Warehouse Cloud environment. SAP on AWS customers can also leverage Amazon AppFlow with prebuilt SAP integrations to capture data directly from their SAP systems with a few clicks and without writing code.

Zalando, a leading European online fashion platform, integrated its SAP systems with 36 native AWS services after moving them to the AWS cloud, creating a hybrid data architecture that lowered the total cost of ownership of its SAP data architecture by 30%. This allowed Zalando to invest in building chatbots and introduce image recognition technology to speed up invoice processing, enhancing its customer service. Zalando’s sister company, Zalando Payments GmbH (ZPS), uses SAP Business Technology Platform, SAP BW/4HANA, and SAP Analytics Cloud to calculate how much its accounts hold in third-party funds at any given time and deliver the results to a large circle of internal business users.

SAP on AWS customer Georgia-Pacific saves millions of dollars annually by using AWS data analytics services to collect and analyze shop floor manufacturing data on material quality, moisture content, temperature, and machine calibration to reduce tears during its paper production process, reducing downtime and material waste and increasing profits.

Chart your course to success in the cloud with SAP and AWS

SAP and AWS have delivered groundbreaking co-innovation since 2008. The fact that AWS offers the widest selection of SAP BTP services, in the greatest number of regions, is just one example of our joint innovation. To learn more about how SAP BTP and AWS can help you accelerate your journey to the cloud, watch the short video below, check out the infographic below, and visit us at aws.amazon.com/sap.

* All SAP BTP data referenced in this blog is as of 08/18/2022. Refer to the SAP Discovery Center for the latest list of SAP BTP services and Cloud Service Provider (CSP) availability.

SAP HANA Fast Restart Option on AWS

$
0
0

Feed: AWS for SAP.
Author: Rozal Singh.

Over the years, enterprises have enjoyed the power of in-memory SAP HANA databases to help with the performance of critical business processes. However, as their usage and data footprint increase, one operational challenge is the startup time required for data to be loaded into memory following an application restart. This requirement results in longer system outages for some patching and SAP HANA based failure scenarios.

As an example, SAP Administrators are often asked by the business – “How much downtime do you need to apply a particular SAP HANA SP Stack upgrade?” In the breakdown of the downtime activities, the SAP HANA startup time could consume a significant portion of the downtime window, often in the range of 10-30 minutes or more depending on the database size.

With SAP HANA 2.0 SPS04 and later, SAP introduced the Fast Restart Option, which leverages the native Linux tmpfs feature to help enterprises reduce business downtime by significantly reducing startup times. The feature is quick to implement and requires no additional resources, so it should be evaluated for all SAP HANA based workloads where startup time is important.

In this blog, we will explain the concept, demonstrate the difference in startup times and provide details for implementation. The SAP HANA Fast Restart feature can be implemented for SAP HANA databases running in AWS on SAP certified Amazon Elastic Compute Cloud (EC2) instances.

To understand the solution, let’s first look at SAP HANA Memory Management.

The SAP HANA database supports two types of table storage: column store and row store. SAP has optimized HANA for the column store, which is the default.

The column store is made up of two data structures – MAIN and DELTA. MAIN storage is compressed and optimized for read operations whereas DELTA storage is the target for all write operations. Data changes are moved from DELTA to MAIN storage via the delta merge operation. More details can be found in the SAP Documentation: Memory Management in the Column Store
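
To see how much data currently sits in MAIN versus DELTA, and to trigger a delta merge manually, you can query the M_CS_TABLES monitoring view from hdbsql. This is a minimal sketch only; the DBACOCKPIT schema matches the test setup later in this post, while the LOADGEN table name is purely illustrative.

# Show MAIN and DELTA memory footprint per column table in an illustrative schema
hdbsql -u system -p <password> "SELECT table_name, memory_size_in_main, memory_size_in_delta FROM M_CS_TABLES WHERE schema_name = 'DBACOCKPIT' ORDER BY memory_size_in_delta DESC;"

# Manually move pending changes from DELTA to MAIN for one table (illustrative name)
hdbsql -u system -p <password> "MERGE DELTA OF DBACOCKPIT.LOADGEN;"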

The SAP HANA Fast Restart option uses tmpfs storage, a volatile temporary file system that resides in RAM, to preserve and reuse MAIN data fragments. This is effective in minimising memory load times in cases where the operating system is not restarted, applicable to the following scenarios.

●       SAP HANA Restart

●       SAP HANA Service Restart, including Index Server Crash.

●       SAP HANA Upgrade/Service Pack

Fig. 1 shows how memory is used as well as how tmpfs can grow and shrink dynamically. The following three parameters are important for the setup:

1)    basepath_persistent_memory_volumes: The location of the tmpfs filesystems.

2)    persistent_memory_global_allocation_limit: By default, no limit is specified. You have the option to limit the maximum size of the persistent memory on a host.

3)    table_default: By default, the value is ON. You have the option to set it to OFF and manually control the persistent memory usage at any of the three levels – table, partition or column using the PERSISTENT MEMORY switch.

More information on the above parameters and their usage can be found in SAP Documentation: Persistent Memory
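
If you set table_default to OFF, the PERSISTENT MEMORY switch lets you opt individual objects in or out. The hdbsql call below is a sketch under that assumption, using an illustrative table name; confirm the exact partition- and column-level syntax against the SAP Persistent Memory documentation above before using it.

# Enable persistent memory (tmpfs-backed MAIN storage) for a single table - illustrative name
hdbsql -u system -p <password> "ALTER TABLE DBACOCKPIT.LOADGEN PERSISTENT MEMORY ON IMMEDIATE CASCADE;"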


Fig. 1 SAP HANA Fast Restart

Every customer on SAP HANA 2.0 SPS04 or higher should consider implementing this feature, as there is no impact on SAP HANA online performance or sizing KPIs. To maintain consistency across the landscape, enable SAP HANA Fast Restart across all your system environments, non-production and production.

For testing purposes, I created multiple tables with test data in the DBACOCKPIT schema. Let’s look at the restart times for a database size of 1,062 GB on an x2idn.32xlarge EC2 instance with 128 vCPUs and 2,048 GiB of memory, with and without SAP HANA Fast Restart.

Operating System: SUSE Linux Enterprise Server 12 SP4
SAP HANA Version: 2.00.050.00.1592305219 (fa/hana2sp05)
HANA DB Size: 1,062 GB
Instance Type: x2idn.32xlarge
Storage Type: gp2/gp3 (configuration based on SAP HANA on AWS storage configuration guide)

The data size on disk and the memory can be seen below:

SAP HANA Database Size

Startup load time without Fast Restart:

Index Server Trace:

[57833]{-1}[12/-1] 2022-08-03 08:53:44.790361 i TableReload      TRexApiSystem.cpp(00376) : Starting preload of table HDB::DBACOCKPIT:LOADGENen
[57831]{-1}[-1/-1] 2022-08-03 09:16:28.450727 i Service_Startup  TREXIndexServer.cpp(02059) : Pre-/Re-Loading of column store tables finished.

It took 23 minutes to completely load the column store.

Startup load time with Fast Restart:

Index Server Trace:

[77218]{-1}[13/-1] 2022-08-03 09:25:26.339358 i TableReload      TRexApiSystem.cpp(00376) : Starting preload of table HDB::DBACOCKPIT:LOADGENen
[79544]{-1}[-1/-1] 2022-08-03 09:26:28.037447 i Service_Startup  TREXIndexServer.cpp(02059) : Pre-/Re-Loading of column store tables finished.

It took 1 minute to completely load the column store.

Result: Using Fast Restart Option, HANA STARTUP load time reduced significantly. In this case the startup time reduced from 23 minutes without Fast Restart to 1 minute with Fast Restart.
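
If you want to reproduce this measurement on your own system, the preload start and finish events can be pulled straight from the indexserver trace files. A minimal sketch, assuming the default trace directory layout and placeholder values for <SID>, <nr>, and <hostname>:

# Show when column store preload started and finished (placeholders: <SID>, <nr>, <hostname>)
grep -h -e "Starting preload of table" -e "Pre-/Re-Loading of column store tables finished" /usr/sap/<SID>/HDB<nr>/<hostname>/trace/indexserver_*.trc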

Greater benefits often come with greater complexity and cost, but that is not the case here: Fast Restart is easy to implement and is included with SAP HANA 2.0 SPS04 and higher at no additional cost.

Steps to implement SAP HANA Fast Restart on an SAP HANA certified EC2 instance:

Step 1 – Determine how much memory each CPU socket has:

cat /sys/devices/system/node/node*/meminfo | grep MemTotal | awk 'BEGIN {printf "%10s | %20s\n", "NUMA NODE", "MEMORY GB"; while (i++ < 33) printf "-"; printf "\n"} {printf "%10d | %20.3f\n", $2, $4/1048576}'

Example output:

NUMA NODE |              MEMORY GB
--------------------------------------------
        0 |               1000.034
        1 |               1000.067

Step 2 – Create the mount points. Create one tmpfs mount per NUMA node. I am creating 2 mount points because x2idn.32xlarge has 2 NUMA nodes with 1,000 GB of memory each.

mkdir -p /hana/tmpfs0/<SID>
mkdir -p /hana/tmpfs1/<SID>
chown -R <sid>adm:sapsys /hana/tmpfs*/<SID>
chmod 777 -R /hana/tmpfs*/<SID>

Step 3 – Add the following lines to /etc/fstab

tmpfs<SID>0 /hana/tmpfs0/<SID> tmpfs rw,relatime,mpol=prefer:0
tmpfs<SID>1 /hana/tmpfs1/<SID> tmpfs rw,relatime,mpol=prefer:1

Step 4 (Optional) – To limit the memory allocated to the tmpfs filesystems, pass the “size” mount option. In the example below I am limiting each filesystem to 250 GB.

tmpfs<SID>0 /hana/tmpfs0/<SID> tmpfs rw,relatime,mpol=prefer:0,size=250G
tmpfs<SID>1 /hana/tmpfs1/<SID> tmpfs rw,relatime,mpol=prefer:1,size=250G

Step 5 – Mount the filesystems you have just added to /etc/fstab

mount -a

Step 6 – Alter the following parameters using HANA Studio or hdbsql (run as the <sid>adm user).

hdbsql -u system -p <password> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') set ('persistence','basepath_persistent_memory_volumes') = '/hana/tmpfs0/<SID>;/hana/tmpfs1/<SID>' with reconfigure;"

hdbsql -u system -p <password> "ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('persistent_memory', 'table_default') = 'on' WITH RECONFIGURE;"
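
To confirm the changes from Step 6 took effect, you can read the configuration back from the M_INIFILE_CONTENTS monitoring view. A minimal sketch, assuming the same SYSTEM user credentials used above:

# Verify the Fast Restart related parameters at the SYSTEM layer
hdbsql -u system -p <password> "SELECT file_name, section, key, value FROM M_INIFILE_CONTENTS WHERE layer_name = 'SYSTEM' AND key IN ('basepath_persistent_memory_volumes', 'table_default');"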

Step 7 (Optional) – Alter (or use default) parameters related to persistent_memory_global_allocation_limit and table_default. I am using default values for both the parameters.

persistent_memory_global_allocation_limit = Max Size (no limit is specified – default)
table_default = ON (default)
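
If you prefer not to rely on the defaults, both parameters can be changed with hdbsql in the same way as in Step 6. The command below is a sketch only: it assumes persistent_memory_global_allocation_limit lives in the memorymanager section of global.ini and takes a value in MB, so verify the section name and unit against the SAP Persistent Memory documentation before applying it.

# Illustrative example: cap persistent memory usage at roughly 250 GB per host (value assumed to be in MB)
hdbsql -u system -p <password> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('memorymanager','persistent_memory_global_allocation_limit') = '256000' WITH RECONFIGURE;"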

Step 8 – Restart SAP HANA in order to activate the changes

HDB stop
HDB start

Step 9 – Check tmpfs consumption.

Note: The first restart after completing the configuration above will take the same amount of time as without tmpfs. Subsequent restarts will be faster.

df -h | head -1; df -h | grep tmpfs<SID>

tmpfs consumption

Step 10 – Check table consistency (optional)

CALL CHECK_TABLE_CONSISTENCY('CHECK_PERSISTENT_MEMORY_CHECKSUM', NULL, NULL);

The setup of the SAP HANA Fast Restart Option creates an association between the tmpfs configuration of your SAP HANA system and the CPU and memory specifications of the EC2 instance. If you want to retain the flexibility to change instance sizes, consider setting up a systemd service to re-size the tmpfs filesystems. The following steps provide sample guidance for doing this.

Step 1 – Navigate to /etc/systemd/system

Step 2 – Create your service. E.g. sap-hana-tmpfs.service. You can use this script as an example to create your service:

https://github.com/aws-quickstart/quickstart-sap-hana/blob/main/scripts/sap-hana-tmpfs.service

Step 3 – Create the script which is called by the service created in Step 2. E.g. sap-hana-tmpfs.sh script at /etc/rc.d/sap-hana-tmpfs.sh. You can use this script as an example:

https://github.com/aws-quickstart/quickstart-sap-hana/blob/main/scripts/sap-hana-tmpfs.sh

Step 4 – Reload the service files to include the new service:

sudo systemctl daemon-reload

Step 5 – Start the service:

sudo systemctl start <your_service_name>

Step 6 – Check status of your service

sudo systemctl status <your_service_name>

Note: After changing the instance type, check the SAP HANA parameters manually to ensure the configuration still matches your requirements.

For enterprises running their mission critical applications on SAP HANA Databases, minimizing downtime is key and leveraging the SAP HANA Fast Restart Option can help to reduce the business downtime without modifying the underlying infrastructure.

In addition to your customer account team and AWS Support channels, we have recently launched AWS re:Post, a reimagined Q&A experience for the AWS community. Our SAP on AWS Solution Architecture team regularly monitors the SAP on AWS topic for discussions and questions that could be answered to assist our customers and partners. If your question is not support-related, consider joining the discussion over at re:Post and adding to the community knowledge base.

To learn why AWS is the platform of choice and innovation for more than 5000 active SAP customers, visit the SAP on AWS page.
