
Amazon EC2 High Memory instances for SAP HANA: simple, flexible, powerful


Feed: AWS for SAP.
Author: Steven Jones.

Steven Jones is a Technology Director and Global Technical Lead for the AWS Partner Organization.

At Amazon, we always try to start with the customer’s needs and work backward to build our products and services. Back in 2017, our customers, who were already running production deployments of SAP HANA on Amazon Elastic Compute Cloud (Amazon EC2) X1e instances with 4 TB memory, needed to support the growth of their enterprise data. So they started asking us for Amazon EC2 instances with even larger amounts of RAM.

We asked our customers what features and capabilities were most important to them. Consistent feedback was that they expected the same, familiar experience of running SAP HANA on Amazon Web Services (AWS). They especially wanted the ability to use the same network and security constructs like Amazon Virtual Private Cloud (Amazon VPC), security groups, AWS Identity and Access Management (IAM), and AWS CloudTrail; to manage these systems via APIs and the AWS Management Console; to use elastic storage on Amazon Elastic Block Store (Amazon EBS); and to be able to scale easily when needed.

In a nutshell, customers told us they didn’t want to compromise on performance, elasticity, and flexibility just to run larger instances.

Breaking the mold

We started our journey with the mission to build a product that could meet these requirements and delight our customers. In the fall of 2018, we announced the general availability of Amazon EC2 High Memory instances with up to 12 TB of memory, certified by SAP and ready for mission-critical SAP HANA workloads. Today, Amazon EC2 High Memory instances are available in three sizes—with 6 TB, 9 TB, and 12 TB of memory. You can launch these EC2 bare metal instances within your existing AWS environments using the AWS Command Line Interface (AWS CLI) and/or AWS SDK, and connect to other AWS services seamlessly.

In this blog post, I’ll discuss some of the key attributes that our customers love about EC2 High Memory instances.

These Amazon EC2 High Memory instances are powered by what we call the Nitro system, which includes dedicated hardware accelerators that provide and manage connectivity to Amazon VPC and Amazon EBS. By offloading functions that have traditionally been handled by a hypervisor, the Nitro system gives applications on these bare metal instances direct access to the underlying physical hardware. At the same time, it enables full and seamless integration of these instances into the broader range of AWS services.

EC2 virtual and bare metal instances

Running SAP HANA on these instances in the same virtual private cloud (VPC) as your application servers gives you ultra-low latency between your database and application tiers, along with consistent, predictable performance.

Running your database and application servers in close proximity offers the best outcome for running your SAP estate, including the SAP HANA database, in the cloud. High Memory instances support the AWS CLI/SDK for launching, managing, and resizing instances; provide elastic storage capacity through Amazon EBS; and benefit from direct connectivity to other AWS services.

Traditional deployments vs best outcome

The Nitro system enables EC2 High Memory instances to operate as fully integrated EC2 instances, while presenting them as bare-metal servers. All the CPU and memory on the host are directly available for your SAP workloads without a hypervisor, allowing for maximum performance. Each EC2 High Memory instance size is offered on an 8-socket host server platform powered by Intel® Xeon® Platinum 8176M (Skylake) processors. The platform provides a total of 448 logical processors that offer 480,600 SAP Application Performance Standard (SAPS). We’ve published both ERP (Sales & Distribution) and BW on HANA benchmarks to transparently disclose the performance of this platform for both OLTP and OLAP SAP HANA workloads.

EC2 High Memory instances are also Amazon EBS-optimized by default, and offer 14 Gbps of dedicated storage bandwidth to both encrypted and unencrypted EBS volumes. These instances deliver high networking throughput and low latency with 25 Gbps of aggregate network bandwidth using Elastic Network Adapter (ENA)-based Enhanced Networking.

Specifications for the High Memory instances

Finally, if you want to implement a very large, compute-heavy S/4HANA system with high memory requirements, you now have the option of running S/4HANA in scale-out mode on EC2 High Memory instances. You can scale out to as many as 4 nodes of 12 TB High Memory instances. In total, this provides up to 48 TB of memory and 1,792 logical processors/1.9 million SAPS, an unprecedented option in the cloud. For more information, see Announcing support for extremely large S/4HANA deployments on AWS.

Unprecedented flexibility

Our customers love the ability to size their infrastructure on AWS based on current needs rather than overprovisioning up front to meet future demands. EC2 High Memory instances provide the same scalability for SAP HANA workloads as our virtualized EC2 instances: start with what you need now, and scale as your needs dictate.

For example, you can start with a 6 TB EC2 High Memory instance now and, if your database grows within 6 months, resize it to a 9 TB or 12 TB instance with a few API calls. Because the persistent block storage on the back end is Amazon EBS, it too can be extended as needed with a few API calls. With other private hosting options, a change like this typically requires lengthy outages and shuttling data around to migrate servers.
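
As a rough sketch of what "a few API calls" can look like with the AWS SDK for Python (Boto3), the following resizes a High Memory instance and grows one of its EBS volumes. The instance ID, volume ID, and target instance type are placeholders; check the EC2 High Memory instance documentation for the exact procedure that applies to your reservation and host configuration.

import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder: your High Memory instance
VOLUME_ID = "vol-0123456789abcdef0"   # placeholder: one of its EBS data volumes

# Stop the instance, change its size (for example, 6 TB to 12 TB), and start it again
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])
ec2.modify_instance_attribute(InstanceId=INSTANCE_ID,
                              InstanceType={"Value": "u-12tb1.metal"})
ec2.start_instances(InstanceIds=[INSTANCE_ID])

# Grow an EBS volume online; extend the file system inside the OS afterward
ec2.modify_volume(VolumeId=VOLUME_ID, Size=16384)  # new size in GiB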

The following diagram shows an example of resizing a 6 TB High Memory instance to a 12 TB High Memory instance in minutes. To see how simple this really is, watch this segment from a demo with Whirlpool from AWS re:Invent 2018. Also, learn how Whirlpool is using EC2 High Memory instances in an innovative way.

Resizing a 6 TB High Memory instance to a 12 TB High Memory instance

Commercially, these instances are available on a 3-year reservation, with the flexibility to move to larger sizes during the 3-year reservation period. This flexibility offers the best total cost of ownership (TCO) and prevents overprovisioning. You can start with an instance size that meets your database sizing requirements today, and then move to larger instance sizes when the growth in your database requires it. Pay only for what you need today, not for what you might need a year or two from now.

A fully integrated experience

When it comes to management, you might think that because these are bare metal instances, they need to be managed or architected differently. Not so! You can use the AWS CLI/SDK and the AWS Management Console, and you can apply your existing AWS architecture patterns, frameworks, and processes to secure, maintain, and monitor your SAP HANA instances running on EC2 High Memory instances.

For example, because these instances are natively integrated with all other AWS services, you can use services such as the following:

  • IAM to securely manage the access to your EC2 High Memory resources.
  • Amazon CloudWatch to monitor your instance (a minimal example follows this list).
  • AWS Systems Manager to gain operational insights.
  • AWS CloudTrail for governance and compliance.
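
For instance, here is a hedged sketch, using the AWS SDK for Python (Boto3), of a CloudWatch alarm on CPU utilization for a HANA host. The instance ID and SNS topic ARN are placeholders, and the threshold is only illustrative.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="hana-host-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=300,                      # evaluate 5-minute averages
    EvaluationPeriods=3,             # alarm after 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:sap-ops-alerts"],   # placeholder SNS topic
)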

And finally, the truly transformative capabilities come from being able to seamlessly integrate with other AWS services, such as Amazon SageMaker for machine learning or the AWS IoT services.

Full integration with AWS services

If you’re ready to get started, you have several options to migrate your existing workload to EC2 High Memory instances. Build your new systems with a few API calls, or use an Amazon Machine Image (AMI) or one of the available AWS Quick Starts for SAP. Then, follow the SAP System Migration guidelines by using SAP HANA system replication, database export, or backup/restore.

To further minimize system downtime during migration, use our SAP Rapid Migration Test program (also known as FAST). Use downtime and cost-optimized options to build a resilient environment that meets your high availability and disaster recovery requirements with EC2 High Memory instances as well. See our SAP on AWS technical documentation site to find resources on migration and other operational aspects for running SAP HANA on AWS.

Summary

AWS pioneered running the SAP HANA database in the cloud, and today continues to offer the most comprehensive portfolio of instances and certified configurations. Here is a quick view of our SAP-certified scale-up and scale-out deployment options for EC2 instances running SAP HANA OLAP and OLTP workloads. Later this fall we will be releasing two additional sizes with 18 TB and 24 TB of RAM to give you even more options for large scale-up workloads.

Scale-up and scale-out options

SAPPHIRE 2019 – If you are at the SAPPHIRE NOW 2019 conference at Orlando, stop by booth #2000 to learn more about Amazon EC2 High Memory instances. See them in action with live demos, and talk to one of our solutions architects to learn more about how easy it is to get started. We also have several other exciting things to share during SAPPHIRE to help you use your investments in SAP workloads beyond just infrastructure. For more information on where to find us, see the Amazon Web Services at SAPPHIRE NOW website. Not attending SAPPHIRE NOW? Feel free to contact us directly for more information. Stay tuned for more exciting news, and register for one of our upcoming webinars. Build on!


Accelerate your innovations by using SAP Cloud Platform on AWS


Feed: AWS for SAP.
Author: KK Ramamoorthy.

The business benefits of running SAP workloads on AWS are already well proven, with thousands of customers now running such workloads. Tangible benefits like those experienced by ENGIE, an international energy provider, include not only cost savings but also flexibility and speed. For example, as mentioned in an ENGIE case study, ENGIE cut the expected delivery time of new business frameworks in half when it upgraded its SAP platform to SAP S/4HANA on AWS. ENGIE did this all while rightsizing its HANA infrastructure by using AWS-enabled high availability architecture patterns.

Although these are very tangible business benefits, customers are also increasingly looking to drive business innovation by extending core SAP business processes in the areas of big data and analytics, Internet of Things (IoT), apps and APIs, DevOps, and machine learning. In fact, we discussed this SAP extension approach using AWS native services in the Beyond infrastructure: How to approach business transformation at startup speed blog post last year. Over the past year, we've been working with customers on many of the approaches detailed in that post.

As more and more customers move their SAP estates to AWS, they have also asked for help with additional reference architectures and integration patterns to extend these significant investments, through a combination of SAP Cloud Platform and AWS services.

A solid foundation for building SAP extensions

A building is only as strong as its foundation, and the same is true for any technology platform. As detailed in the SAP Cloud Platform Regions and Service Portfolio document, more than 160 SAP Cloud Platform services are now supported across seven global AWS Regions (Montreal, Virginia, Sao Paulo, Frankfurt, Tokyo, Singapore, and Sydney). Of these, 37 services, including some of the more foundational digital transformation services like SAP Leonardo Machine Learning Foundation, SAP Leonardo IoT, and SAP Cloud Platform Enterprise Messaging, run exclusively on AWS infrastructure. Global availability, scalability, and elasticity are vital components of any platform as a service (PaaS). With the depth and breadth of SAP Cloud Platform services on AWS, you now have unparalleled opportunities to build SAP extensions on a solid infrastructure foundation.


Interoperability between platforms

You also have multiple options to extend, integrate, and interoperate between AWS services and SAP Cloud Platform, beyond just the services provided natively via SAP Cloud Platform. Let’s look at a few examples.

Simplify cross-cloud connectivity using SAP Cloud Platform Open Connectors

SAP Cloud Platform Open Connectors provides pre-built connectors to simplify connectivity with AWS services and to consume higher-level APIs. This service abstracts cross-cloud authentication and connectivity details, so your developers can focus on building business solutions and not worry about lower-level integration services.

For example, using Open Connectors, you can integrate Amazon DynamoDB with your web applications running on SAP Cloud Platform. Another example is integrating higher-level artificial intelligence (AI) services like Amazon Rekognition with predictive analytics solutions on SAP Cloud Platform.

Getting started with Open Connectors is easy. You can create a new connector or use an existing one.

To access an AWS service—for example, an Amazon Simple Storage Service (Amazon S3) bucket:

  1. Launch the Open Connectors configuration from the SAP Cloud Platform console, and then provide the service endpoint URL.
  2. In Authentication, choose awsv4, and then provide the required AWS authentication information.

Now you can access the AWS service as REST API calls in your applications and other services in SAP Cloud Platform.
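
Under the hood, the awsv4 authentication type signs each request with AWS Signature Version 4. The following is a minimal sketch, not Open Connectors code, of what such a signed GET against a hypothetical S3 bucket endpoint looks like in Python with botocore and requests; the bucket name and Region are assumptions.

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = "us-east-1"                                            # assumption: your bucket's Region
url = "https://my-example-bucket.s3.us-east-1.amazonaws.com/"   # hypothetical bucket endpoint

# Sign the request with the caller's AWS credentials (Signature Version 4)
credentials = boto3.Session().get_credentials().get_frozen_credentials()
request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "s3", region).add_auth(request)

# Send the signed request; a 200 response returns the bucket listing as XML
response = requests.get(url, headers=dict(request.headers))
print(response.status_code)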


Integrate SAP Cloud Platform API Management using AWS Lambda

AWS developers can also consume SAP Cloud Platform services by using AWS Lambda and SAP Cloud Platform API Management. This pattern is especially attractive for customers who want to mesh business processes powered by SAP S/4HANA applications with AWS services by using SAP Cloud Platform Connectivity.

This pattern also opens up access to other higher-level services running on SAP Cloud Platform, and connectivity to other software-as-a-service (SaaS) applications such as SAP SuccessFactors, SAP Concur, and SAP Ariba. For example, let’s say you want to build a voice-enabled application for accessing inventory information from a backend SAP enterprise resource planning (ERP) application running on Amazon Elastic Compute Cloud (Amazon EC2). You can expose the inventory data as APIs using SAP Cloud Platform API Management, and consume it in a Lambda function over HTTPS. Then, you can create an Alexa skill and connect it to this Lambda function, to provide your users with functionality for voice-enabled inventory management.
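
To make the Lambda side of this pattern concrete, here is a hedged sketch of a handler that calls an inventory API exposed through SAP Cloud Platform API Management over HTTPS. The endpoint URL, API key header, and response shape are hypothetical; use whatever your own API product defines. An Alexa skill handler would wrap the returned payload in speech output.

import json
import os
import urllib.request

API_URL = os.environ.get("SAP_APIM_URL", "https://example.apimanagement.hana.ondemand.com/inventory")  # hypothetical
API_KEY = os.environ.get("SAP_APIM_KEY", "replace-me")  # hypothetical API key

def lambda_handler(event, context):
    material = event.get("material", "MAT-001")        # e.g. passed in from an Alexa skill
    req = urllib.request.Request(
        f"{API_URL}?material={material}",
        headers={"APIKey": API_KEY, "Accept": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        inventory = json.loads(resp.read())
    return {"material": material, "inventory": inventory}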


See it at SAPPHIRE NOW

These are just a few examples of how to start integrating SAP Cloud Platform services with AWS services. Want to learn more? Stay tuned for a special blog series devoted to this topic and, if you are at SAPPHIRE NOW 2019, come visit our Build On bar in AWS booth 2000. You will experience feature-rich demos and can talk 1:1 with an SAP on AWS expert, to learn more about building your SAP innovation journeys on AWS. For more information on where to find us, see the Amazon Web Services at SAPPHIRE NOW website. Not attending SAPPHIRE NOW? Feel free to contact us directly for more information. Hope to see you soon, and Build On!

AWS and SAP announce IoT interoperability solution


Feed: AWS for SAP.
Author: KK Ramamoorthy.

Co-authored by KK Ramamoorthy, Principal Partner Solutions Architect, and Brett Francis, Principal Product Solutions Architect

Today, SAP announced its collaboration with Amazon Web Services (AWS) on IoT and the general availability of interoperability between SAP Leonardo IoT and AWS IoT Core. This collaboration makes it straightforward and cost-effective for you to deploy IoT solutions using the global scalability of the AWS IoT platform and business processes powered by SAP Leonardo IoT. The collaboration provides two new interoperability options.

The cloud-to-cloud option, which integrates SAP Leonardo IoT with AWS IoT Core, is generally available now. With this option, you can build SAP Leonardo IoT solutions that connect to backend intelligent suite solutions like SAP S/4HANA and to AWS IoT with a click of a button. Deployed device models in SAP Leonardo IoT are synced with the AWS IoT device registry and establish a one-to-one connection with SAP business processes. Without a single line of code written, customer data from IoT sensors is received by the AWS IoT platform, aggregated based on business rules established by the thing model, and posted to SAP Leonardo IoT.

The edge-to-edge option (coming soon) enables SAP business processes to execute locally with AWS IoT Greengrass. Essential business function (EBF) modules based on SAP Leonardo IoT Edge will run within the AWS IoT Greengrass environment, reducing latency while optimizing usage of bandwidth and connectivity. The EBF modules extend Intelligent Enterprise business processes to the edge.

While cloud-to-cloud interoperability combines the power of both the AWS and SAP cloud solutions, customers are also increasingly looking to bring the power of the cloud to the edge. They want to measure, sense, and act upon data locally while using the cloud as a control plane for deployments and security. This is especially true for business processes that run where internet connectivity is poor or nonexistent, or that require split-second local processing, such as running machine learning inference. A company can take advantage of AWS IoT Greengrass to ensure local data connections are not lost, and then it can use AWS IoT Core to process and aggregate data from multiple, remote facilities.

With this collaboration, SAP will bring the power of SAP’s EBF modules based on SAP Leonardo IoT Edge to AWS IoT Greengrass. Our joint customers now will be able to use AWS IoT as the control plane to deploy powerful SAP edge solutions. For example, an Oil & Gas company will be able to ingest data from various sensors in their oil rigs using AWS IoT Greengrass and use SAP’s EBF modules to execute business processes locally.

Enterprises are constantly looking at ways to improve process efficiency, reduce cost, meet compliance requirements, and develop newer business models by having access to data in real time. Data generated by IoT sensors can provide valuable insights and help line of business owners make meaningful decisions, faster. Consider Bayer Crop Science, a division of Bayer that reduced the time taken to get seed data to analysts from days to a few minutes using AWS IoT. Many other customers are seeing similar business benefits (see the case studies).

However, raw data collected from IoT sensors soon becomes "noise" if it lacks business context. This problem grows exponentially as enterprises deploy millions or even billions of sensors. Until today, such customers had to build costly, custom solutions to marry the sensor data with business context, and they had to build complex custom integrations between IoT platforms and business solutions to make sense of the data.

AWS and SAP are now able to help our joint customers deploy IoT solutions at scale without having to worry about complex custom integrations between solutions. For example, using AWS IoT, a manufacturing company can deploy, secure, and manage sensors on machines in their production lines. They can then ingest sensor telemetry data and seamlessly stream it to SAP Leonardo IoT where business rules can be applied to the data to determine asset utilization, analyze preventive maintenance needs, and identify process optimizations.

Below is a high-level architecture of both cloud-to-cloud and edge-to-edge interoperability options.

interoperability options diagram

Cloud-to-Cloud integration

The interoperability between SAP Leonardo IoT and AWS IoT is achieved by using a set of AWS resources that are automatically provisioned by the SAP Leonardo IoT platform with AWS CloudFormation. These resources enable the ingestion of device telemetry data and stream the data to an Amazon Kinesis stream by using AWS IoT Rules Engine rules.

Device data streamed in Amazon Kinesis is then picked up by AWS Lambda functions and sent to a customer-specific SAP Leonardo IoT endpoint where further business rules and application integrations are implemented. Processing and error logs are written to Amazon CloudWatch.
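
The following is a simplified sketch of that forwarding step: a Lambda function reads telemetry records from the Kinesis stream and posts them to a customer-specific SAP Leonardo IoT ingestion endpoint. The endpoint URL is hypothetical, and the actual functions provisioned by SAP Leonardo IoT will differ in detail.

import base64
import json
import os
import urllib.request

LEONARDO_ENDPOINT = os.environ.get(
    "LEONARDO_INGEST_URL",
    "https://example.leonardo-iot.cfapps.eu10.hana.ondemand.com/ingest")  # hypothetical endpoint

def lambda_handler(event, context):
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])      # raw telemetry JSON
        req = urllib.request.Request(
            LEONARDO_ENDPOINT,
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            print("Forwarded record, status:", resp.status)        # processing logs go to CloudWatch
    return {"forwarded": len(event["Records"])}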

The interoperability automatically sets up cross-account authentication using secure stores in the AWS Cloud and SAP Leonardo IoT. After the initial setup is complete, you can use the Thing Modeler in SAP Leonardo IoT to create a thing model and sync it to AWS IoT to create matching AWS IoT things.

Customers can use AWS IoT Device Management functionality to onboard, monitor, track, and manage the physical devices. As the devices start sending telemetry information to AWS IoT Core, the telemetry information is seamlessly integrated with SAP Leonardo IoT using the resources created during initial setup.

Edge-to-Edge integration (coming soon)

AWS IoT Greengrass extends AWS to edge devices so that the devices can act locally on the data they generate, while still leveraging the cloud for management and durable storage. With the edge-to-edge option, you can also extend support for your business processes powered by SAP, by running EBF modules based on SAP Leonardo IoT Edge within AWS IoT Greengrass.

You can deploy EBF modules within AWS IoT Greengrass by using AWS IoT Core as the control plane for deployment and security. Once deployed, device telemetry data can be streamed directly from AWS IoT Greengrass to local EBF module endpoints. EBF modules can then invoke local business rules or call an SAP Leonardo IoT cloud endpoint for further processing.

See for yourself at SAPPHIRE NOW 2019

Want to learn more about this integrated IoT solution? Visit the AWS booth (No. 2000) at SAPPHIRE NOW. You will experience feature-rich demos, and you can leverage 1:1 time with an SAP on AWS expert to innovate for your own solution. For more information on where to find us, see the Amazon Web Services at SAPPHIRE NOW website.

Not attending SAPPHIRE NOW? Feel free to contact us directly for more information. Hope to see you soon and Build On!

SAP and AWS enable enterprises to innovate like startups


Feed: AWS for SAP.
Author: Bas Kamphuis.

SAP and AWS announce their project Embrace, a new partnership program aimed at simplifying customers’ SAP S/4HANA journey and accelerating their ability to innovate.

Orlando, FL — May 9, 2019 — Today, SAP announced the Embrace project, a collaboration program with Amazon Web Services (AWS) and Global Service Integrators. Embrace puts the customer’s move to SAP S/4HANA on AWS in the language and context of their industry through reference architectures.

AWS is a proud participant in the program, which will simplify the move to S/4HANA and accelerate enterprises’ ability to innovate like startups in an industry-relevant context. By using a Market Approved Journey, which is supported by reference architectures that combine SAP Cloud Platform with AWS services, we are easing the dataflow across platforms.

SAP's Intelligent Enterprise portfolio and AWS Cloud services, with co-innovated automation, are enabling enterprises around the globe to quickly transform themselves to become "start-up-like": using intelligent technologies natively, innovating business models faster, serving their customers globally by default, and running at the lowest cost. Together, SAP and AWS are building a set of unique offerings aimed at retiring the technical debt that exists in today's IT landscapes and at accelerating innovation by providing instant, on-demand access to higher-level services like machine learning, data lakes, and IoT.

Here is an example of how AWS is helping ENGIE in their SAP S/4HANA journey:

“Our CEO, Isabelle Kocher, has defined a new strategy for ENGIE focused on decarbonization, digitalization, and decentralization. This transformation strategy has placed a much greater emphasis on data and operational efficiency,” explains Thierry Langer, Chief Information Officer of the Finance division at ENGIE. “48 hours after the AWS team provided a comprehensive demonstration of what was possible on AWS, we decided to migrate our non-production environment to AWS. After we completed that step, we chose to migrate our entire platform and production environment as well. I saw the difference between running SAP on premises and on AWS, and there was no question it would be advantageous for us to migrate to AWS.”

SAP and AWS will be leveraging our existing collaboration efforts to support the Embrace program:

S/4HANA Move Xperience: A comprehensive offering allowing customers to transform and test their legacy ERP systems on SAP S/4HANA. The offering leverages automation, a prescriptive set of tools and architectures, and packaged partner offerings to migrate and convert in a matter of days. Customers can start with their existing SAP ERP system or choose a fresh but accelerated start by leveraging the SAP Model Company industry templates. The test environments will be made available to customers free of charge on SAP-certified, production-ready AWS environments. The S/4HANA Move Xperience offering enables customers to validate their business case and to assess the effort involved in transforming to S/4HANA.

SAP Cloud Platform on AWS: Today, over 160 SAP Cloud Platform services are supported across seven global AWS Regions (Montreal, Virginia, Sao Paulo, Frankfurt, Tokyo, Singapore, and Sydney). SAP and AWS are now connecting both platforms to enable developers to easily design and deploy solutions that are context-aware through the semantic and metadata layers of underlying SAP business applications. On Wednesday, we launched the IoT cloud-to-cloud option and introduced the edge-to-edge offering (coming soon). We also introduced the AWS Lambda for SAP Cloud Platform API Management reference architecture, a frequent request from customers who want to mesh business processes powered by SAP S/4HANA applications with AWS services by using SAP Cloud Platform Connectivity.

Market Approved Journeys: To further simplify and help customers innovate faster within their industries, SAP, AWS, and our joint Global System Integration partners will release Market Approved Journeys. These innovation paths map how the services and solutions of both companies enable customers to integrate and extend beyond the core ERP solution. Through Market Approved Journeys, customers and partners can leverage interoperability and design patterns that are common in their industry, and then they can turn their focus to how they want to differentiate their organizations.

According to research firm Gartner, Inc., “Two-thirds of all business leaders believe that their companies must pick up the pace of digitalization to remain competitive.”* As leaders in enterprise application software and cloud services, SAP and AWS are aligning closely to provide customers with the safe and trusted path to digital transformation.

If you are attending SAPPHIRE NOW 2019 in Orlando this week, I hope you stop by the AWS booth (#2000) and talk to our SAP on AWS experts about how we can help you simplify your SAP S/4HANA journey and innovate faster!

* Gartner, Smarter with Gartner, Embrace the Urgency of Digital Transformation, Oct. 30, 2017, https://www.gartner.com/smarterwithgartner/embrace-the-urgency-of-digital-transformation/

AWS Single Sign-On integration with SAP Fiori in S/4HANA


Feed: AWS for SAP.
Author: Sabari Radhakrishnan.

This post is by Patrick Leung, a Senior Consultant in the AWS SAP Global Specialty Practice.

As part of Amazon Web Services (AWS) professional services in the SAP global specialty practice, I often assist customers in architecting and deploying SAP on AWS. SAP customers can take advantage of fully managed AWS services such as Amazon Elastic File System (Amazon EFS) and AWS Backup to unburden their teams from infrastructure operations and other undifferentiated heavy lifting.

In this blog post, I’ll show you how to use AWS Single Sign-On (AWS SSO) to enable your SAP users to access your SAP Fiori launchpad without having to log in and out each time. This approach will provide a better user experience for your SAP users and ensure the integrity of enterprise security. With just a few clicks, you can enable a highly available AWS SSO service without the upfront investment and on-going maintenance costs of operating your own SSO infrastructure. Moreover, there is no additional cost to enable AWS SSO.

Solution overview

The integration between AWS SSO and an SAP Fiori application is based on the industry-standard Security Assertion Markup Language (SAML) 2.0. It works by transferring the user's identity from the identity provider (AWS SSO) to the service provider (SAP Fiori) through an exchange of digitally signed XML documents.

To configure and test AWS SSO with SAP, you need to complete the following steps:

  1. Activate the required SAP parameters and web services in the SAP system.
  2. Create the SAML 2.0 local provider in SAP transaction SAML2.
  3. Download the SAP local provider SAML 2.0 metadata file.
  4. Configure AWS SSO and exchange the SAML 2.0 metadata files.
  5. Configure the attribute mappings.
  6. Assign users to the application.
  7. Configure the trusted provider in SAP transaction SAML2.
  8. Enable the identity provider.
  9. Configure identity federation.
  10. Test your SSO.

Step 1. Activate the required SAP parameters and web services in the SAP system

  1. Log in to the business client of your SAP system. Validate the single sign-on parameters in the SAP S/4HANA system by using SAP transaction RZ10. Here are the profile parameters I used:
    login/create_sso2_ticket = 2    
    login/accept_sso2_ticket = 1    
    login/ticketcache_entries_max = 1000    
    login/ticketcache_off = 0    
    login/ticket_only_by_https = 1    
    icf/set_HTTPonly_flag_on_cookies = 0    
    icf/user_recheck = 1    
    http/security_session_timeout = 1800    
    http/security_context_cache_size = 2500    
    rdisp/plugin_auto_logout = 1800    
    rdisp/autothtime = 60    
    
  2. Ensure that the HTTPS services are active by using SAP transaction SMICM. In this example, the HTTPS port is 44300 with a keep alive time of 300 seconds and a processing timeout of 7200 seconds.
    ICM monitor service display screen
  3. Use SAP transaction SICF to activate the following two Internet Communication Framework (ICF) services:
    • /default_host/sap/public/bc/sec/saml2
    • /default_host/sap/public/bc/sec/cdc_ext_service

Step 2. Create the SAML 2.0 local provider in SAP transaction SAML2

  1. In the business client of the SAP system, go to transaction code SAML2. It will open a user interface in a browser. In this example, the SAP business client is 100. For Enable SAML 2.0 Support, choose Create SAML 2.0 Local Provider.
    screen for enabling saml support
    You can select any provider name and keep the clock skew tolerance as the default 120 seconds.
  2. Choose Finish. When the wizard finishes, you will see the following screen.
    screen showing saml enabled

Step 3. Download the SAP local provider SAML 2.0 metadata

Choose the Metadata tab, and download the metadata.

screen for downloading saml metadata

Step 4. Configure AWS SSO

  1. In the AWS SSO console, in the left navigation pane, choose Applications. Then choose Add a new application.
    add a new application option
  2. In the AWS SSO Application Catalog, choose Add a custom SAML 2.0 application from the list.
    Add a custom SAML 2.0 application screen
  3. On the Configure Custom SAML 2.0 application page, under Details, type a Display name for the application. In this example, I am calling my application S/4HANA Sales Analytics.
    details section
  4. Under AWS SSO metadata, choose the Download button to download the AWS SSO SAML metadata file.
    aws sso metadata download screen
  5. Under Application properties, in the Application start URL box, enter the Fiori application URL. The standard Fiori launchpad URL is https://<hostname>:<https-port>/sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html?sap-client=<client>. I am using the default values for the Relay state and Session duration.
    application properties section
  6. Under Application metadata, upload the local provider metadata that you downloaded in step 3.
    Section for uploading the SAML metadata file from SAP
  7. Choose Save changes.

Step 5. Configure the attribute mappings

In this example, the user mapping will be based on email.

  1. On the Attribute mappings tab, enter ${user:subject} and use the emailAddress format.
    attribute mappings screen
  2. Choose Save changes.

Step 6. Assign users to the application

On the Assigned users tab, assign any user who requires access to this application. In this example, I am using an existing user in AWS SSO. AWS SSO can be integrated with Microsoft Active Directory (AD) through AWS Directory Service, enabling users to sign in to the AWS SSO user portal by using their AD credentials.

assigned users tab

Step 7. Configure the trusted provider in SAP transaction SAML2

  1. Go to SAP transaction code SAML2 and choose the Trusted Providers tab.
    trusted providers screen
  2. Upload the AWS SSO SAML metadata file that you downloaded in step 4.
    screen for selecting metadata
  3. Choose Next for Metadata Verification and Select Providers.
  4. For Provider Name, enter any alias as the trusted identity provider.
    provider name screen
  5. For Signature and Encryption, change the Digest Algorithm to SHA-256 and keep the other configurations as default.
    screen for selecting encryption
    SHA-256 is one of the successor hash functions to SHA-1 and offers significantly stronger collision resistance.
  6. For Single Sign-On Endpoints, choose HTTP POST.
    SSO endpoints screen
  7. For Single Sign-On Logout Endpoints, choose HTTP Redirect.
    SSO logout endpoints screen
  8. For Artifact Endpoints, keep the default.
    Artifact endpoints screen
  9. For Authentication Requirements, leave everything as default and choose Finish.
    authentication requirements

Step 8. Enable the identity provider

  1. Under List of Trusted Providers, choose the identity provider that you created in step 7.
  2. Choose Enable to enable the trusted provider.
    enable the trusted provider
  3. Confirm that the identity provider is active.
    screen showing trusted provider as active

Step 9. Configure identity federation

Identity federation provides the means to share identity information between partners. To share information about a user, AWS SSO and SAP must be able to identify the user, even though they may use different identifiers for the same user. The SAML 2.0 standard defines the name identifier (name ID) as the means to establish a common identifier. In this example, I use the email address to establish a federated identity.

identity federation diagram

  1. Choose the identity provider that you enabled in step 8, and choose Edit.
    edit trusted provider information
  2. On the Identity Federation tab, under Supported NameID Formats, choose Add.
    Add Name ID format details
  3. Select E-mail in the Supported Name ID Formats box.

    This automatically sets the User ID source to Assertion Subject Name ID and the User ID Mapping Mode to Email.
    Name ID format details
  4. Choose Save.
  5. In your SAP application, use SAP transaction SU01 to confirm that the user email address matches the one in your AWS SSO directory.

Step 10. Test your SSO

At your AWS SSO start URL, you should see your application. In this example, this is S/4HANA Sales Analytics.

AWS SSO start URL

Voilà! Choose the application to open your Fiori launchpad without entering a user name and password.

SAP Fiori launchpad

Conclusion

The beauty of this solution is in its simplicity: The AWS SSO service authenticates you, enabling you to log in to your SAP Fiori applications without having to log in and out each time.

AWS SSO supports any SAML 2.0-compliant identity provider, which means that you can use it as a centralized access point for your enterprise applications. AWS SSO also includes built-in SSO integrations with many business applications, such as Salesforce, ServiceNow, and Office 365. This offers a great way to standardize your enterprise application single sign-on process and reduce total cost of ownership.

AWS Transfer for SFTP for SAP file transfer workloads – part 1


Feed: AWS for SAP.
Author: Somckit Khemmanivanh.

This post is by Kenney Antoney Rajan, an AWS Partner Solutions Architect.

Many organizations that use enterprise resource planning (ERP) software like SAP run and maintain Secure File Transfer Protocol (SFTP) servers to securely transfer business-critical data from SAP to external partner systems. In this series of blog posts, we’ll provide steps for you to integrate your SAP Process Integration and Process Orchestration (SAP PI/PO) and SAP Cloud Platform Integration with AWS Transfer for SFTP (AWS SFTP). We’ll also show you how to use the data that AWS SFTP stores in Amazon Simple Storage Service (Amazon S3) for post-processing analytics.

Use cases

There are many SAP scenarios where an SFTP server is useful for SAP file workloads. For example:

  • Transportation industry. A company can use an SFTP server as a middle layer to place files that contain sales order data. The external transportation company processes the order information from the SFTP server to schedule transportation.
  • Retail industry. A company can send their material data from SAP to the SFTP destination for a data lake solution to process the data. The data lake solution polls and processes raw data files sent from SAP and internal sales applications, to get insights such as fast selling items by material types.

Benefits of using AWS SFTP

Regardless of industry, laws and legislation in many countries mandate that every company keep private information secure. Organizations that require an SFTP server for their SAP integration can now use AWS SFTP to distribute data between SAP ERP and external partner systems, while storing the data in Amazon S3.

AWS SFTP manages the infrastructure behind your SFTP endpoint for you. This includes automated scaling and high availability for your SFTP needs, to process business-critical information between SAP and external partner systems. To learn how to create an AWS SFTP server, see Create an SFTP Server in the AWS documentation.
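
As a hedged illustration (the documentation walk-through uses the console), the following Boto3 sketch creates a service-managed AWS SFTP server and one user. The IAM role ARN, bucket path, user name, key file, and Region in the endpoint hostname are placeholders.

import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    IdentityProviderType="SERVICE_MANAGED",   # users and SSH keys managed by the service
    Tags=[{"Key": "Purpose", "Value": "sap-file-transfer"}],
)
server_id = server["ServerId"]

transfer.create_user(
    ServerId=server_id,
    UserName="sap-po-user",                                    # placeholder user name
    Role="arn:aws:iam::111122223333:role/sftp-s3-access-role", # placeholder IAM role with S3 access
    HomeDirectory="/my-sap-sftp-bucket/inbound",               # placeholder S3 bucket path
    SshPublicKeyBody=open("sftp_key.pub").read(),              # public half of your SSH key pair
)
print("SFTP endpoint:", f"{server_id}.server.transfer.us-east-1.amazonaws.com")  # placeholder Region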

Architecture overview

You can integrate your SAP environment with AWS SFTP using SAP PI/PO, which acts as an integration broker to facilitate connection between systems. The following diagram shows the high-level architecture of how your SAP PI/PO systems can integrate with AWS SFTP and perform post-processing functions.

Authentication options

To establish a connection with AWS SFTP, you’ll use SAP PI/PO authentication options:

  • SAP key-based authentication. Convert the Secure Shell (SSH) private key that's generated as part of the AWS SFTP server setup process to a Public Key Cryptography Standards #12 (PKCS#12) keystore. You do this to integrate SAP PI/PO communication channels with AWS SFTP.
  • SAP PI/PO password-based authentication. Use AWS Secrets Manager to enable username- and password-based authentication. You do this to integrate SAP PI/PO communication channels with AWS SFTP.

SAP key-based authentication

You can use OpenSSL to create X.509 and PKCS#12 (.p12) certificates from your local SSH key pair directory, as shown in the following diagram. Enter a password and note it down for the SAP keystore setup. The generated key will be in binary form.

SAP NetWeaver Administrator keystore configuration

  1. Log in to SAP NetWeaver Administrator Key Storage Views, and enter a name and description to create a new key storage view.

  2. Select Import Entry, and then choose the PKCS#12 Key Pair type from the drop-down menu, to import the .p12 file created in the earlier OpenSSL step.

  3. To decrypt the file and complete the import, use the same password that you used earlier, and then choose Import.

  4. Make a note of the fingerprints; you'll need them to integrate the SAP PI/PO systems with the AWS SFTP server when you configure the SAP PI/PO integration directory.

Integrating the SAP PI/PO SFTP communication channel with the AWS SFTP endpoint

Next, you’ll configure a key-based authentication method in SAP PI/PO to transfer your file workloads from SAP ERP Central Component (SAP ECC) to the AWS SFTP destination.

To test the SAP PI/PO integration, you can transfer a MATMAS material intermediate document (IDoc) from the SAP system to the AWS SFTP destination.

In this blog post, it’s assumed that you’ll configure the software and business component in the SAP PI/PO System Landscape directory, import the MATMAS IDoc structure, and map the raw IDoc structure (XML) to comma-separated value (CSV) formatted type using message, service, and operational mappings in the SAP PI/PO Enterprise Services Repository function. You can also use the raw MATMAS intermediate document structure (XML) for testing.

In addition, you’ll need to configure sender and receiver communication channels and integration configuration in the SAP PI/PO integration directory function.

In the SAP PI/PO integration directory configuration, select SFTP adapter type and update the AWS SFTP endpoint and fingerprint created during the SAP NetWeaver Administrator keystore configuration. Update the values for the authentication method and file parameter key in the SAP PI/PO communication channel screen as follows:

  • Authentication method: Private Key
  • Username: The username for the SFTP server created as part of the AWS SFTP setup process.
  • Private Key View: The key view created previously in the SAP NetWeaver Administrator keystore configuration.
  • Private Key Entry: The key entry type created previously in SAP NetWeaver Administrator keystore configuration.
  • Filename: The filename or naming convention that will be transferred from SAP to the AWS SFTP server.
  • Filepath: The Amazon S3 bucket path that’s created as part of the AWS SFTP setup process. This filepath is the AWS SFTP S3 destination where your transferred files will be stored.

SAP PI/PO password-based authentication

  1. See the Enable password authentication for AWS Transfer for SFTP using AWS Secrets Manager blog post to enable password authentication for the AWS SFTP server using AWS Secrets Manager. Note down the username and password from AWS Secrets Manager to enter in the authentication configuration of the SAP PI/PO integration directory.

  2. Update the SAP PI/PO integration directory configuration with the new AWS SFTP endpoint and fingerprint created as part of password authentication. Update the values for your authentication method and file parameter key as follows:
  • Authentication method: Password.
  • Username: The username created as part of password authentication, as mentioned in the previous step.
  • Password: The password created as part of password authentication, as mentioned in the previous step.
  • Filename: The filename or naming convention that will be transferred from SAP to the AWS SFTP server.
  • Filepath: The Amazon S3 bucket path created as part of password authentication. This filepath is the SFTP destination where your transferred files will be stored.

  3. To test the integration, trigger a MATMAS IDoc using an SAP ECC BD10 transaction to send a material master XML file to the AWS SFTP S3 directory through the SAP PI/PO integration.

The file is now placed in the AWS SFTP S3 directory at the file path configured in the SAP PI/PO communication channel.

Post-processing analytics using AWS serverless options

AWS serverless options include the following:

  • Building serverless analytics with Amazon S3 data
  • Creating a table and exploring data

Building serverless analytics with Amazon S3 data

With your data stored in Amazon S3, you can use AWS serverless services for post-processing needs like analytics, machine learning, and archiving. Also, by storing your file content in Amazon S3, you can configure AWS serverless architecture to perform post-processing analytics without having to manage and operate servers or runtimes, either in the cloud or on premises.

To build a report on SAP material data, you can use AWS Glue, Amazon Athena, and Amazon QuickSight. AWS Glue is a fully managed data catalog and extract, transform, and load (ETL) service. Once your AWS Glue Data Catalog data is partitioned and compressed for optimal performance, you can run ad hoc queries with Amazon Athena on the data processed by AWS Glue. You can then visualize the data using Amazon QuickSight, a fully managed visualization tool, to present the material data using pie charts.

See the Build a Data Lake Foundation with AWS Glue and Amazon S3 blog post to learn how to do the following:

  • Create data catalogs using AWS Glue
  • Execute ad-hoc query analysis on AWS Glue Data Catalog using Amazon Athena
  • Create visualizations using Amazon QuickSight

Creating a table and exploring data

Create a table from your material file stored in Amazon S3 by using an AWS Glue crawler. The crawler scans the raw data available in the S3 bucket and creates a table in the Data Catalog. Using AWS Glue ETL jobs, you can then transform the SAP MATMAS CSV file into Apache Parquet format, which is well suited for querying the data with Amazon Athena.
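
A hedged sketch of setting up such a crawler programmatically with Boto3 follows; the crawler name, IAM role, database name, and S3 path are placeholders for the values in your account.

import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="sap-matmas-crawler",                                   # placeholder crawler name
    Role="arn:aws:iam::111122223333:role/glue-crawler-role",     # placeholder IAM role
    DatabaseName="sap_material_db",                              # Data Catalog database to populate
    Targets={"S3Targets": [{"Path": "s3://my-sap-sftp-bucket/inbound/"}]},  # placeholder S3 path
)
glue.start_crawler(Name="sap-matmas-crawler")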

The following figure shows how the material table named parquetsappparquet was created from the SAP MATMAS file stored in Amazon S3. For detailed steps on creating a job in parquet format, see the Build a Data Lake Foundation with AWS Glue and Amazon S3 blog post.

After completing the data transformation using AWS Glue, select the Amazon Athena service from the AWS Management Console and use the Athena Query Editor to execute a SQL query on the SAP material table created in the earlier step.
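
You can also run the same kind of query programmatically through the Athena API. The sketch below assumes the Parquet table from the previous step, a placeholder database name, a placeholder results bucket, and the SAP material-type field MTART; adjust the column names to the schema your crawler actually produced.

import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="""
        SELECT mtart AS material_type, COUNT(*) AS material_count
        FROM parquetsappparquet
        GROUP BY mtart
        ORDER BY material_count DESC
    """,
    QueryExecutionContext={"Database": "sap_material_db"},             # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"}  # placeholder results bucket
)
print("Query execution ID:", response["QueryExecutionId"])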

Amazon QuickSight is a data visualization service that you can use to analyze data. Create a new Amazon Athena data source using Amazon QuickSight, and build a visualization of your material data. In the following example, you can see the count of materials by material type using Amazon QuickSight visualization. For more detailed instructions, see the Amazon QuickSight User Guide.

Conclusion

In part 1 of this blog series, we’ve shown how to integrate SAP PI/PO systems with AWS SFTP and how to use AWS analytics solutions for post-processing analytics. We’ve used key AWS services such as AWS SFTP, AWS Secrets Manager, AWS Glue, Amazon Athena, and Amazon QuickSight. In part 2, we’ll talk about SAP Cloud Platform integration with AWS SFTP for your file-based workloads.

SAP IDoc integration with Amazon S3 by using Amazon API Gateway


Feed: AWS for SAP.
Author: KK Ramamoorthy.

Our customers who run SAP workloads on Amazon Web Services (AWS) are also invested in data and analytics transformations by using data lake solutions on AWS. These customers can use various third-party solutions to extract data from their SAP applications. However, to increase performance and reduce cost, they’re also asking for native integrations that use AWS solutions.

A common pattern that these customers use for extracting data from SAP applications is the IDoc Interface/Electronic Data Interchange. SAP NetWeaver ABAP-based systems have supported IDocs for a long time, and the IDoc framework is a very stable mechanism that powers master and transactional data distribution across SAP and non-SAP systems.

Architectural approaches for integrating SAP IDocs with Amazon Simple Storage Service (Amazon S3) have been published previously in the SAP community, such as in the blog post Integrating SAP’s IDOC Interface into Amazon API Gateway and AWS Lambda. However, those approaches don’t cover the security aspect, which is key for production use. It’s important to secure business-critical APIs to protect them from unauthorized users.

In this blog post, I show you how to store SAP IDocs in Amazon S3 by using API Gateway, with AWS Lambda authorizers and Amazon Cognito both providing the authentication layer.

An AWS Lambda authorizer is an Amazon API Gateway feature that uses a Lambda function to control access to your APIs. To learn more about AWS Lambda authorizers, see Use API Gateway Lambda Authorizers. By using Amazon Cognito, you can add user sign-in and access control mechanisms to your web, mobile, and integration apps. To learn more about Amazon Cognito, see Getting Started with Amazon Cognito.

Use cases

First, let’s look at some of the use cases and business processes that benefit from the architecture that I discuss in this blog post.

Master data integration: Let's say your SAP application is the source of truth for all your master data, like material master and customer master, and you're integrating this master data with non-SAP applications and other software as a service (SaaS) offerings. You can set up Application Link Enabling (ALE) in SAP, and extract the master data from SAP as IDocs for storing in Amazon S3. Once the data lands in Amazon S3, you can integrate the master data with other applications, or use the data in your data lake solutions. For a list of all master data objects supported by ALE, see Distributable Master Data Objects.

Business-to-business (B2B) integration: IDocs are still extensively used in B2B integration scenarios. Some use cases include finance data integration with banks, and inventory and material master data integration with suppliers. For a full list of business process integrations that are supported through IDocs, see Distributable Master Data Objects. By bringing your IDoc data to Amazon S3, you can tap into existing integration functionality, without much custom development.

Architecture

The following architecture diagram shows the workflow for integrating IDocs with Amazon S3, which incorporates basic authentication.

  1. SAP IDocs can be written as an XML payload to HTTPS endpoints. In this architecture, you create an IDoc port that maps to an HTTPS-based Remote Function Call (RFC) destination in SAP. Out of the box, HTTPS-based RFC destinations support basic authentication with a user name and password. Here, the HTTP destination points to an API Gateway endpoint.
  2. To support basic authentication in the API Gateway, enable a gateway response for code 401 with a WWW-Authenticate:Basic response header. Then, to validate the user name and password, use a Lambda authorizer function.
  3. The Lambda authorizer reads the user name and password from the request header, Amazon Cognito user pool ID, and client ID from the request query parameters. Then it launches an admin-initiated authentication to an Amazon Cognito user pool. If the correct user name and password are provided, the Amazon Cognito pool issues a JSON Web Token (JWT). If a valid JWT is received, the Lambda authorizer allows the API call to proceed.
  4. Once authorized, the API Gateway launches another Lambda function to process the IDoc data.
  5. The Lambda function reads the IDoc payload information from the request body and, using the AWS SDK, writes the IDoc data as an XML file to the S3 bucket.

Once the data is available in Amazon S3, you can use other AWS solutions like AWS Glue for data transformations, and then load the data into Amazon Redshift or Amazon DynamoDB.
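
Before walking through the setup, here is a minimal, illustrative sketch of the authorizer logic described in step 3. It is not the code from the aws-cloudformation-apigw-sap-idocs repository; the actual apigw-sap-idoc-authorizer function handles more cases (for example, the IAM access-key fallback described later in this post), and the query parameter names used here are hypothetical.

import base64
import boto3

cognito = boto3.client("cognito-idp")

def lambda_handler(event, context):
    # API Gateway passes the Authorization header and query string parameters through
    auth_header = event["headers"]["Authorization"]            # "Basic base64(user:pass)"
    username, password = base64.b64decode(auth_header.split()[1]).decode().split(":", 1)
    params = event["queryStringParameters"]

    try:
        cognito.admin_initiate_auth(
            UserPoolId=params["userpoolid"],                   # hypothetical parameter names
            ClientId=params["clientid"],
            AuthFlow="ADMIN_USER_PASSWORD_AUTH",
            AuthParameters={"USERNAME": username, "PASSWORD": password},
        )
        effect = "Allow"                                       # Cognito returned a valid token set
    except cognito.exceptions.NotAuthorizedException:
        effect = "Deny"

    # Return an IAM policy that allows or denies the API call
    return {
        "principalId": username,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{"Action": "execute-api:Invoke",
                           "Effect": effect,
                           "Resource": event["methodArn"]}],
        },
    }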

Setting it up

Prerequisites

  • Configure AWS Command Line Interface (AWS CLI) for your AWS account and region. For more information, see Configuring the AWS CLI.
  • Get administrator access to your AWS account to create resources using AWS CloudFormation.
  • Get administrator access to SAP application for uploading certificates, and for creating RFC destinations, IDOC ports, and partner profiles.

AWS setup

Next, implement this integration by going through the steps that follow. To make it easy for you to create the required AWS resources, we’ve published an AWS CloudFormation template, Lambda functions, and a deployment script in a GitHub repository.

Please note that there are costs associated with consuming the resources created by this CloudFormation template. See the “CloudFormation resources” section in this blog post for a full list of resources created.

Step 1:

Clone the aws-cloudformation-apigw-sap-idocs GitHub repo to your local machine.

$ git clone https://github.com/aws-samples/aws-cloudformation-apigw-sap-idocs.git

Step 2:

In the terminal/command window, navigate to the downloaded folder.

$ cd aws-cloudformation-apigw-sap-idocs

Step 3:

Change execute access permission for the build.sh file and execute the build.sh script.

$ chmod +x build.sh
$ ./build.sh

Step 4:

This creates the build folder. Navigate to the newly created build folder.

$ cd build

Step 5:

Open the deploystack.sh file and edit variable values as applicable. Change the value for at least the following variables to suit your needs:

  • S3BucketForArtifacts – Where all the artifacts required by the CloudFormation template will be stored.
  • USERNAME – The Amazon Cognito user name.
  • EMAILID – The email ID attached to the Amazon Cognito user name.

Step 6:

Change execute access permission for the deploystack.sh file, and execute the script. Make sure your AWS Command Line Interface (AWS CLI) is configured for the correct account and region. For more information, see Configuring the AWS CLI.

$ chmod +x deploystack.sh

$ ./deploystack.sh

The script performs the following actions:

  • Creates an S3 bucket in your AWS account (per the name specified for variable S3BucketForArtifacts in the deploystack.sh file)
  • Uploads all the required files to the S3 bucket
  • Deploys the CloudFormation template in your account
  • Once all the resources are created, creates an Amazon Cognito user (per the value provided for variable USERNAME in the deploystack.sh file)
  • Sets its password (per the value that you provide when you run the script)

For more information about the created resources, see the “CloudFormation resources” section in this blog post.

SAP setup

You can perform the following steps in an existing SAP application in your landscape or stand up an SAP ABAP Developer Edition system by using the SAP Cloud Appliance Library. If you’d rather install a standalone SAP ABAP Developer Edition system in your VPC, we’ve provided a CloudFormation template to speed up the process in the GitHub repo.

Configure RFC connection in SAP

Step 1:

When the SAP application connects to the API Gateway endpoint, the endpoint presents a TLS certificate. For the SAP application to trust this certificate, the issuing root certificates need to be uploaded to the SAP certificate store by using the transaction code STRUST. You can download the Amazon server certificates from Amazon Trust Services. In the Root CAs section of that webpage, download all the root CAs (DER format), and upload them under the SSL client SSL Client (Standard) node by using transaction code STRUST. If this node doesn't exist, create it. For more information about the SSL client PSE, see Creating the Standard SSL Client PSE.

trust manager

Step 2:

Open the AWS Management Console and navigate to AWS CloudFormation. Select the stack that you deployed in “AWS setup,” earlier in this blog post. Then, go to the Outputs tab, and note down the values for the IDOCAdapterHost and IDOCAdapterPrefix keys. You will need these fields in the next step.

outputs

Step 3:

In your SAP application, go to transaction code SM59, and create an RFC destination of type G (HTTP Connection to External Server). For Target Host, provide the value of the key IDOCAdapterHost from the previous step. Similarly, for Path Prefix, provide the value of the key IDOCAdapterPrefix. Also, in Service No., enter 443. Once all the details are filled in, press Enter. You will receive a warning that query parameters aren’t allowed. You can ignore that warning by pressing Enter again.

rfc destination

Step 4:

While still in transaction SM59, choose the Logon & Security tab, and then choose Basic Authentication. In the User field, enter the value of USERNAME that you used in “AWS setup,” earlier in this blog post. In the Password field, enter the value of PASSWORD that you used in “AWS setup.” Then under Security Options, choose Active for SSL, and choose DEFAULT SSL Client (Standard) for SSL Certificate.

rfc ssl certificate

Step 5:

Choose Connection Test, and you will get a 200 HTTP response from the API Gateway. If you get an error, recheck the Target Host field (it shouldn’t start with HTTP or HTTPS), make sure the service number is 443, and make sure the path prefix is correct (it should start with a / and contain the full query string). Check whether you provided the correct user name and password. Also, check whether SSL is Active and SSL certificate value is DEFAULT SSL Client (Standard).

test connection

Configure IDoc port and partner profiles

Step 1:

Go to transaction code WE21 and create a port of type XML HTTP using the RFC destination created in “SAP setup,” in this blog post. In Content Type, choose Text/XML.

ports in idoc processing

Step 2:

Go to transaction code BD54, and create a new logical system—for example, AWSAPIGW.

Step 3:

Go to transaction code WE20, and create a new partner profile of type LS.

partner type ls

Step 4:

From transaction code WE20, create outbound parameters for the partner profile that you created in the previous step. For testing purposes, choose FLIGHTBOOKING_CREATEFROMDAT as the message type, choose the port (for example, AWSAPIGW) that was created in "SAP setup," earlier in this blog post, as the receiver port, and choose FLIGHTBOOKING_CREATEFROMDAT01 as the basic IDoc type.

outbound parameters

Test with an outbound IDoc

Step 1:

Go to transaction code WE19. In the Via message type field, enter FLIGHTBOOKING_CREATEFROMDAT, and then choose Execute.

test tool for idoc processing

Step 2:

To edit the control record fields, double-click the EDIDC field. Fill in the details for Receiver and Sender. Receiver Partner No. will vary based on your system ID and client. In this example, the system ID is NPL and client is 001. Check transaction BD54 for your logical system name.

test tool for idoc processing

Step 3:

Double-click the E1SBO_CRE and E1BPSBONEW nodes, and provide some values. It doesn’t matter what you provide here. There are no validations for the field values. Once done, choose Standard Outbound Processing. This should send the IDoc data to the API Gateway endpoint.

outbound processing of idoc

Step 4:

Validate that the IDoc data is stored in the S3 bucket that was created earlier by the CloudFormation template.

s3 bucket
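If you prefer to validate from the command line instead of the S3 console, a short boto3 sketch like the following lists the uploaded IDoc XML files. The bucket name is a placeholder; use the bucket created by the CloudFormation template.

import boto3

s3 = boto3.client("s3")

BUCKET = "123456789-sapidocs"  # placeholder: bucket created by the stack

response = s3.list_objects_v2(Bucket=BUCKET)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])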

Amazon Cognito vs AWS Identity and Access Management (IAM)

We use Amazon Cognito in this architecture because it provides the flexibility to authenticate the user against a user store and to issue short-lived credentials. However, if you would rather use the access keys of an IAM user, you can do so by using the access key ID as the user name and the secret access key as the password in the RFC destination.

The Lambda function apigw-sap-idoc-authorizer first tries to authenticate the user with Amazon Cognito. If it fails, it tries to authenticate using the access key and secret key. Make sure that the user of these keys has ‘list’ access to the S3 bucket where the IDoc data is stored. For more information, see the inline documentation of the Lambda function apigw-sap-idoc-authorizer. Also, make sure you follow the best practices for maintaining AWS access keys, if you choose to use them instead of Amazon Cognito.
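To illustrate that flow, here is a simplified, hypothetical sketch of the authorizer logic, not the actual apigw-sap-idoc-authorizer code. The user pool ID, app client ID, and bucket name are placeholders.

import boto3
from botocore.exceptions import ClientError

USER_POOL_ID = "us-east-1_XXXXXXXXX"    # placeholder
APP_CLIENT_ID = "xxxxxxxxxxxxxxxxxxxx"  # placeholder
BUCKET = "123456789-sapidocs"           # placeholder

def is_authorized(username, password):
    """Return True if the credentials are a valid Cognito user, or a valid
    IAM access key/secret key pair with list access to the IDoc bucket."""
    cognito = boto3.client("cognito-idp")
    try:
        cognito.admin_initiate_auth(
            UserPoolId=USER_POOL_ID,
            ClientId=APP_CLIENT_ID,
            AuthFlow="ADMIN_NO_SRP_AUTH",
            AuthParameters={"USERNAME": username, "PASSWORD": password},
        )
        return True
    except ClientError:
        pass  # fall back to IAM access keys

    try:
        s3 = boto3.client(
            "s3",
            aws_access_key_id=username,
            aws_secret_access_key=password,
        )
        s3.list_objects_v2(Bucket=BUCKET, MaxKeys=1)
        return True
    except ClientError:
        return False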

CloudFormation resources

The following resources are created by the CloudFormation template that you deployed in “AWS setup,” earlier in this blog post.

Amazon Cognito user pool: To support the user name and password authentication flow from the SAP application, the CloudFormation template creates an Amazon Cognito user pool with the name <prefix>_user_pool (for example, sapidocs_user_pool), where <prefix> is the input parameter from the CloudFormation template. The user pool is set up to act as a user store, with email ID as a required user attribute. Password policies are also enforced.

Amazon Cognito user pool client: An app client is also created in the Amazon Cognito user pool. This app client is set up to Enable sign-in API for server-based authentication (ADMIN_NO_SRP_AUTH) and Enable username-password (non-SRP) flow for app-based authentication (USER_PASSWORD_AUTH). These two settings allow the Lambda authorizer functions to authenticate a user against the Amazon Cognito user pool using the credentials supplied by SAP when making API Gateway calls.

Amazon S3 bucket: An S3 bucket (for example, 123456789-sapidocs) is created to store the IDoc XML files.

Lambda authorizer function: A NodeJS Lambda function with the name apigw-sap-idoc-authorizer is created for authorizing API Gateway requests from SAP by performing admin-initiated auth with Amazon Cognito with the user name/password provided in the request.

Lambda integration function: A NodeJS Lambda function with the name apigw-sap-idoc-s3 is created to store the IDoc payload received from SAP into the S3 bucket created earlier. The IDoc data is stored as XML files.

IAM roles: Two roles are created for the Lambda functions.

  • A role with the name <prefix>-lambda-authorizer-role (for example, sapidocs-lambda-authorizer-role) is created for providing Amazon Cognito admin-initiated authentication access to the Lambda authorizer function.
  • A role with the name <prefix>-lambda-s3-access-policy (for example, sapidocs-lambda-s3-access-policy) is created for providing write access to the S3 bucket for storing IDocs.

API Gateway API: An API Gateway API with the name sap-idoc-adapter-api is created. A Lambda authorizer (‘Request’ based) with the name IDOC_Adapter_Authorizer is also created for this API. This API has a GET method and a POST method. Both these methods use the Lambda authorizer for authentication. The GET method targets a mock endpoint and is only used for testing connections and authorization from the SAP application. The POST method uses Lambda integration by calling the Lambda function apigw-sap-idoc-s3 for uploading the IDoc payload data from the SAP application to the S3 bucket.
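If you want to exercise the POST method without going through SAP, a small Python test like the following can be used. This assumes the Lambda authorizer reads the same basic authentication header that the SM59 destination sends; the URL, credentials, and payload are placeholders.

import requests

API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/dev/idoc"  # placeholder: IDOCAdapterHost + IDOCAdapterPrefix
USERNAME = "sapuser"                 # placeholder
PASSWORD = "YourStrongPassword1!"    # placeholder

sample_idoc = "<IDOC><EDIDC/><E1SBO_CRE/></IDOC>"  # minimal test payload

resp = requests.post(
    API_URL,
    data=sample_idoc,
    headers={"Content-Type": "text/xml"},
    auth=(USERNAME, PASSWORD),  # sent as a basic Authorization header, as SM59 does
)
print(resp.status_code, resp.text)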

Resource limits

  • Make sure to note Amazon API Gateway Limits and Important Notes, especially the payload size limit (10 MB at the time of writing) and integration timeout (29 seconds at the time of writing). Batching IDocs might result in higher payload size or higher processing time, which can result in timeouts. You might want to consider smaller batch sizes.
  • Make sure to note AWS Lambda Limits. There are limits on the invocation payload size and memory allocations that might also affect the IDoc batch size.

Conclusion

This blog post gives you a way to upload SAP IDoc data to Amazon S3 without any coding in the SAP application, while incorporating security best practices. The API is protected via user authentication by Amazon Cognito and user authorizations through IAM policies. Now you can integrate your SAP master data, such as material master, with other applications that are running on AWS. You can also perform B2B integrations, such as integrating finance data with banks.

This approach works for most use cases. However, there are edge cases where the volume might be high enough to warrant custom coding in the SAP application by using ABAP HTTP client libraries. For such cases, it’s advised that you check third-party adapters or build your own ABAP HTTP client libraries.

I hope that you found this blog post useful. Please don’t hesitate to contact us with your comments or questions.

Extracting data from SAP HANA using AWS Glue and JDBC


Feed: AWS for SAP.
Author: Chris Williams.

Have you ever found yourself endlessly clicking through the SAP GUI searching for the data you need? Then finally resort to exporting tables to spreadsheets, just to run a simple query to get the answer you need?

I know I have—and wanted an easy way to access SAP data and put it in a place where I can use it the way I want.

In this post, you walk through setting up a connection to SAP HANA using AWS Glue and extracting data to Amazon S3. This solution enables a seamless mechanism to expose SAP to a variety of analytics and visualization services, allowing you to find the answer you need.

There are several tools available to extract data from SAP. However, almost all of them take months to implement, deploy, and license. Also, they are a “one-way door” approach—after you make a decision, it’s hard to go back to your original state.

In this post, you use the previous AWS resources plus AWS Secrets Manager to set up a connection to SAP HANA and extract data.

Before you set up connectivity, you must store your credentials, connection details, and JDBC driver in a secure place. First, create an S3 bucket for this exercise.

You should now have a brand new bucket and structure ready to use.

Next, use Secrets Manager to store your credentials and connection details securely.

The following screenshot shows your secret successfully saved.
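If you would rather create the secret programmatically than in the console, the following boto3 sketch stores the same keys that the Glue job reads later in this post. All values are placeholders for your own environment.

import boto3
import json

secretsmanager = boto3.client("secretsmanager", region_name="us-east-1")

secretsmanager.create_secret(
    Name="SAP-Connection-Info",
    SecretString=json.dumps({
        "db_username": "EXTRACT_USER",                     # placeholder
        "db_password": "YourHanaPassword",                 # placeholder
        "db_url": "jdbc:sap://10.0.0.10:30015/",           # placeholder HANA JDBC URL
        "db_table": "SAPABAP1.KNA1",                       # placeholder schema.table
        "driver_name": "com.sap.db.jdbc.Driver",           # SAP HANA JDBC driver class
        "output_bucket": "s3://your-bucket/sap-extracts/"  # placeholder
    }),
)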

Next, create an IAM role for your AWS Glue job. The IAM role can either be created before creating the extraction job or created during the run. For this exercise, create it in advance.

After creating the IAM role, upload the JDBC driver to the location in your S3 bucket, as shown in the following screenshot. For this example, use the SAP HANA driver, which is available on the SAP support site.
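If you script this step, the upload is a single boto3 call. The bucket and key are placeholders; point them to the location that your Glue job will reference.

import boto3

s3 = boto3.client("s3")

s3.upload_file(
    Filename="ngdbc.jar",    # SAP HANA JDBC driver downloaded from the SAP support site
    Bucket="your-bucket",    # placeholder
    Key="jars/ngdbc.jar",    # placeholder
)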

Now that you set up the prerequisites, author your AWS Glue job for SAP HANA.

Now, create the actual AWS Glue job.

import sys
import boto3
import json
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job


## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Getting DB credentials from Secrets Manager
client = boto3.client("secretsmanager", region_name="us-east-1")

get_secret_value_response = client.get_secret_value(
        SecretId="SAP-Connection-Info"
)

secret = get_secret_value_response['SecretString']
secret = json.loads(secret)

db_username = secret.get('db_username')
db_password = secret.get('db_password')
db_url = secret.get('db_url')
table_name = secret.get('db_table')
jdbc_driver_name = secret.get('driver_name')
s3_output = secret.get('output_bucket')

# Uncomment to troubleshoot the ingestion of Secrets Manager parameters.
# Caution: uncommenting prints secret values in plaintext in the job logs!
# print("bucket name:", s3_output)
# print("table name:", table_name)
# print("db username:", db_username)
# print("db password:", db_password)
# print("db url:", db_url)
# print("jdbc driver name:", jdbc_driver_name)

# Connecting to the source
df = glueContext.read.format("jdbc").option("driver", jdbc_driver_name).option("url", db_url).option("dbtable", table_name).option("user", db_username).option("password", db_password).load()

df.printSchema()
print(df.count())

datasource0 = DynamicFrame.fromDF(df, glueContext, "datasource0")

# Defining mapping for the transformation
applymapping2 = ApplyMapping.apply(frame = datasource0, mappings = [("MANDT", "varchar","MANDT", "varchar"), ("KUNNR", "varchar","KUNNR", "varchar"), ("LAND1", "varchar","LAND1", "varchar"),("NAME1", "varchar","NAME1", "varchar"),("NAME2", "varchar","NAME2", "varchar"),("ORT01", "varchar","ORT01", "varchar"), ("PSTLZ", "varchar","PSTLZ", "varchar"), ("REGIO", "varchar","REGIO", "varchar"), ("SORTL", "varchar","SORTL", "varchar"), ("STRAS", "varchar","STRAS", "varchar"), ("TELF1", "varchar","TELF1", "varchar"), ("TELFX", "varchar","TELFX", "varchar"), ("XCPDK", "varchar","XCPDK", "varchar"), ("ADRNR", "varchar","ADRNR", "varchar"), ("MCOD1", "varchar","MCOD1", "varchar"), ("MCOD2", "varchar","MCOD2", "varchar"), ("MCOD3", "varchar","MCOD3", "varchar"), ("ANRED", "varchar","ANRED", "varchar"), ("AUFSD", "varchar","AUFSD", "varchar"), ("BAHNE", "varchar","BAHNE", "varchar"), ("BAHNS", "varchar","BAHNS", "varchar"), ("BBBNR", "varchar","BBBNR", "varchar"), ("BBSNR", "varchar","BBSNR", "varchar"), ("BEGRU", "varchar","BEGRU", "varchar"), ("BRSCH", "varchar","BRSCH", "varchar"), ("BUBKZ", "varchar","BUBKZ", "varchar"), ("DATLT", "varchar","DATLT", "varchar"), ("ERDAT", "varchar","ERDAT", "varchar"), ("ERNAM", "varchar","ERNAM", "varchar"), ("EXABL", "varchar","EXABL", "varchar"), ("FAKSD", "varchar","FAKSD", "varchar"), ("FISKN", "varchar","FISKN", "varchar"), ("KNAZK", "varchar","KNAZK", "varchar"), ("KNRZA", "varchar","KNRZA", "varchar"), ("KONZS", "varchar","KONZS", "varchar"), ("KTOKD", "varchar","KTOKD", "varchar"), ("KUKLA", "varchar","KUKLA", "varchar"), ("LIFNR", "varchar","LIFNR", "varchar"), ("LIFSD", "varchar","LIFSD", "varchar"), ("LOCCO", "varchar","LOCCO", "varchar"), ("LOEVM", "varchar","LOEVM", "varchar"), ("NAME3", "varchar","NAME3", "varchar"), ("NAME4", "varchar","NAME4", "varchar"), ("NIELS", "varchar","NIELS", "varchar"), ("ORT02", "varchar","ORT02", "varchar"), ("PFACH", "varchar","PFACH", "varchar"), ("PSTL2", "varchar","PSTL2", "varchar"), ("COUNC", "varchar","COUNC", "varchar"), ("CITYC", "varchar","CITYC", "varchar"), ("RPMKR", "varchar","RPMKR", "varchar"), ("SPERR", "varchar","SPERR", "varchar"), ("SPRAS", "varchar","SPRAS", "varchar"), ("STCD1", "varchar","STCD1", "varchar"), ("STCD2", "varchar","STCD2", "varchar"), ("STKZA", "varchar","STKZA", "varchar"), ("STKZU", "varchar","STKZU", "varchar"), ("TELBX", "varchar","TELBX", "varchar"), ("TELF2", "varchar","TELF2", "varchar"), ("TELTX", "varchar","TELTX", "varchar"), ("TELX1", "varchar","TELX1", "varchar"), ("LZONE", "varchar","LZONE", "varchar"), ("STCEG", "varchar","STCEG", "varchar"), ("GFORM", "varchar","GFORM", "varchar"), ("UMSAT", "varchar","UMSAT", "varchar"), ("UPTIM", "varchar","UPTIM", "varchar"), ("JMZAH", "varchar","JMZAH", "varchar"), ("UMSA1", "varchar","UMSA1", "varchar"), ("TXJCD", "varchar","TXJCD", "varchar"), ("DUEFL", "varchar","DUEFL", "varchar"), ("HZUOR", "varchar","HZUOR", "varchar"), ("UPDAT", "varchar","UPDAT", "varchar"), ("RGDATE", "varchar","RGDATE", "varchar"), ("RIC", "varchar","RIC", "varchar"), ("LEGALNAT", "varchar","LEGALNAT", "varchar"), ("/VSO/R_PALHGT", "varchar","/VSO/R_PALHGT", "varchar"), ("/VSO/R_I_NO_LYR", "varchar","/VSO/R_I_NO_LYR", "varchar"), ("/VSO/R_ULD_SIDE", "varchar","/VSO/R_ULD_SIDE", "varchar"), ("/VSO/R_LOAD_PREF", "varchar","/VSO/R_LOAD_PREF", "varchar"), ("AEDAT", "varchar","AEDAT", "varchar"), ("PSPNR", "varchar","PSPNR", "varchar"), ("J_3GTSDMON", "varchar","J_3GTSDMON", "varchar"), ("J_3GSTDIAG", "varchar","J_3GSTDIAG", "varchar"), ("J_3GTAGMON", 
"varchar","J_3GTAGMON", "varchar"), ("J_3GVMONAT", "varchar","J_3GVMONAT", "varchar"), ("J_3GLABRECH", "varchar","J_3GLABRECH", "varchar"), ("J_3GEMINBE", "varchar","J_3GEMINBE", "varchar"), ("J_3GFMGUE", "varchar","J_3GFMGUE", "varchar"), ("J_3GZUSCHUE", "varchar","J_3GZUSCHUE", "varchar")], transformation_ctx = "applymapping1")


resolvechoice3 = ResolveChoice.apply(frame = applymapping2, choice = "make_struct", transformation_ctx = "resolvechoice3")
dropnullfields3 = DropNullFields.apply(frame = resolvechoice3, transformation_ctx = "dropnullfields3")

# Writing to destination
datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": s3_output}, format = "csv", transformation_ctx = "datasink4")

job.commit()

Now that you created the AWS Glue job, the next step is to run it.
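You can start the job from the AWS Glue console, or programmatically with boto3 as in the following sketch. The job name is a placeholder; use the name you gave the job when you created it.

import boto3

glue = boto3.client("glue", region_name="us-east-1")

response = glue.start_job_run(JobName="sap-hana-extract")  # placeholder job name
print("Started run:", response["JobRunId"])

# Optionally check the run status
status = glue.get_job_run(JobName="sap-hana-extract", RunId=response["JobRunId"])
print(status["JobRun"]["JobRunState"])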

As this is the first run, you may see the Pending execution message to the right of the date and time for 5-10 minutes, as shown in the following screenshot. Behind the scenes, AWS is spinning up a Spark cluster to run your job.

The job log for a successful run looks like the following screenshot.

If you encounter any errors, they are in Amazon CloudWatch under /aws-glue/jobs/.

You got the data out of SAP into S3. Now you need a way to contextualize it, so that end users can apply their logic and automate what they usually do in spreadsheets. To do this, set up integration with your data in S3 to Athena and Amazon QuickSight.
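As a rough illustration of the Athena side, the following sketch runs a query against a table defined over the extracted CSV files, assuming you have already cataloged them (for example, with an AWS Glue crawler). The database, table, and output location are placeholders.

import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT kunnr, name1, ort01 FROM sap_extracts.kna1 LIMIT 10",  # placeholder table
    QueryExecutionContext={"Database": "sap_extracts"},                        # placeholder database
    ResultConfiguration={"OutputLocation": "s3://your-bucket/athena-results/"},
)
print(response["QueryExecutionId"])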

Next, extend these queries to visualizations to further enrich the data.

With Amazon QuickSight’s drag-and-drop capability, you can now build visualizations from the fields brought over using S3 and Athena.

In this post, you walked through setting up a connection to SAP HANA using AWS Glue and extracting data to S3. This enables a seamless mechanism to expose SAP to a variety of analytics and visualization services allowing you to find the answer you need. No longer do you have to use SAP’s transaction code, SE16, to export data to a spreadsheet, only to have to upload it to another tool for manipulation.

Review your HANA license model with SAP to confirm that you are using supportable features within HANA when extracting data.


Automating SAP migrations using CloudEndure Migration


Feed: AWS for SAP.
Author: Brian Griffin.

This post is by Ajay Kande, SAP consultant, AWS Professional Services

Organizations migrating SAP workloads to AWS are looking for a lift-and-shift solution (rehost, without any OS or DB change). Previously, you used traditional SAP backup and restore for migration, or AWS-native tools such as AWS Server Migration Service, or partner tools to perform this type of migration. CloudEndure Migration is a new AWS-native migration tool for SAP customers.

Enterprises looking to rehost a large number of SAP systems to AWS can use CloudEndure Migration without worrying about compatibility, performance disruption, or long cutover windows. You can perform any re-architecture after your systems are running on AWS.

With CloudEndure Migration, you can also migrate SAP workloads from one AWS account to another. For example, you may deploy SAP workloads in an Amazon Internet Services Private Limited Account (AWS accounts for customers in India), and later decide to migrate to AWS. In that case, your migration efforts are similar to migrating from on-premises to AWS.

One option for this type of migration is to take AMIs in the source account, share them with the target account, and launch instances. However, if the Amazon EBS volumes are encrypted with custom keys, the process becomes more difficult as the number and size of EBS volumes grow. As of publication time, there is not a direct way to share AMIs with encrypted volumes between accounts. This process is also difficult to manage for large workloads and doesn't scale.

CloudEndure Migration simplifies, expedites, and reduces the cost of such migrations by offering a highly automated lift-and-shift solution. This post demonstrates how easy it is to set up CloudEndure Migration, and the steps involved in migrating SAP systems from one AWS account to another. You can use a similar approach for migrating from on-premises to AWS.

CloudEndure Migration architecture

The following diagram shows the CloudEndure Migration architecture for migrating SAP systems from one AWS account to another.

Data moves between the source account VPC and the AWS target account VPC.

The steps for this migration are as follows:

  1. Register for a CloudEndure Migration account
  2. Set up a project and define replication settings.
  3. Install a CloudEndure agent on the source Amazon EC2 instances.
  4. Monitor replication and update blueprint.
  5. Launch test instances.
  6. Perform migration cutover.
  7. Perform SAP post-migration activities.
  8. Perform cleanup activities.

Prerequisites

To prepare your network for running CloudEndure Migration, set the following connectivity settings:

  • Communication over TCP port 443:
    • Between the source machines and the CloudEndure user console.
    • Between the staging area and the CloudEndure user console.
  • Communication over TCP port 1500:
    • Between the source machines and the staging area.

The following diagram shows all of the required ports that you must open from the source and the staging area subnets:

Architecture diagram showing continuous data replication between a corporate data center or any cloud and the AWS Cloud.

Additionally, you have the following prerequisites:

  • The CloudEndure Migration agent installation on the source machines has the following requirements:
    • Root directory—Verify that you have at least 2 GB of free disk on the root directory (/) of your source machine.
    • /tmp directory (Linux)—You need at least 500 MB of free disk on the /tmp directory for the duration of the installation process.
    • Available RAM—Verify that the machine has at least 300 MB of free RAM to run the CloudEndure agent.
  • Linux systems have the following requirements:
    • Python—Use Python 2 (2.4 or above) or Python 3 (3.0 or above).
    • dhclient—Make sure to install the dhclient package.
    • Kernels—Verify that you have kernel-devel/linux-headers installed that are the same version as the kernel you are running.
  • For Windows systems, make sure that you are using .NET Framework version 4.5 or above on machines running Windows Server 2008 R2 or later. Use .NET Framework version 3.5 or above on machines running Windows Server 2008 or earlier.
  • Use VPC peering between the source VPC and staging VPC. This is not mandatory, and only used in cases of cross-region or cross-account migration projects.

Registering for a CloudEndure Migration account

Register for a CloudEndure Migration account to begin using the solution. Account registration gives you access to free CloudEndure Migration licenses for your migration project.

The following screenshot shows the registration page. To register, complete the following steps.

  1. Enter your email address.
  2. Select the CAPTCHA box.
  3. Choose Submit.

cloud endure registration page.

This email address also serves as your CloudEndure Migration user name. After receiving an email confirming your registration, follow the additional instructions and activate your account.

When your CloudEndure Migration Account is active, sign into the CloudEndure user console to set up your solution.

Setting up the project and defining replication settings

CloudEndure Migration creates a default project when you activate your account. You can either use the default project or create a new project for your migration. To create a new project, complete the following steps.

  1. On the Setup & Info page, under AWS Credentials, provide the AWS access key ID and secret access key of the IAM user created in the target account.
  2. Choose Save.

For more information about the IAM permissions for the IAM user, see the JSON code.

Screen for adding the AWS access key ID and AWS secret access key.

Before using the CloudEndure Migration solution, define the replication settings for AWS. This section provides an overview for defining these replication settings, including defining your source infrastructure, target infrastructure, replication servers, and optional cloud-specific settings such as VPN and proxy usage.

  1. From Setup & Info, choose Replication Settings.
  2. Select your source and target environments.
    1. For the subnet, choose the subnet created for the staging environment in your staging VPC.
    2. You can select a specific instance type for your replication server and a security group. This post uses the default instance type so that CloudEndure Migration creates a security group during the replication server launch in the staging area subnet.
  3. For Default disk type, choose Use fast SSD data disks. This speeds up the replication process by having CloudEndure choose GP2 volumes for disks that are larger than 500 GB. You can also select whether to use a public or private network for sending the replicated data from the source machines to the staging area.
  4. For the traffic to flow over a private connection, choose Use VPN or Direct Connect.
  5. For Staging Area Tags, enter a key and volume.
  • For Network Bandwidth Throttling, deselect Disabled. This regulates traffic and minimizes bandwidth congestion. Enable this option to control the transfer rate of data that the source machine sends to the staging area over TCP Port 1500.
  7. After defining all your settings, choose Save Replication Settings.

CloudEndure project setup complete message. Choose Close. The next step is to install the CloudEndure agent on your source machine.

Installing your CloudEndure agent on source EC2 instances

The CloudEndure user console has installation steps on how to download the agent and install.

For Linux machines:

  1. Download the CloudEndure Agent Installer.
    wget -O ./installer_linux.py https://console.cloudendure.com/installer_linux.py
  2. Install the agent.
    sudo python ./installer_linux.py -t <> --no-prompt

For Windows machines:

  1. Download the Agent Installer for Windows.
  2. Install the agent.
    installer_win.exe -t <> --no-prompt

Monitoring replication and updating the blueprint

After installing the agent, the machine appears in the CloudEndure user console (no reboot required). You can log in to the CloudEndure user console to monitor the replication progress.

When the initial sync is complete, update the blueprint. Target machines launch based on the properties defined in the blueprint.

  1. Choose Blueprint.
  2. Select the desired machine.
  3. Add desired tags for the target instance.
  4. Select the disk type for your target disk.
  5. Choose Save Blueprint.

Launching test instances

Before you perform the cutover of your source machines into your target infrastructure, test your CloudEndure Migration solution. By testing your machines, you can verify that your source machines are working correctly in the target environment. Perform a test at least one week before the planned cutover, to allow time to fix any issues that may arise during testing.

  1. Select the machines to test.
  2. Choose Launch Machine, Test Mode.
  3. Choose Continue.

You can track the progress of this launch in the Job Progress dialog box.

Log in to AWS Management Console in your target account to track the EC2 instances launch.

Performing a migration cutover

After testing all of your machines, you are ready to transition your machines to the target.

  1. Choose Launch Target Machines, and select the desired machines.
  2. Choose Cutover Mode.

The CloudEndure user console gives you the option to perform cutovers of multiple machines at the same time. Before you proceed with this step, stop the SAP application on your source system, make sure that all the changes are replicated, and perform the cutover.

You can check the status of the cutover in the Job Progress dialog box. For an example, see the following screenshot.

job progress date, time, and status.

Log in to the AWS Management Console in your target account to track the EC2 instances that launch. For an example, see the following screenshot.

To track EC2 instances, in the AWS Management Console, under Instances, choose Instances.

Performing SAP post-migration activities

After your SAP migration, complete the following steps.

  1. Copy the Amazon Elastic File System (Amazon EFS)/NFS file systems (if used for /usr/sap/trans and /sapmnt) from the old account to the new account, and mount them on the respective instances. You can use either rsync or AWS DataSync to move these files.
  2. Because the hardware key has changed, request licenses from SAP and apply them to the target instances.
  3. Start the SAP database application on target instances and perform validation.
  4. Configure backup and snapshots in the new account, if applicable.
  5. Configure load balancers in the new account, if applicable.
  6. Complete all of the remaining account-level setup activities. This includes but is not limited to DNS, Active Directory, VPN/Direct Connect, security baselining, and setting up monitoring in the new AWS account similar to the source environment.
  7. Go live.

Performing cleanup activities

After validation, uninstall the agent by removing machines from the CloudEndure User Console.

  1. Choose Machine Actions.
  2. Choose Remove [n] Machines from This Console. It takes up to 60 minutes for CloudEndure Migration to clean up the instances and volumes in the staging area.
  3. When all of the agents are uninstalled, delete the VPC peering connection and staging VPC. This deletes all AWS resources that you created for replication.
  4. Terminate instances in the source VPC.

Conclusion

This post discussed how to use CloudEndure to migrate your SAP workloads from one AWS account to another. You can use similar approaches to migrate SAP workloads from on-premises data centers to AWS.

You can use CloudEndure Migration software to perform automated migration to AWS with no licensing charges. Each free CloudEndure Migration license allows for 90 days of use following agent installation. During this period, you can start the replication of your source machines, launch target machines, conduct unlimited tests, and perform a scheduled cutover to complete your migration. You can use AWS promotional credits to migrate your SAP systems to AWS. Contact us to find out how and to apply for credits.

Building data lakes with SAP on AWS


Feed: AWS for SAP.
Author: KK Ramamoorthy.

Data is ubiquitous and is being generated exponentially in the enterprise. However, the majority of this data is dark data. The Gartner glossary defines dark data as the information assets that organizations collect, process, and store during regular business activities, but which they generally fail to use for other purposes. Those other purposes could include analytics, business relationships, and direct monetization.

Some of our customers, such as Thermo Fisher, are already tapping into the data generated across their enterprise to build a scalable and secure data platform on AWS. Their data comes from medical instruments and various software applications. This data platform is helping medical researchers and scientists to conduct research, collaborate, and improve medical treatment for patients. For more details, see the Thermo Fisher Case Study.

We have thousands of customers running their business-critical SAP workloads on AWS and realizing big business benefits, as evident in the SAP on AWS Case Studies. Although the business case for migrating SAP workloads to AWS is well-articulated, many customers are also looking to build stronger business cases around transformations powered by data and analytics platforms. We hear firsthand from customers that they are looking at ways to tap into SAP data along with non-SAP application data. They want real-time streaming data generated by internet-powered devices to build data and analytics platforms on AWS.

In this post, I cover various data extraction patterns supported by SAP applications. I also cover reference architectures for using AWS services and other third-party solutions to extract data from SAP into data lakes on AWS.

Data lakes on AWS are powered by a range of AWS services, with Amazon S3 at the core.

Because S3 acts as both the starting point and a landing zone for all data for a data lake, I focus here on design patterns for extracting SAP data into S3. I also cover some of the key considerations in implementing these design patterns.

Data extraction patterns for SAP applications

For this post, I focus only on SAP ERP (S/4HANA, ECC, CRM, and others) and SAP BW applications as the source. This is where we see a vast majority of our customer requirements in extracting the data to S3. Although customers are using SaaS applications as data sources, the mere fact that these are built API first makes it easier to integrate them with AWS services.

I also focus only on data replication patterns in this post and leave data federation patterns for another day, as it is a topic on its own. Data federation is a pattern where applications can access data virtually from another data source instead of physically moving the data between them. Following is a high-level architecture of the various integration patterns and extraction techniques. Let's dive deep into each of these patterns.

Shown here is a high-level architecture of the various integration patterns and extraction techniques for SAP applications. Extractions can be performed at both the database and application level using the techniques identified here.

Database-level extraction

Database-level extraction, as the name suggests, taps in to SAP data at database level. There are various APN Partner solutions—Attunity Replicate, HVR for AWS, and others—that capture raw data as it is written to the SAP database transaction logs. They transform it with required mappings, and store the data in S3. These solutions are also able to decode SAP cluster and pool tables.

For HANA databases, especially those that have prebuilt calculation views, customers can use the SAP HANA client libraries for Python for native integration with AWS Glue and AWS Lambda, or they can use the SAP HANA Java Database Connectivity (JDBC) drivers.
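As a rough sketch of that native integration, the following shows how a Lambda function or Glue Python shell job could read from a HANA calculation view using the hdbcli package. The hostname, credentials, and view name are placeholders.

from hdbcli import dbapi  # SAP HANA client library for Python

conn = dbapi.connect(
    address="hana.example.com",   # placeholder HANA host
    port=30015,                   # placeholder SQL port
    user="EXTRACT_USER",          # placeholder
    password="YourHanaPassword",  # placeholder
)

cursor = conn.cursor()
# Calculation views are exposed under the _SYS_BIC schema; the view name is a placeholder
cursor.execute('SELECT TOP 10 * FROM "_SYS_BIC"."sales/CV_SALES_SUMMARY"')
for row in cursor.fetchall():
    print(row)
cursor.close()
conn.close()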

The key considerations for database-level extraction include the following:

  • As the third-party adapters pull data from transaction logs, there is minimal performance impact to the SAP database application.
  • Change data capture is supported out of the box based on database change logs. This is true even for those tables where SAP doesn’t capture updated date and time at the application level.
  • Certain database licenses (for example, runtime licenses) may prevent customers from pulling data directly from the database.
  • This pattern doesn’t retain the SAP application logic, which is usually maintained in the ABAP layer, potentially leading to re-mapping work outside SAP. Also, changes to the SAP application data model could result in additional maintenance effort due to transformation changes.

Application-level extraction

In SAP ERP applications, business logic largely resides in the ABAP layer. Even with the code push-down capabilities of SAP HANA database, the ABAP stack still provides an entry point for API access to business context.

Application-level extractors like SAP Data Services extract data from SAP applications using integration frameworks in ABAP stack and store it in S3 through default connectors. Using Remote Function Call (RFC SDK) libraries, these extractors are able to natively connect with SAP applications to pull data from remote function modules, tables, views, and queries. SAP Data Services can also install arbitrary ABAP code in the target SAP application and push data from SAP application rather than pulling it. The push pattern helps with better performance in certain cases.

SAP applications also support HTTP access to function modules, and you can use AWS Glue or Lambda to access these function modules over HTTP. SAP has also published the PyRFC library, which can be used in AWS Glue or Lambda to natively integrate using the RFC SDK. SAP IDocs can be integrated with S3 using an HTTP push pattern. I wrote about this technique in an earlier post, SAP IDoc integration with Amazon S3 by using Amazon API Gateway.
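As an illustration of the PyRFC approach, the following sketch calls a standard function module from Python. The connection parameters are placeholders, and RFC_READ_TABLE is used only for simplicity; real extractions typically call purpose-built remote function modules.

from pyrfc import Connection  # requires the SAP NW RFC SDK on the host

conn = Connection(
    ashost="sapapp.example.com",  # placeholder application server
    sysnr="00",                   # placeholder system number
    client="100",                 # placeholder client
    user="EXTRACT_USER",          # placeholder
    passwd="YourSapPassword",     # placeholder
)

result = conn.call(
    "RFC_READ_TABLE",
    QUERY_TABLE="KNA1",
    DELIMITER="|",
    ROWCOUNT=10,
)
for row in result["DATA"]:
    print(row["WA"])
conn.close()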

The key considerations for application-level extraction include the following:

  • Extractions can happen with business context in place as the extractions happen at the application level. For example, to pull all sales order data for a particular territory, you could do so with all related data and their associations mapped through function modules. This reduces additional business logic–mapping effort outside SAP.
  • Change data capture is not supported by default. Not all SAP function modules or frameworks support change data capture capabilities.
  • Using AWS native services like AWS Glue or Lambda removes the requirement for a third-party application, hence reducing the total cost of ownership.  However, customers might see an increase in custom development effort to wire the HTTP or RFC integrations with SAP applications.
  • Potential performance limitations exist in this pattern as compared to database-level extraction because of application-level integration. Also, additional performance load in the SAP application servers is caused due to pulling data using function modules and other frameworks.

Operational data provisioning–based extraction

The Operational data provisioning (ODP) framework enables data replication capabilities between SAP applications and SAP and non-SAP data targets using a provider and subscriber model. ODP supports both full data extraction as well as change data capture using operational delta queues.

The business logic for extraction is implemented using SAP DataSources (transaction code RSO2), SAP Core Data Services (CDS) Views, SAP HANA Information Views, or SAP Landscape Replication Server (SAP SLT). ODP, in turn, can act as a data source for OData services, enabling REST-based integrations with external applications. The ODP-Based Data Extraction via OData document details this approach.

Solutions like SAP Data Services and SAP Data Hub can integrate with ODP, using native remote function call (RFC) libraries. Non-SAP solutions like AWS Glue or Lambda can use the OData layer to integrate using HTTP.
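For the HTTP path, a Glue or Lambda extractor boils down to paged OData GET requests. The following is a minimal sketch with a hypothetical service name and entity set, and basic authentication for simplicity; the GitHub sample mentioned below uses OAuth.

import requests

# Placeholder ODP OData URL; the service and entity set depend on how you
# expose your DataSource or CDS view through SAP Gateway
ODATA_URL = (
    "https://sapgw.example.com:443/sap/opu/odata/sap/"
    "ZMATERIAL_SRV/EntityOf0MATERIAL_ATTR"
)

response = requests.get(
    ODATA_URL,
    params={"$format": "json", "$top": "100"},
    auth=("EXTRACT_USER", "YourSapPassword"),  # placeholders
)
response.raise_for_status()
for record in response.json()["d"]["results"]:
    print(record)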

The key considerations for extraction using ODP include the following:

  • Because business logic for extractions is supported at application layer, the business context for the extracted data is fully retained.
  • All table relationships, customizations, and package configurations in the SAP application are also retained, resulting in less transformation effort.
  • Change data capture is supported using operational delta queue mechanisms. Full data load with micro batches is also supported using OData query parameters.
  • Using AWS native services like AWS Glue or Lambda removes the requirement for a third-party application, hence reducing the total cost of ownership. However, customers might see an increase in custom development effort to build OData-based HTTP integrations with SAP applications. I published a sample extractor code using Python that can be used with AWS Glue and Lambda in the aws-lambda-sap-oauth GitHub repository to accelerate your custom developments.
  • Data Services and Data Hub might have better performance in pulling the data from SAP because they have access to ODP integration using RFC layer. SAP hasn’t opened the native RFC integration capability to ODP for non-SAP applications, so AWS Glue and Lambda have to rely on HTTP-based access to OData. Conversely, this might be an advantage for certain customers who want to standardize on open integration technologies. For more information about ODP capabilities and limitations, see Operational Data Provisioning (ODP) FAQ.

SAP Landscape Transformation Replication Server–based extraction

SAP Landscape Transformation Replication Server (SLT) supports near real-time and batch data replication from SAP applications. Real-time data extraction is supported by creating database triggers in the source SAP application. For replication targets, SAP SLT supports by default SAP HANA, SAP BW, SAP Data Services, SAP Data Hub, and a set of non-SAP databases. For a list of non-SAP targets, see Replicating Data to Other Databases documentation.

For replicating data to targets that are not supported by SAP yet, customers can implement their own customizations using the Replicating Data Using SAP LT replication server SDK. A detailed implementation guide is available in SAP support note 2652704 – Replicating Data Using SAP LT Replication Server SDK (requires SAP ONE Support access).

In this pattern, you can use AWS Glue to pull data from SAP SLT supported target databases into S3. Or, use SAP Data Services or SAP Data Hub to store the data in S3. You can also implement ABAP extensions (BADIs) using the SAP LT replication server SDK to write replicated data in S3.

The key considerations for extraction using SAP SLT include the following:

  • It supports both full data extraction and change data capture. Trigger-based extraction supports change data capture even on source tables that don’t have updated date and timestamp.
  • Additional custom development in ABAP is required to integrate with targets not supported by SAP.
  • Additional licensing cost for SAP Data Hub, SAP Data Services, or other supported databases to replicate data to S3.
  • Additional custom development effort in AWS Glue when replicating from an SAP-supported database to S3.
  • An SAP SLT enterprise license might be required for replicating to non-SAP-supported targets.

End-to-end enterprise analytics

Ultimately, customers are looking to build end-to-end enterprise analytics using data lakes and analytics solutions on AWS.

SAP provides operational reporting capabilities using embedded analytics within its ERP applications. But, customers are increasingly looking at integrating data from SAP, non-SAP applications, the Internet of Things (IoT), social media streams, and various SaaS applications. They want to drive process efficiencies and build newer business models using machine learning.

A sample high-level architecture for end-to-end enterprise analytics is shown below. In this architecture, customers can extract data from SAP applications using the patterns discussed in this post. Then, they can combine it with non-SAP application using AWS services to build end-to-end enterprise analytics. These services can include Amazon S3, Amazon Redshift, Amazon Athena, Amazon Elasticsearch Service, and Amazon QuickSight. Other visualization solutions like Kibana and SAP Fiori Apps can also be a part of the solution.

Shown here is a high-level architecture for an end-to-end enterprise analytics landscape that includes SAP applications running on AWS, along with data lakes and analytics powered by AWS services.

Summary

Today, our customers have to deal with multiple roles, such as data scientists, tech-savvy business users, executives, marketers, account managers, external partners, and customers. Each of these user groups requires access to different kinds of data from different sources. They access it through multiple channels: web portals, mobile apps, voice-enabled apps, chatbots, and APIs.

Our goal at AWS is to work with customers to simplify these integrations so that you can focus on what matters the most—innovating and transforming your business.

If you are at re:Invent, stop by my session, GPSTEC338 – Building data lakes for your customers with SAP on AWS to learn more about the patterns that I discussed in this post. Until then, keep on building!

AWS momentum with SAP


Feed: AWS for SAP.
Author: Fernando Castillo.

As I look back on what our SAP on AWS customers have achieved together the past year, the word that comes to my mind is momentum. This takes me back to my college days where, as an engineering student, I learned that momentum is quantified as the product of mass and velocity. I’d like to frame this post using that equation.

This is an image showing the equation: mass x velocity = momentum

Mass

First, I want to discuss mass, or customer adoption. Today, AWS is excited to announce that more than 5,000 active AWS customers run SAP on AWS and over half of these customers have deployed SAP HANA-based solutions on AWS. That’s massive!

It’s been overwhelming to see the levels of deep collaboration and trust that our customers have had with us to run their most business-critical applications on AWS. Across the board, these customers are seeing that AWS offers them choice. Their stories range from the lift-and-shift of existing ECC environments to complete S/4HANA transformations and innovation on top of SAP cloud landscapes with AI/ML, IoT, data lakes, voice, and more.

At AWS, we innovate on behalf of you! We've loved partnering with you to migrate and modernize. Have a look at some of the recent customer stories, like Swire Coca-Cola and Cambridge University Press, who have gone all-in on AWS, including the migration of their entire SAP environments to AWS. Others, like ENGIE, Liberty Mutual, Lion, IDO, Utopia, and USHIO, migrated SAP to AWS and are now implementing additional AWS services for innovation in their SAP environments.

As customers continue to adopt AWS to run their SAP solutions, we continue to share their stories and successes. If you are interested in being part of this growing group of enterprises, feel free to connect with us.

Image showing SAP on AWS customer logos. More than 5,000 active AWS customers run SAP on AWS and over half of these customers have deployed SAP HANA-based solutions on AWS.

Velocity

Massive customer adoption doesn’t happen without velocity: solution development, delivery speed, and agility to innovate. Our experience with SAP over the past 11 years has helped us learn what customers need, and what really works.

Our alliance started in 2008, when SAP started using AWS to spin up demo systems. It continues today, where we now have the most certifications of any cloud provider, and more cloud regions running SAP Cloud Platform (SCP). We have 5x more available SCP services, and more than 165 AWS services that can be used to extend your business applications.

Image showing global coverage of AWS regions. 10 out of 12 SAP Hyperscale Cloud regions run on AWS, providing 5 times the services available to SAP customers.

On the keynote stages of SAP TechEd and SAP Connect in 2019, we introduced industry firsts, including general availability of 18-TB and 24-TB cloud-native instances for SAP HANA. Today, AWS is the only cloud provider that is certified to support 48 TB for SAP S/4HANA and 100 TB for SAP B/W applications.

Image showing scale up options for SAP HANA on AWS compute instances from .244 to 24TB scale-up and up to 48TB for S/4 and 100TB for B/W scale-out.

But our relationship with SAP extends beyond instance certification. Today, SAP is one of the biggest AWS customers, and uses AWS for many of their offerings:

  • HANA Enterprise Cloud, which has been running on AWS for years.
  • SAP NS2, which runs exclusively on AWS to support our joint US Public Sector customers, including the Navy.
  • SAP Cloud Platform, where customers and partners can create new solutions, connect with SAP SaaS Solutions and extend those solutions with AWS Services.
  • True SaaS solutions such as SAP Concur, QualtricsXM, and SAP SuccessFactors for US government agencies, which are run on AWS.

SAP HANA Cloud, SAP Data Warehouse Cloud, and SAP Analytics Cloud were also announced this year. With these services, AWS serves SAP HANA in containers, providing true elasticity and as-a-service availability, creating a more agile platform for developer and analytics communities.

We also continue to collaborate with SAP on joint IoT offerings, where we can connect cloud-to-cloud and edge-to-edge. That allows your applications to become smart about the semantic data of physical assets in business environments.

Finally, SAP Data Custodian for AWS was launched this year. It addresses customer needs around compliance, GDPR, and security in the cloud.

The speed of innovation between AWS and SAP has supported a velocity that’s unmatched, and our customers have been reaping the many benefits of this partnership.

Image showing recent co-innovations with SAP (HANA Cloud, SAP Data Warehouse Cloud, SAP Analytics Cloud, SAP IoT Edge-2-Edge, and SAP Data Custodian for AWS) and SAP services that run on AWS (SAP Cloud Platform, SAP Concur, QualtricsXM, SAP Customer Experience, SAP HANA Enterprise Cloud, NS2, and SAP Success Factors).

Over the years, we’ve seen that many of our customers are experiencing gaps in the skills necessary to undertake their desired SAP journeys. For that, we’ve leaned on our AWS Partner Network (APN) partners. Over the years, these partners have been supporting the many successful customer deployments of SAP on AWS across industries and all over the world.

An important subset of these APN Partners is those who have achieved SAP Competency status. This competency demonstrates experience, enablement, and access to support our customers in their SAP journey.

Image that shows AWS & SAP partners including GSIs, SIs, MSPs, and Tech Partners.

Momentum

Based on these topics, we arrive at momentum. Our active customer base and vibrant partnership with SAP are representative of the momentum that we’ve experienced this year. We are the platform of choice for customer journeys to the cloud.

We look forward to sharing more, and featuring some customer journeys at re:Invent 2019 this week, including a breakout featuring HP and ENGIE, one featuring Heineken, and another featuring Phillips 66. For a full list of SAP sessions at re:Invent and to learn more, see AWS and SAP.

If you have any questions or want to set up a discovery workshop to begin evaluating your journey, send us an email at sap-on-aws-team@amazon.com.

Using AWS to enable SAP Application Auto Scaling


Feed: AWS for SAP.
Author: Chris Williams.

Customers who run SAP on AWS today take advantage of the broadest and deepest set of native cloud services. These span traditional services like compute, storage, and databases, as well as emerging technologies like IoT and machine learning, on top of a reliable global infrastructure.

One of the main features that we see customers use is the changing of instance types. As the workload changes, customers find that their instances are overutilized (the instance type is too small) or underutilized (the instance type is too large).

This concept of scalability is called vertical scaling. More simply, it is the ability of the system to accommodate additional workloads just by adding resources.

As you know, vertical scalability at the database layer for SAP is important, but what about the application server layer? You can’t create one massive application server to support all your workloads, can you? Maybe you can, but it’s probably not a good idea, and this is why SAP designed application servers to scale horizontally. This is where the concept of auto scaling often comes into play.

For horizontal scalability, customers perform sizing and use historical data to determine how many application servers they require to support their peak workload. This leads to underutilized resources or resource bottlenecks when unplanned events occur, such as marketing-driven sales order spikes or large data volume extracts for reporting.

When horizontal scalability comes into the discussion, someone from your DevOps team or cloud engineering team might suggest Amazon EC2 Auto Scaling. But as we all know, SAP can sometimes be a square peg in a round hole. SAP doesn't always make it easy to handle scalability, given its on-premises origin.

This is where AWS helps you build a bridge connecting SAP’s on-premises scalability capabilities to a cloud-native Auto Scaling group.

Solution overview

Traditional automatic scaling uses a range of metrics from CPU utilization to the number of requests to the target of your load balancer. These are both great indicators of workload and the applications use of the underlying resources. However, they don’t reflect some necessary behavior patterns that customers should consider for SAP utilization.

Every SAP application server consists of work processes that are SAP services based on the SAP kernel. This makes SAP unique. Each work process is specialized in a particular task type that runs at the OS layer consuming CPU, memory, network, and more. When sizing SAP application servers, the balance and distribution of these are critical to a healthy SAP system.

For microservices and most modern-day applications, standard automatic scaling metrics like CPU and request counts are often sufficient to handle the application automatically scaling. For SAP, you must consider work processes, especially how many are free or currently consumed.

So, how do you bridge the native automatic scaling capability to consider CPU utilization and request counts as well as work process consumption?

That’s where the AWS Professional Services SAP Application Auto Scaling solution comes in.

This solution enables enterprises and SAP Basis administrators to automatically detect SAP application server consumption based on SAP-specific workload metrics for dialog, batch, enqueue, and print work processes. This solution can adapt to spikes and dips for concurrent user logins, month-end close, payment runs, and a variety of both predictable and unpredictable workloads.

The solution uses the on-demand cloud model of provisioning only the application servers that you require. It horizontally scales out (new compute is started as application servers) and back in (existing compute as application servers are stopped), based on the metrics that you define. This is similar to the way that your thermostat maintains the temperature of your home—you select a temperature and the thermostat does the rest.

The following diagram shows how this architecture is configured:

Diagram showing architecture for SAP Application Auto Scaling, including Amazon DynamoDB, AWS Lambda, Amazon Athena, AWS Glue, Amazon S3, and AWS Systems Manager.

But wait. When you say that this solution scales not only out but also back in, are you going to shut down an application server with users and background jobs running on it?

Here is another nuanced SAP challenge. Not all SAP traffic is routed through a load balancer for which we can proactively use connection draining. Some of this traffic is end users on the SAPGUI or from other systems calling your system through native SAP RFC.

To handle this natively with SAP, invoke the soft shutdown (graceful shutdown) capability within SAP, orchestrated from the serverless compute layer, AWS Lambda. This ensures that no requests or data are lost when ending an SAP instance, and you minimize the overall TCO of this solution.

A soft shutdown waits for transactions to be completed in a specific order. You then combine this with the API control plane of EC2 to coordinate the application servers’ underlying EC2 instance for a holistic approach to SAP capacity on demand.
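The following is a highly simplified sketch, with placeholder values, of how a scale-in function could request a graceful stop through AWS Systems Manager and then stop the underlying EC2 instance. It is illustrative only; the actual solution tracks work process metrics and state before taking either action, and the exact sapcontrol soft-shutdown options depend on your kernel release.

import boto3

ssm = boto3.client("ssm")
ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder application server instance

# Ask the SAP instance to shut down gracefully via sapcontrol (instance number is a placeholder)
ssm.send_command(
    InstanceIds=[INSTANCE_ID],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["sapcontrol -nr 00 -function Stop"]},
)

# After confirming that all work processes have ended, stop the EC2 instance
# so that you stop paying for compute you no longer need
ec2.stop_instances(InstanceIds=[INSTANCE_ID])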

The AWS Professional Services solution, using AWS serverless compute, storage, and analytics, enables customers bound by on-premises and monolithic SAP architectures to run SAP in a more elastic, scalable, and cost-effective manner.

Conclusion

This offering serves not only as a turnkey solution in the form of native infrastructure as code but also as an accelerator to build highly customized solutions for customer-specific requirements.

With AWS Auto Scaling, you only pay for what you require, helping to reduce operational cost and providing higher service level objectives. Gone are the days of calculating how many application servers you require to over-provision to stay above the SAPS calculated for your new project or the upcoming marketing campaign over the weekend.

Just like a thermostat, you select metrics and the solution does the rest.

Are you interested in how customers are using this solution today? Or, maybe you would like a better understanding of the underlying services? For more information, contact us at sap-on-aws@amazon.com.

AWS Transfer for SFTP for SAP file transfer workloads – part 2


Feed: AWS for SAP.
Author: Kenny Rajan.

Part 1 of this series demonstrated how to integrate SAP PI/PO systems with AWS Transfer for SFTP (AWS SFTP) and how to use the data that AWS SFTP stores in Amazon S3 for post-processing analytics. This post shows you how to integrate SAP Cloud Platform Integration (SAP CPI) with AWS SFTP and use the AWS analytics solutions shown in part 1 for post-processing analytics.

Architecture overview

SAP CPI is offered by SAP as a pay-as-you-go subscription. With capabilities similar to SAP PI/PO, SAP CPI provides exchange infrastructure to integrate processes and data, including SAP file workloads, between cloud apps, third-party applications, and on-premises solutions. It is an open, flexible, on-demand integration system that runs as a core service on the SAP Cloud Platform.

The following diagram shows the high-level architecture of SAP CPI system integration with AWS SFTP. SAP systems are hosted on premises or in the AWS Cloud environment with an SAP CPI connection. You can use AWS SFTP to store the SAP file workloads in S3 by enabling an integration flow connection, and perform post-processing functions using AWS Glue, Amazon Athena, and Amazon QuickSight.

SAP CPI system integration with AWS SFTP: High-level architecture of SAP CPI system integration with AWS SFTP
Authentication options

To establish a connection with AWS SFTP, you can use one of the following SAP CPI authentication options:

  • SAP CPI key-based authentication – Use key-based authentication in SAP to configure and integrate SAP CPI AWS SFTP.
  • SAP CPI password-based authentication – Use AWS Secrets Manager to enable username- and password-based authentication. This integrates SAP CPI communication channels with AWS SFTP.

SAP CPI key-based authentication

Configure the SAP CPI tenant's known hosts file to store the SFTP server's public key, hostname, and public key algorithm, as shown in the following workflow diagram. The SSH key pair is stored in the SAP CPI keystore configuration to establish the connection from the SAP CPI tenant to the SFTP server:

Workflow diagram for SAP CPI key-based authentication.

Known host file

To establish an SSH-based communication, the SAP CPI tenant needs the host key of the SFTP server.

  1. To extract the host key of the SFTP server, run the ssh-keyscan command on the AWS SFTP endpoint you created.
  2. Update the host key in the SAP CPI known hosts file. See the following code example, where the ssh-keyscan command is run against the AWS SFTP server domain to retrieve the host key value:

Use ssh-keyscan command on AWS SFTP server endpoint to extract host key.
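For reference, a minimal sketch of this step could look like the following, assuming the AWS SFTP endpoint used later in this post; replace the endpoint with your own server's DNS name.

    # Retrieve the AWS SFTP server host key (endpoint name is illustrative)
    ssh-keyscan s-6602732347fea.server.transfer.us-east-1.amazonaws.com

    # Optionally append the returned entry to a local copy of the known_hosts file
    ssh-keyscan s-6602732347fea.server.transfer.us-east-1.amazonaws.com >> known_hosts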

Update the server host key in the SAP CPI tenant known_hosts file

  1. In the CPI tool, select the monitoring (operations) view, then the security material option.
  2. Select the known_hosts entry and download it to your local machine.
  3. Add the AWS SFTP server host key retrieved in the previous step to the known_hosts file.

Download known host file from CPI security material.

To avoid any corruption or deletion of existing host keys that could hamper other SAP CPI integrations, add the host key at the end of the SAP CPI known hosts file.

Add SFTP host key in the known_host file

As shown below, upload the known_hosts file from your local drive to the SAP CPI tenant.

  1. Choose Add feature, Known Hosts (SSH).
  2. Choose Deploy

Deploy the known_host file

For key-based authentication, you can generate a key pair using SAP CPI tools.

  1. From the SAP CPI monitoring page, in the tenant keystore, choose Create SSH key.
  2. For Key type, choose RSA.
  3. Define the key-specific values.
  4. Choose Deploy

Create SSH key in SAP CPI tenant. Generate a key pair using SAP CPI tools and update the keypair in AWS SFTP

When the deployment is complete, download the id_rsa public key from the keystore. Then upload this public key to the AWS SFTP server SSH public key page.

For information about adding or rotating public keys for your AWS SFTP server, see the rotating SSH keys documentation.
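If you prefer the AWS CLI over the console for this step, a hedged sketch like the following can upload the public key to the SFTP user; the server ID, user name, and key file name are taken from this post's examples and should be replaced with your own values.

    # Upload the SAP CPI public key to the AWS SFTP user (IDs and file name are illustrative)
    aws transfer import-ssh-public-key \
      --server-id s-6602732347fea \
      --user-name kenny \
      --ssh-public-key-body file://id_rsa.pub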

Testing connectivity

You can now test the connectivity between SAP CPI and the AWS SFTP server.

  1. In SAP CPI monitoring view, select Connectivity tests function.
  2. Choose SSH option, and enter the following details:
    • For Host, enter s-6602732347fea.server.transfer.us-east-1.amazonaws.com (AWS SFTP endpoint). For more information, see Create an SFTP Server.
    • For Port, enter 22.
    • For Proxy Type, select None.
    • For Timeout, enter your desired timeout value.
    • For Authentication, choose public-key based.
    • For User Name, enter kenny (AWS SFTP server user name created earlier).
    • Select the check boxes for Check Host Key and Check Directory access.
    • For Directory, select the S3 directory associated with AWS SFTP server.
    • Choose Send.

This establishes the connection between SAP CPI and AWS SFTP and lists the current objects stored in the AWS SFTP server S3 directory. In the following diagram, SAP CPI lists the SAP material master files stored in the S3 directory using the SFTP connection.

SAP CPI and AWS SFTP connectivity test: listing the SAP material master files stored in the S3 directory.

You can now use this SSH key pair-based SAP CPI connection to create an integration flow between your SAP systems and the AWS SFTP server for your file-transfer workloads.

SAP CPI password-based authentication

You can now configure SAP CPI integration with the AWS SFTP server using username- and password-based authentication.

If you are using a different AWS SFTP endpoint, follow the same known host file configuration process shown in the previous SAP CPI known host file configuration. To create username- and password-based authentication, see AWS Transfer for SFTP for SAP file transfer workloads – part 1.

  1.  In SAP CPI monitoring view, choose Security material function.
  2. Choose Add feature, user-credentials.
  3. On the Add User Credentials page, enter the credentials and deploy the following entries:
    • For Name, enter a credential name to retrieve your user name and password credentials in the SAP CPI integration flow.
    • For Type, choose User Credentials.
    • For User, enter the user name created for password-based authentication in part 1 of this series using Secrets Manager.
    • For Password, enter the same password created as part of password-based authentication in part 1 of this series using Secrets Manager.
    • Once deployed, verify the successful deployment of user credentials entry in the SAP CPI security material page.

Setup the user credentials. Adding user credentials for username and password-based authentication

Testing connectivity

To test the connection, create an integration flow in SAP CPI between your preferred HTTPS tool and AWS SFTP.

  1. In the SAP CPI design view, for address, enter s-66027032347fea.server.transfer.us-east-1.amazonaws.com (AWS SFTP endpoint).
  2. For Authentication, choose User Name/Password.
  3. For Credential Name, enter SFTP_KENNY (the credential name from the previous step).
  4. For Timeout, enter your desired value.
  5. For Maximum Reconnect Attempts, enter your desired value.
  6. For Reconnect Delay, enter your desired value.

Integration flow setup in SAP CPI between the HTTPS tool and AWS SFTP.

You can retrieve the deployed integration flow URL from the SAP CPI manage integration content page.

This post uses SOAP UI to send the SAP MATMAS document over the HTTPS connection method. To send the file to SAP CPI, upload the SAP material IDoc structure in the HTTPS tool. The integration flow then processes the file to the S3 directory using AWS SFTP.

Sending the SAP material file to AWS SFTP using the HTTPS connection tool and SAP CPI integration.

When the processing is complete, you should see the SAP MATMAS file stored in the S3 directory for post-processing activities.

The SAP MATMAS file is stored in the AWS SFTP S3 directory for post-processing activities.

Conclusion

You can migrate your SAP file transfer workloads and SAP export files to S3 seamlessly by using the fully managed AWS SFTP service. You don't have to worry about managing and maintaining an SFTP server, or about data resilience, for your mission-critical workloads.

Five keys to a successful SAP migration on AWS

Feed: AWS for SAP.
Author: Alex Dzhan.

This is a guest post written by Alan Manuel, Chief Product Officer, Protera Technologies.

Regardless of industry, your customers have more choices than ever. To stay competitive you must meet their needs more quickly, more accurately, and with higher quality.

If your business runs SAP, you’re considering a comprehensive approach to meet these needs:

  1. Implementing new functionality offered by SAP HANA and SAP S/4HANA to enable the modern business process that your customers require.
  2. Executing your digital transformation strategy to unlock innovation and increase agility.
  3. Establishing a continuous improvement plan to improve service levels, reduce costs, and stay current with SAP's support plans.

Migrating your SAP to the cloud, in a process called SAP transformation, is the first step that many customers take towards implementing this approach.

Before sharing tips on successfully migrating SAP on Amazon Web Services (AWS), we'll share the experience of a company that has already made this transformation.

Joerns Healthcare migrates to SAP HANA on AWS

Joerns Healthcare is an international supplier and service provider in post-acute care. They have a suite of advanced injury and wound prevention, patient care and handling products, and professional services and programs. Joerns Healthcare receives orders 24x7x365, with 80% needing to be fulfilled the same day – within 2 to 24 hours.

“We had to be able to adapt to the faster growth we were experiencing, provide more stability for our SAP systems, and provide the business with real-time actionable data,” explains CIO Jeff Sadtler.

Joerns Healthcare supports customers across the United States, with 130 warehouses, 675 vehicles, and 650 field technicians. They decided to upgrade from ECC 6 to Suite on HANA for the real-time analytics to support their time sensitive business model. Joerns Healthcare selected AWS for the speed, scalability, and reliability in supporting the evolving needs of their business and customers.

“We were able to achieve all these goals and reduce our spend significantly enough to pull forth the next phase of our transformation, which is SAP Field Service Management,” says Sadtler.

To hear more of the Joerns Healthcare story, including lessons learned, watch the webinar.

Five Keys to SAP Transformation Success on AWS

We learned through experiences like that of Joerns Healthcare and other customer engagements that there are five keys to SAP transformation success on AWS.

Key 1: Understand your technical options

If you are like most customers, your SAP Transformation journey begins by understanding your technical options.

The first key to success is learning what options are available for you—and what might pose a risk—within your technical environment. This includes your current cloud capabilities, analytics, and existing systems.

Make sure that you:

  • Find out whether your existing operating system and database is supported in the cloud
  • Check your SAP kernel version and SAP basis version
  • Determine interdependencies with your SAP- and non-SAP systems
  • Select the right target architecture based upon SAP requirements and AWS offerings

Key 2: Assess your functional environment

The goal here is to determine how your users' business processes will change, and how that change impacts your systems.

That's because while cloud-based environments, including AWS and SAP, can bring new capabilities that optimize and improve support for modern business environments, these changes may also impact business processes and functions.

In this step:

  • Prioritize remediation of your simplification items
  • Check which functions or processes are no longer compatible in S/4HANA
  • Check SAP custom code and SAP Fiori apps

To learn about the remaining three keys, we invite you to watch the on-demand webinar “The Five Keys to SAP Transformation Success.”

Why PFC Brakes and Change Healthcare migrated SAP systems to AWS Cloud

Feed: AWS for SAP.
Author: Alex Dzhan.

This is a guest post written by Chance Veasey, SVP of SAP Line of Business, Velocity Technology Solutions.

The deadline to move to SAP S/4HANA was recently extended to 2027. Many companies are in the early stages of evaluating how and when to make this upgrade. For many organizations, the decision to adopt the cloud for their enterprise deployments is in the rear view mirror. Thus, migrating SAP to the cloud first, and then pursuing the upgrade to S/4HANA later, may be the best course of action for your company.

Migrating to the cloud now may allow your organization to save up to 30–40% in operating costs*, helping fund your larger upgrade initiatives later. Once in the cloud, your organization can begin to adopt the advanced features available on Amazon Web Services (AWS), such as data lakes, machine learning, and IoT. This can democratize the delivery of advanced technologies to your business users.

This provides the time you need to consider the impact of the upgrade to S/4HANA. More than with any prior implementation, it is essential to merge and prepare technical and functional teams for the move to S/4HANA. That's because with an S/4HANA implementation, you merge master ledgers, and your business processes meld with technology more than before.

AWS and SAP have worked together for more than 10 years. AWS offers a large inventory of SAP-certified instances readily available for both production and non-production usage. AWS also offers global infrastructure, scalability, and security. A move to AWS helps you eliminate costly, resource-heavy physical infrastructure upgrades and kick-start your innovation.

When running SAP on AWS, you can enable new use cases, such as:

  • Implementing high availability (HA) or disaster recovery
  • Deploying a hybrid cloud model
  • Extending your SAP environment across multiple Availability Zones
  • Integrating your data with a data lake

For a successful migration and ongoing post-migration support, you need a partner who can approach the migration from both a technical and functional perspective. This is a differentiator for Velocity Technology Solutions.

Velocity is a managed services company with broad expertise working with ERP apps. We have more than 15 years of SAP experience across industries including manufacturing, healthcare, services, and more.

We are actively launching new SAP workloads on AWS every month. For example, in December, we migrated three customers on SAP to AWS.

It may be helpful to read examples of other customers we’ve supported in their migrations to SAP on AWS.

PFC Brakes manufactures brakes for motor sports, auto manufacturers, and first responders. The company had aging infrastructure, crashing software, custom code, and lean resources. At the same time, PFC Brakes faced a mandate for a digital transformation and cloud strategy. As the company sought to upgrade to S/4HANA, it needed a managed services company to help with the subsequent re-engineering and process changes.

With Velocity’s help, PFC Brakes implemented S/4HANA and integrated it with shop floor systems. The company now has a scalable, stable system that enables forecasting and financial reporting.

“The expertise Velocity Technology Solutions brings as an APN Premier Consulting Partner combined with their AWS SAP Competency status allows us to focus more on what we do best—manufacture, sell, and distribute premium brake products,” said Scott Sprouse, Vice President of Information Technology at Performance Friction Corporation. “As a customer, we are proud Velocity Technology Solutions has achieved this highest level of accreditation in the AWS Partner Network.”

Learn more about PFC Brakes through this webinar.

This is an image of an architecture diagram for running S/4 on AWS.

Figure 1: Velocity reference architecture for S/4HANA on AWS. Moving from on-premises to AWS allowed for less downtime and faster performance.

At AWS re:Invent 2019, Change Healthcare shared how Velocity led the company through an SAP and Oracle migration to AWS. Within 12 months, Change Healthcare migrated a total of 185 systems.

To hear the complete story, including insightful tips on planning and moving your own SAP workloads, watch the presentation.

Architecture diagram that shows how a customer is running SAP on AWS.

Figure 2: The new IT infrastructure allowed the customer to be “digital enterprise” ready and turnaround times have been reduced from weeks to minutes.

Whether you are getting started with planning your SAP migration to AWS, or need advanced assistance with specific workloads or questions, Velocity can help.

*Based on internal Velocity Technology Solutions data


Red Hat Enterprise Linux (RHEL) high availability for SAP NetWeaver and HANA on AWS

Feed: AWS for SAP.
Author: Manas Srivastava.

One of the key things customers look for when deploying SAP workloads on AWS is having high availability (HA) set up for their business- and mission-critical SAP applications. In this blog post, we discuss the HA option for customers running their SAP workloads on Red Hat Enterprise Linux (RHEL). Red Hat provides SAP-certified high availability solutions for various SAP components through its HA add-on, which includes components such as Pacemaker, STONITH, Corosync, and resource agents. Customers can build HA clusters across AWS Availability Zones within a Region for SAP HANA and SAP NetWeaver-based applications.

AWS provides a broad variety of Amazon EC2 instances certified by SAP for SAP HANA and SAP NetWeaver-based applications. For SAP HANA, there are SAP-certified Amazon EC2 instances ranging from 244 GiB to 24 TiB of memory. For NetWeaver-based applications, there are more than 50 SAP supported Amazon EC2 instance types to choose from. For complete details, see the Certified and Supported SAP HANA Hardware Directory and SAP OSS Note 1656099 (login required). You can configure this high availability add-on from Red Hat for SAP HANA databases and SAP Central services (ASCS/SCS) on any SAP certified and supported Amazon EC2 instances.

An image of an architecture diagram for SAP ASCS and HANA set up as HA using RHEL cluster

Figure 1: SAP ASCS and HANA set up as HA using RHEL cluster

The preceding figure is the architecture representation of the SAP ABAP System Central Services (ASCS) and HANA HA setup. The ASCS, primary HANA node, and primary application server (PAS) are deployed in Availability Zone A (us-east-1a). The Enqueue Replication Server (ERS), standby HANA database, and additional application server (AAS) are deployed in Availability Zone B (us-east-1b). There are two RHEL clusters. The first is for the SAP application and is set up between the ASCS and ERS instances. The second is for the HANA setup between the primary and standby HANA database (DB). The Amazon Route 53 service is used here as a DNS service, which allows you to create a CNAME for the FQDN with your company domain name and the NLB DNS name. You can also leverage your on-premises DNS service for this purpose.

The details on how to set up the cluster are described in the upcoming sections. This starts with getting the right subscription that provides the required agents for the HA configuration.

The RHEL high availability add-on is only available as part of Red Hat Enterprise Linux for SAP Solutions. Customers have two subscription options:

  •  AWS Marketplace – Customers can choose to purchase subscriptions for RHEL for SAP with HA and US from the AWS Marketplace. This is available with either an on-demand or yearly subscription model, and is available across all AWS commercial and AWS GovCloud (US) Regions. Additionally, AWS Marketplace Amazon Machine Images (AMIs) are supported by AWS Premium Support.
  • Bring your own subscription (BYOS) – Red Hat provides an additional option to port existing or newly purchased Red Hat subscriptions to AWS with the Red Hat Cloud Access Program. This may be a desirable option for those who have already purchased a subscription from Red Hat and are migrating existing SAP workloads to AWS.

To get more details and a deeper understanding of which option is best for your organization, refer to the Red Hat Enterprise Linux for SAP offering on Amazon Web Services FAQ.

AWS and Red Hat extended the HA solution to AWS and documented the configuration guides to help with deployments. Refer to the RHEL HA for SAP HANA and RHEL HA for SAP NetWeaver from Red Hat for details.

This image shows the steps for deployment of HA setup of SAP ASCS and HANA using RHEL cluster. 1 - Review Guides, 2 - Subscribe from AWS Marketplace, 3 - Launch EC2 Instances, 4 - Provision and configure storage, 5 - Complete SAP HANA/NW Specific OS configurations, 6 - Install and Configure on both primary and secondary nodes, 7 - Setup the Cluster for ASCS/HANA, 8 - Automate and operate

Figure 2: Manual Deployment Process for HA setup of SAP ASCS and HANA using RHEL cluster

The preceding figure shows the flow of the manual deployment process for the HA setup of SAP ASCS and HANA using a RHEL cluster. You start by reviewing the guides from SAP and RHEL and getting the RHEL subscription either from the AWS Marketplace or via BYOS. After this, you launch the EC2 instances in the chosen Availability Zones and configure the required storage. This is followed by completing the OS-specific configuration and, finally, the installation of SAP HANA and NetWeaver. After this, you can refer to the SAP NetWeaver and HANA guides to configure the HA cluster.
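Once the cluster is configured, a quick check from either node confirms that resources are running where expected. The following is a minimal sketch using standard Pacemaker tooling; resource and constraint names vary by installation and are not specific to the guides above.

    # Show overall cluster, node, and resource status
    pcs status --full

    # List the configured ordering and colocation constraints
    pcs constraint

    # One-shot monitoring view that also shows inactive resources
    crm_mon -1r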

To help you get started quickly and assist you with your HA deployment for SAP HANA, we enhanced our SAP HANA on AWS Quick Start. This supports Multi-AZ single-node deployments with the RHEL for SAP with HA and US product from the AWS Marketplace. The Quick Start supports the performance-optimized setup for SAP HANA. It provisions both a primary and a standby HANA server, which have the same instance size and other infrastructure characteristics. They are deployed in separate private subnets in two different Availability Zones and are configured for synchronous HANA system replication (HSR) and HA. You can choose to create a new virtual private cloud (VPC) for your deployment, or provision the SAP HANA servers in your existing VPC infrastructure. The deployment takes less than 2 hours. Read through the deployment guide to get more details.

We hope you will benefit from this information about using RHEL as an operating system for deploying highly available SAP workloads on AWS. While we can set up the HA cluster in an automated way using the SAP HANA on AWS Quick Start, the configuration guide provides a deeper insight on the setup. To learn more about using Red Hat’s High Availability solutions for SAP workloads on AWS, check out the recorded webinar “Build a solid landscape for SAP HANA and S/4HANA on AWS” from AWS and Red Hat.

Let us know if you have any comments or questions—we value your feedback.

Tagging recommendations for SAP on AWS

Feed: AWS for SAP.
Author: Shivam Mittal.

Customers running SAP on AWS often ask us if we’ve seen reusable trends in tagging strategies for SAP workloads. Tags are simple labels consisting of a customer-defined key and an optional value. Tags enable customers to assign metadata to cloud resources, making it easier to manage, search, and filter existing resources.

In this post, we outline the benefits of tagging and provide recommendations for customers and partners deploying SAP workloads on AWS. Recommended tags are based on practices we’ve seen across a number of our engagements. Customers can directly use all of these tags or modify them to fit their own needs.

  • Customers use tags for operation and deployment automation activities, such as snapshots of storage volumes, OS patching, and AWS System Manager automation. SAP customers can also use tags for automating the start/stop of SAP servers, running cron jobs, and monitoring/alerting capabilities.
  • Partners use AWS tags for solution deployment. High availability cluster, backup, and monitoring solutions often rely on AWS resource tags for their operations.
  • AWS billing reports support the use of tags. Customers can create cost allocation tags that help identify pricing of AWS resources based on individual accounts, resources, business units, and SAP environments.
  • AWS Identity and Access Management (IAM) policies support tag-based conditions, enabling customers to constrain permissions based on specific tags and their values. IAM user or role permissions can include conditions to limit access to development, test, or production environments or Amazon Virtual Private Cloud (Amazon VPC) networks based on their tags.
  • Tags can be assigned to identify resources that require heightened security risk management practices. For example, Amazon Elastic Compute Cloud (Amazon EC2) instances hosting applications that process sensitive or confidential data. This can enable automated compliance checks to ensure that proper access controls are in place or that patch compliance is up-to-date.
  • Tags can be applied anytime: Tags can be created and applied after a resource is created. However, no information is captured between the time the resource was created and when the tag was applied.
  • Tags are not retroactive: Cost allocation reports are only available from the point in time they were activated. If cost allocation is activated in October, no information from September is displayed.
  • Tags are static snapshots in time: Changes made to tags after a report is executed are not reflected in previous reports.
  • Tags must be denoted for cost allocation: After creating a new tag, it must be activated as a cost allocation tag. If it is not, it is not visible in the Detailed Billing Report (DBR) or AWS Cost Explorer.
  • Define naming convention: Tags are case-sensitive, so define standards for your AWS resources. For example, tag key names should use upper CamelCase (or PascalCase) for manual creation. CamelCase combines words/abbreviations by beginning each word with a capital letter, such as MiscMetadata and SupportEndpoints.
  • Standardize delimiters: Do not use delimiters as part of tag values. This works well with case-sensitive tags.
  • Use concatenated/compound tagging: Combine multiple values for a tag key (Owner = JohnDoe | johndoe@company.com | 8005551234). PascalCase should be used to standardize compound tags.
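Returning to the start/stop automation mentioned in the first bullet above, the following is a minimal, hedged sketch of how such automation typically discovers SAP servers by tag. The tag key, value, and the decision to stop the instances are illustrative assumptions, not a prescribed standard.

    # Find SAP application servers tagged for automated stop (tag key and value are assumptions)
    INSTANCE_IDS=$(aws ec2 describe-instances \
      --filters "Name=tag:sap-auto-stop,Values=true" "Name=instance-state-name,Values=running" \
      --query "Reservations[].Instances[].InstanceId" --output text)

    # Stop them outside business hours, for example from a scheduled job
    aws ec2 stop-instances --instance-ids $INSTANCE_IDS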

Note: You can use a company-specific prefix ending in ":" to clearly differentiate company-defined tags from tags defined by AWS or required by third-party tools a customer may use.

Tag Name :name
Purpose Identifies the resource name. Can be the hostname of the SAP server.
Values String
Example: aws2sql01
Cost Allocation Tag? Yes

Tag Name :sap-product
Purpose Identifies the SAP product running on the resource.
Values String
Examples: ecc, bw, po, solman, content-server
Cost Allocation Tag? Yes

Tag Name :sid
Purpose Identifies the SAP system SID.
Values String
Cost Allocation Tag? No

Tag Name :landscape-type
Purpose Identifies the SAP landscape type, support or project.
Values String
Examples: n, n+1, n+2
Cost Allocation Tag? No

Tag Name :ha-node
Purpose Identifies the HA cluster node.
Values String
Examples: primary, secondary, disaster recovery (DR)
Cost Allocation Tag? No

Tag Name :backup
Purpose Identifies the backup policy for the server.
Values String
Examples: daily-full, daily-incremental, weekly-full
Cost Allocation Tag? No

Tag Name :environment-type
Purpose Identifies whether the resource is part of a production or non-production environment.
Values String
Examples: lab, development, staging, production
Cost Allocation Tag? No

Tag Name :created-by
Purpose For tracking the AWS account ID, IAM user name, or IAM role that created the resource.
Values String
Examples: account-id, user name, role session name
Cost Allocation Tag? Yes

Tag Name :application
Purpose Identifies the resource application name.
Values String
Example: sap
Cost Allocation Tag? Yes

Tag Name :app-tier
Purpose Designates the functional tier of the associated AWS resource. This key provides another way to deconstruct AWS spending to understand how each infrastructure subcomponent contributes to overall cost. It is also used for determining backup and disaster-recovery requirements, and is useful for threat modeling when using tools such as AWS Tiros.
Values String
Examples: web, app, data, network, other
Cost Allocation Tag? No

Tag Name :cost-center
Purpose Identifies the cost center of the department that is billed for the resource.
Values Numeric cost center code
Cost Allocation Tag? Yes

Customers can also consider tags for poweroff-time, poweron-time, business-stream, resource-owner-email, and support-team-email with their AWS resources.
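As an illustration of applying these tags with the AWS CLI, the following hedged sketch tags an SAP application server instance using a company prefix of abc, matching the example in the next section. The instance ID and tag values are placeholders.

    # Tag an SAP EC2 instance following the naming convention (instance ID and values are illustrative)
    aws ec2 create-tags \
      --resources i-0123456789abcdef0 \
      --tags Key=abc:name,Value=aws2sql01 \
             Key=abc:sap-product,Value=ecc \
             Key=abc:sid,Value=PRD \
             Key=abc:environment-type,Value=production \
             Key=abc:cost-center,Value=1234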

The screenshot below shows an example of some tags that have been set up. In this example, abc is the company name.

This is an image of sample tags that have been set up for ABC company. Tags include app-tier, application, backup, cost-center, created-by, environment-type, ha-node, landscape-type, resource-owner-email, product, sid, and support-team-email.

Figure 1: SAP Server Tagging Example

Tagging strategies differ from customer to customer depending on their needs. Our SAP Professional Services practice has found it useful to provide a prescriptive starting point for customers to build from. The most important aspects of tagging are defining what works for your organization and remaining precise and accurate. Please also review tag restrictions while preparing the tagging strategy for your SAP workloads.

Let us know if you have any comments or questions—we value your feedback.

Ensuring reliability and availability of your SAP applications

Feed: AWS for SAP.
Author: Alex Dzhan.

This is a guest post written by Chris Pomeroy, Vice President Solutions Architecture, Syntax

In a recent survey, 47% of respondents said that running SAP applications in the cloud was an important part of their IT strategy. Your business may be considering a cloud migration for your SAP workloads, but you have questions around the following topics:

  • Security—what are the security options in the cloud?
  • Reliability and availability—how do you best architect for availability in the cloud?
  • Compliance and governance—how are these achieved in the cloud?
  • Cost—how does the value of the cloud compare?
  • Strategy—do you have a cloud strategy?

For many companies, ensuring that systems are available and reliable is a priority. After all, these systems are critical to the business, which has led to the popularity of managed cloud solutions. A managed cloud is designed around high availability to help eliminate single points of failure in the IT infrastructure.

Moving to the cloud can help reduce the cost of resources to manage software, maintain hardware, and back up data. Instead, cloud vendors and cloud service providers can help with the responsibility of uptime and availability by delivering consistent monitoring and support as part of their service level agreements.

When you migrate SAP workloads to the AWS Cloud and run them there, you can also reduce risk. For example, by using multiple AWS Regions, you can store data in multiple locations around the world to eliminate single points of failure. SAP HANA System Replication offers automatic takeover, meaning you can automatically switch from your primary to secondary systems in the event of a service disruption.

In this process, virtual hostnames are employed so that users and technical components don’t have to change anything when an HA failover occurs. During an HA failover event, the SUSE Linux OS cluster software initiates a takeover using the HanaSR agent and the related OS commands. Fencing is configured to avoid cluster split-brain issues in the event of a network failure.

In terms of reliability, ABAP SAP Central Services (ASCS) enables enqueue replication and automatic failover. Volume snapshots and images are replicated to another Region, and Amazon Elastic File System is used for dynamic shared storage. This is also known as a Maximum Availability Architecture.

This is an image of an architecture diagram showing a production region in North Virginia with two availability zones and a secondary region running S3 in Oregon.

Figure 1 shows Syntax’s Maximum Availability Architecture

Another common customer interest is the desire to work as efficiently as possible from a cost and resourcing standpoint.

At Syntax, we call this “right-sizing” an environment. We help examine your AWS services and infrastructure to assess consumption, storage, and compute. We then make recommendations regarding the optimal usage to meet your business needs. We can provide a cloud readiness or architecture design session for SAP on AWS.

Additionally, it is important for companies to offer an internal, self-service cloud portal. This can enable employees to access multiple cloud services, and automatically calculate how much resource usage to charge back to each project or department. This is invaluable for budgeting and for building up audit trails for compliance purposes.

With over 40 years of experience, Syntax is a large independent ERP services provider with expertise and certifications across AWS and SAP. Syntax supports full-suite SAP and adjacent workloads, including cloud migration, hosting, and management services. Syntax is an AWS Advanced Consulting Partner and an SAP Gold Partner, with more than 200 accreditations and certifications.

Syntax services and tools can help you successfully initiate and execute an SAP migration project:

  • Syntax Enterprise Care® is an advanced monitoring and alerting tool that gives you visibility into your AWS environment. You can use it to monitor infrastructure, disaster recovery (DR) and backup, latency, databases, logs, as well as for technical purging and robust alerting capabilities.
  • The Syntax Cloud Portal enables you to track all your SAP infrastructure and resources in one place for maximum efficiency.

To learn more, read our whitepaper: Migrating Mission-Critical SAP Applications to AWS

How to use snapshots for SAP HANA database to create an automated recovery procedure

Feed: AWS for SAP.
Author: Arne Knoeller.

Many customers are looking into SAP migrations to the cloud. For each migration, every customer must define the appropriate architecture in the cloud. The defined service level agreements (SLAs) must be fulfilled, and the implemented procedures should fit to the operational processes.

In this blog post, we describe a cloud-native approach to demonstrate the power and capabilities of AWS. There are still good reasons to use HANA System Replication (HSR) or third-party cluster software to build productive systems in cloud environments. However, we focus on an alternative approach that uses cloud-native features, such as Amazon EC2 Auto Scaling and Amazon Elastic Block Store (EBS) Snapshots. With these features, we build an infrastructure with native backup/restore functionality, automated processes, and a focus on low cost for non-critical SAP applications.

A fast and automated restore process can provide new capabilities in cloud environments. With On-Demand Instances, instances can be provisioned when needed, which can increase the availability of the SAP system. Relying on an automated restore process removes the cost of standby instances, regardless of whether they were implemented with the pilot light approach or as hot standby. Furthermore, no additional license costs for third-party software are required.

Solution overview

Without standby resources for the HANA database, the challenge is to create a robust and highly available architecture across multiple Availability Zones. We should make sure that the restore process can be triggered in any Availability Zone within the Region.

Amazon EBS Snapshots are the foundation of the described architecture. Snapshots provide a fast backup process, independent of the database size. They are stored in Amazon Simple Storage Service (S3) and replicated across Availability Zones automatically, meaning we can create a new volume out of a snapshot in another Availability Zone. In addition, Amazon EBS snapshots are incremental by default: only the changes since the last snapshot are stored. To create a resilient, highly available architecture, automation is key. All steps to recover the database must be automated in case something fails.

The following picture shows the high-level architecture:

Architecture Overview

Set up Amazon EBS Snapshots and SAP HANA database integration

To get started, we use the SAP HANA on AWS Quick Start with the following modifications:

  1. The /backup file system is based on an EBS st1 volume by default. We replace this with /backup_efs, an NFS share provided by Amazon Elastic File System (Amazon EFS).
  2. The HANA configuration needs to be adjusted (parameters: basepath_logbackup, basepath_databackup, and basepath_catalogbackup). The backups are written directly to Amazon EFS, which is replicated across Availability Zones. Backups, especially log backups, are now securely stored and still available even if an issue affects an Availability Zone.
  3. In addition, the Amazon EFS Infrequent Access (EFS IA) option is activated. This automatically moves data that has not been accessed within 7, 14, 30, 60, or 90 days to the lower-cost storage class. This saves up to 92% of the costs compared to standard Amazon EFS with the recent Amazon EFS price reduction for Infrequent Access storage. HANA log backup data is a perfect use case for EFS IA because the data is only accessed during the recovery process.

Log backups are written automatically to /backup_efs. By default, SAP HANA triggers a log backup every 15 minutes. We can decrease this value to reduce the recovery point objective (RPO) even further.
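For example, the log backup interval can be lowered to five minutes with a statement like the following, run as the <sid>adm user through hdbsql. The parameter is the standard SAP HANA persistence setting; the hdbuserstore key and the 300-second value are illustrative, so validate them against your own RPO requirements.

    # Reduce the SAP HANA log backup interval to 300 seconds (key name and value are examples)
    hdbsql -U <hdbuserstore_key> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','log_backup_timeout_s') = '300' WITH RECONFIGURE;"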

Now that the log files are available and securely stored across multiple Availability Zones, we can configure the full database backup with snapshots.

HANA snapshot script

The script “aws-sap-hana-snapshot.sh” implements the following commands. The code examples below explain the most important ones:

  1. We must make the database aware of the storage snapshot, so an entry into the HANA backup catalog is required. The snapshot script automatically adds an entry before the EBS snapshot is executed with the following SQL command:
    BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT 'Snapshot created by AWS Instance Snapshot';

    To log on to the HANA database, we store the password of the system user in the HANA hdbuserstore (see the sketch after this list).

  2. Now that the database is aware of the storage snapshot, a roll-forward of the database is possible after the restore. To trigger a snapshot of the EBS volumes, we use the snapshot feature to create point-in-time, crash-consistent snapshots across multiple EBS volumes. The advantage of this feature is that no manual input/output (I/O) freeze on a volume level is required to bring multiple volumes in sync. Before this feature was available, the I/O freeze for data and log volumes had to be implemented with dmsetup suspend. The following code snippet executes the snapshot of all volumes except the root volume.
    # Capture the IDs of the created snapshots (the variable name is illustrative)
    snapshot_ids=($(aws ec2 create-snapshots --region $region --instance-specification InstanceId=$instanceid,ExcludeBootVolume=true --description $snapshot_description --tag-specifications "ResourceType=snapshot,Tags=[{Key=Createdby,Value=AWS-HANA-Snapshot_of_${HOSTNAME}}]" | jq -r ".Snapshots[] | .SnapshotId"))
  3. After the snapshot execution, we confirm the backup in the SAP HANA backup catalog:
    BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID $SnapshotID SUCCESSFUL 'AWS-Snapshot';

    If the snapshot was not successful according to the SAP HANA backup catalog, the snapshots on AWS get deleted.
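As a side note to step 1, the hdbuserstore entry used by the script can be created with commands along these lines, run as the <sid>adm OS user. The key name, hostname, and port are placeholders for your own system.

    # Create an hdbuserstore key for the snapshot script (key name, host, and port are illustrative)
    hdbuserstore SET BACKUPKEY "imdbmaster:30013" SYSTEM '<password>'

    # Verify the stored entry
    hdbuserstore LIST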

Now we have a backup process in place with full backups via the AWS snapshots and log backups written to EFS. Both storage locations are independent of the Availability Zone and can be accessed from another Availability Zone. Because of that, we can re-create the entire database with the AWS Auto Scaling group in another Availability Zone later on.

Set up AWS Auto Scaling

We will now set up an AWS Auto Scaling group with a minimum and maximum capacity of one instance. In case the Amazon Elastic Compute Cloud (Amazon EC2) instance has an issue, such as a hardware failure, the AWS Auto Scaling group automatically creates a new instance based on an Amazon Machine Image (AMI). By selecting multiple Availability Zones, the desired capacity is distributed across these Availability Zones.

The following section describes the automated restore process:

  1. Create AMI, which is used by the AWS Auto Scaling group later on.
    We need to delete all volumes that belong to /hana/data and /hana/log. These volumes are recreated out of the snapshots automatically and must not be included in the AMI.
    Create AMI
  2. Create launch configuration.
    Create Launch Configuration
  3. Within the launch configuration, we select the recently created AMI as a basis and the required instance size.
    Select AMI
  4. To start the restore procedure after a system crash, the userdata-hana-restore.sh script is stored in the Amazon EC2 user data and executed during the initial launch of the Amazon EC2 instance. New volumes of the most recent snapshot are created. These new volumes are attached to the Amazon EC2 instance, and we execute a roll-forward of the database logs to the most recent state. The script can be added under advanced details into the user data section.
    Insert User Data
  5. Once we create the launch configuration, we can set up the AWS Auto Scaling group.
    Create Auto Scaling Group
  6. The group size is one instance. We select all available subnets where the new instance can be deployed.
    Configure Auto Scaling Group Size
  7. We keep the group size at its initial size.
    Set Auto Scaling Group Size

HANA restore script

Let’s have a closer look at the restore.sh script and each step during the restore process.

As a prerequisite, two parameters are required in the AWS Systems Manager Parameter Store. These parameters list the volume IDs for the SAP HANA data and log volumes.

  1. To create the parameters with the command line interface, we use the following commands:
    aws ssm put-parameter --name -hdb-datavolumes --type StringList --value vol-1,vol-2,vol-3
    aws ssm put-parameter --name -hdb-logvolumes --type StringList --value vol-1,vol-2

    This step is required only once during the setup to create the parameters in the AWS Parameter Store. The values get updated later on automatically by the script.

  2. To create new volumes out of the latest snapshot, the script is looking for the latest snapshot created by the snapshot script and the snapshot-id.
    LATESTSNAPDATEDATA[$i]=$(aws ec2 describe-snapshots --filters Name=volume-id,Values=${DATAVOLID[$i]} Name=status,Values=completed Name=tag:Createdby,Values=AWS-HANA-Snapshot_of_${HOSTNAME} | jq -r ".Snapshots[] | .StartTime" | sort -r | awk 'NR ==1')
    SNAPIDDATA[$i]=$(aws ec2 describe-snapshots --filters Name=start-time,Values=${LATESTSNAPDATEDATA[$i]} Name=volume-id,Values=${DATAVOLID[$i]} | jq -r ".Snapshots[] | .SnapshotId")

    The restore process should use the latest snapshot to reduce the number of log files to recover. The AMI was created without SAP HANA data and log volumes, and these volumes must be created out of the EBS snapshot.

  3. Create a new volume out of the snapshot.
    NEWVOLDATA[$i]=$(aws ec2 create-volume --region $REGION --availability-zone $AZ --snapshot-id ${SNAPIDDATA[$i]} --volume-type gp2 --output=text --query VolumeId)

    We recommend waiting until the volumes are available before attaching them to the instance (see the sketch after this list).

  4. Attach volumes to instance:
    aws ec2 attach-volume --volume-id ${NEWVOLDATA[$i]} --instance-id $INSTANCEID --device ${DATADEVICEINFO[$i]}

    If we started the database now, it would be in a crash-consistent state, based on the time the snapshot was taken. To recover the database to the most recent state, we can use the log files stored on EFS. The AMI automatically mounts the EFS file system during startup.

    imdbmaster:/dev # cat /etc/fstab
    […]
    fs-xxxxxx.efs.eu-west-1.amazonaws.com:/       /hana-efs-backup/       nfs4    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0
    
  5. To recover SAP HANA to the most recent state, we must trigger a point-in-time recovery. This indicates that the recovery is based on a snapshot and with a timestamp in the future.
    sudo -u $SIDADM -i hdbsql -U $HDBUSERSTORE "RECOVER DATABASE FOR $HDBSID UNTIL TIMESTAMP '2099-01-01 12:00:00' CLEAR LOG USING CATALOG PATH ('$HDBCATALOG') USING SNAPSHOT;"
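Regarding the note in step 3 about waiting for the volumes, the AWS CLI provides a waiter that blocks until newly created volumes are available. A minimal sketch, reusing the variable names from the script excerpts above:

    # Block until the newly created data volumes are in the "available" state
    aws ec2 wait volume-available --volume-ids "${NEWVOLDATA[@]}"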

It is important to consider the prerequisites for the snapshot script, especially disabling the automatic start of the HANA tenant database after the database instance starts. If the tenant is online, it is no longer possible to recover log files. With the no-restart option, we can prevent the automatic tenant start.

SQL command to set no-restart mode:

ALTER DATABASE  NO RESTART;

The restore time depends on two main aspects: 1) time to create new volumes from the snapshot and 2) the number of log files to recover. The volume creation depends on the volume size and the volume type. With Amazon EBS Fast Snapshot Restore, it is possible to reduce the time to initialize newly created volumes. The database recovery process depends on the number of log files and change rate of the database since the snapshot was created.

Additional things to consider

The IP address of the newly created instance will change after the AWS Auto Scaling group launches a new instance, and the SAP application server must be aware of this. It is possible to change the DNS entry in Amazon Route 53 and update the IP. In addition, the AWS Auto Scaling group can launch the new instance in another Availability Zone. With this configuration, the application server might remain in a different Availability Zone than the database, creating cross-Availability Zone traffic with slightly higher latency.
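A hedged sketch of such a DNS update could use a Route 53 UPSERT. The hosted zone ID, record name, TTL, and instance ID below are placeholders, not values from the solution itself.

    # Look up the private IP of the newly launched instance (instance ID is a placeholder)
    NEW_IP=$(aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
      --query "Reservations[0].Instances[0].PrivateIpAddress" --output text)

    # Point the SAP HANA DNS record at the new IP (zone ID and record name are placeholders)
    aws route53 change-resource-record-sets \
      --hosted-zone-id Z0123456789EXAMPLE \
      --change-batch "{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{\"Name\":\"hanadb.example.internal\",\"Type\":\"A\",\"TTL\":60,\"ResourceRecords\":[{\"Value\":\"${NEW_IP}\"}]}}]}"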

Conclusion

It is possible to build a highly available architecture for SAP HANA with automatic recovery to the most recent state, across different Availability Zones, at low cost. Amazon EC2 Spot Instances can reduce the cost even further: if a Spot Instance is reclaimed, AWS Auto Scaling automatically recovers the database for non-critical systems. For even higher availability or for productive workloads, there are tradeoffs, and HANA System Replication can be used instead.

How to integrate Amazon WorkSpaces with SAP Single Sign-On

Feed: AWS for SAP.
Author: Hank Lee.

SAP Single Sign-On allows users to have secure access to SAP and non-SAP systems using centralized authentication, whether the systems are on premises or in the cloud. SAP Single Sign-On simplifies managing user authentication and securing data communication, and integrates with two-factor and risk-based authentication, such as possession of a mobile phone or an RSA SecurID card. Moreover, SAP Single Sign-On supports different types of authentication methods, including Kerberos/SPNEGO, X.509 certificates, and Security Assertion Markup Language (SAML). In this blog, we share how to integrate SAP Single Sign-On(*) based on Kerberos/SPNEGO with Amazon WorkSpaces. We also cover how to use your existing Active Directory service, either in the public cloud or in an on-premises environment, to quickly provide thousands of desktops to workers across the globe.

(*)Based on the SAP Note 1848999, licenses for SAP Single Sign-On are required. Please contact your SAP account executive for more detail.

In general, there are three scenarios:

  1. Integrate Amazon WorkSpaces with AD Connector and existing Active Directory on-premises environment (figure 1)
  2. Integrate Amazon WorkSpaces with Azure Active Directory (Azure AD) (figure 2)
  3. Integrate Amazon WorkSpaces with AD Connector, pre-built Active Directory (AD) in Amazon Elastic Compute Cloud (Amazon EC2), and AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) (figures 3-1 and 3-2)

In this blog, we will be sharing an example of scenario three using AWS Managed Microsoft AD.

You can apply scenario one using the process outlined in the blog “How to Connect Your On-Premises Active Directory to AWS Using AD Connector.” Scenario two is covered in the blog “Add your WorkSpaces to Azure AD using Azure Active Directory Domain Services.”

Follow SAP Note 66971 to ensure that the selected Windows version is supported with SAP GUI. In this blog, we demonstrate the GUI installation on Windows 10.

Scenario 1 architecture – Amazon WorkSpaces and AD Connector with AD on-premises

Figure 1 Amazon WorkSpaces and AD Connector with AD on-premises

Scenario 2 architecture – Amazon WorkSpaces and Azure AD

Amazon WorkSpaces and Azure AD

Figure 2 Amazon WorkSpaces and Azure AD

Scenario 3 architecture1 – Amazon WorkSpaces and AD Connector with AD built in Amazon EC2

Amazon WorkSpaces and AD Connector with AD built in Amazon EC2

Figure 3-1 Amazon WorkSpaces and AD Connector with AD built in Amazon EC2

Scenario 3 architecture2 – Amazon WorkSpaces and AWS Managed Microsoft AD

Amazon WorkSpaces and AWS Managed Microsoft AD

Figure 3-2 Amazon WorkSpaces and AWS Managed Microsoft AD

Prerequisites:

  • You already have an AWS account and a default Amazon Virtual Private Cloud (VPC).
  • You deploy Amazon WorkSpaces and AWS Managed Microsoft AD in a public subnet.
  • You can access Amazon WorkSpaces over the public internet.
  • You have an existing SAP license for SAP Single Sign-On.
  • You can deploy Amazon WorkSpaces in an available Region.

Deployment Steps:

  1. Setup AWS Managed Microsoft AD.
  2. Launch Amazon WorkSpaces for selected users created in AWS Managed Microsoft AD.
  3. Install SAP GUI and SAP Secure Login Client on the launched Amazon WorkSpaces.
  4. Configure the SAP GUI single sign-on (SSO) feature accordingly.
  5. Test the SAP GUI SSO feature.
  6. (Optional) Build up Amazon WorkSpaces image and bundles for scale-out usage and centralize the SAP GUI logon entry in a shared Windows file system.
  7. (Alternative) Deploy Amazon AppStream2.0 with SAP GUI single sign-on.

Walkthrough:

Setup AWS Managed Microsoft AD and deploy Amazon WorkSpaces

Download the sample code here to deploy AWS Managed Microsoft AD and Amazon WorkSpaces.

Requirements:

Overview:

There are two CDK stacks in app.py file:

  • AWSManagedAD:
    • Create AWS Managed Microsoft AD.
    • Create an Amazon Route 53 private hosted zone and an A record pointing to AWS Managed Microsoft AD.
    • Create an Amazon EC2 Windows instance for domain user/group management.
    • Create an AWS Systems Manager Parameter and Document that attaches to the Amazon EC2 instance to join AWS Managed Microsoft AD automatically.
    • Create an AWS Lambda function to register Amazon WorkSpaces with AWS Managed Microsoft AD.
  • AWSWorkSpaces:
    • Create an Amazon WorkSpaces for SAP GUI configuration

Setup process:

  1. Clone the sample GitHub repo to a folder on your device and navigate into the folder.
  2. Create two AWS Secrets Manager secrets. One stores the domain admin password, and the other stores the name of a pre-created Amazon EC2 key pair. The name of the secret key is “Key”. The password should comply with the AWS Managed Microsoft AD password rules (a hedged CLI sketch follows this list).
    The image shows the password for AWS Managed Microsoft AD admin user. The value is stored in AWS Secrets Manager. The image shows the Amazon EC2 key pair name. The value is stored in AWS Secrets Manager.
  3. Edit cdk.json file to meet your environment
    	{  
    	  "app": "python3 app.py",  
    	  "context": {  
    	      "Account": "",  
    	      "Region": "",  
    	      "Domain_name": "",  
    	      "Secret_domain_password_arn": "",  
    	      "Instance_type": "",  
    	      "VpcId": "",  
    	      "Subnet1": [ "", "" ],  
    	      "Subnet2": [ "", "" ],  
    	      "Secret_keypair_arn": "< Secret Manager for EC2 Key Value ARN value >",  
    	      "WorkSpacesUser" : "",  
    	      "WorkSpacesBundle": "wsb-8vbljg4r6"  
    	  }  
    	}  
    

    Parameters:

    Region: Choose the Region that supports the AWS Directory Service and Amazon WorkSpaces. In this blog, I use the Region in us-west-2.
    Domain_name: Fill in the preferred domain name for AWS Managed Microsoft AD. I use test.lab in this blog.
    Secret_domain_password_arn: Input the secret Amazon Resource Name (ARN) value for domain admin password secret.
    Instance_type: Refer to the Amazon EC2 Documentation for the instance type.
    Subnet[1|2]: Fill in the list value for the two subnets in the same VpcId. The former element in the array is the subnet ID, and the latter is the Availability Zone where the subnet resides.
    Secret_keypair_arn: Input Secret ARN value for the Amazon EC2 key pair secret.
    WorkSpacesUser: Fill in the user name that you will create after the AWS Managed Microsoft AD is built. The format is NETBIOS\AD_USER.
    WorkSpacesBundle: Fill in the default Amazon WorkSpaces bundle ID to deploy SAP GUI. I picked up wsb-8vbljg4r6, which is for Standard Windows 10.

  4. Install the Python required libraries for cdk.
    $ pip install -r requirement.txt
  5. Run the CDK bootstrap on your AWS account.
    $ cdk bootstrap  aws:///
  6. Deploy the AWSManagedAD stack with your AWS profile.
    $ cdk deploy AWSManagedAD --profile
    If you don’t specify your AWS profile, the default profile will be used. This stack might take 10-20 minutes to deploy all resources.
  7. Once the AWSManagedAD stack is deployed, you can log into the Amazon EC2 instance and create a domain user for Amazon WorkSpaces later. Revise the default security group to connect from your local environment to your Amazon EC2 instance. Please specify First Name, Last Name and the Email for the user.
  8. Deploy the AWSWorkSpaces stack with a specified domain user.
    $ cdk deploy AWSWorkSpaces --profile
    This stack might take 10 minutes to deploy Amazon WorkSpaces.
  9. Once the Amazon WorkSpaces instance is built, you can download and install the Amazon WorkSpaces client, fill in the registration code from the Amazon WorkSpaces console, and log in with the domain user.
    The image shows login page for Amazon WorkSpaces client.
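As mentioned in step 2, the two secrets can also be created from the CLI. The following is a hedged sketch; the secret names and values are placeholders, and the password must follow the AWS Managed Microsoft AD password rules.

    # Secret holding the domain admin password (name and value are placeholders)
    aws secretsmanager create-secret \
      --name sap-workspaces/domain-admin-password \
      --secret-string '{"Key":"YourStrongP@ssw0rd"}'

    # Secret holding the pre-created EC2 key pair name (name and value are placeholders)
    aws secretsmanager create-secret \
      --name sap-workspaces/ec2-keypair-name \
      --secret-string '{"Key":"my-ec2-keypair"}'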

Configure the SAP GUI single sign-on (SSO) feature

Requirements:

  • The SAP GUI and SAP Secure Login Client are installed in Amazon WorkSpaces.
  • Log into Amazon WorkSpaces with the specified domain user.
  • The network connection from Amazon WorkSpaces to the SAP Systems is allowed.
  • SAP Systems are installed on your own.

Steps:

  1. Use the Tcode SNCWIZARD to set up the Secure Network Communications (SNC) identity and change the SAP profile parameters accordingly.
  2. Create a domain user for SSO. In this sample, the user is “Hank”. Update the service attribute ServicePrincipalName in “Attribute Editor.”
    The image shows the step to set up ServicePrincipalName property for Windows domain user.
  3. Create a Kerberos User in Tcode SNCWIZARD to match the domain user. Ensure the user principal is in GREEN LIGHT.
    The image shows the first step in SAP T-code SNCWIZARD.The image shows the SPNego configuration result for a domain user.
  4. Copy the SNC name to map to the NetWeaver User in Tcode SU01.
    The image shows the User Mapping value for domain userThe image shows how to fill in SNC name to map SAP GUI users in T-code SU01.
  5. Complete the remainder of the process in SNCWIZARD.
    The image shows the final step in SAP T-code SNCWIZARD.The image shows the final step in SAP T-code SNCWIZARD.

Test the SAP GUI SSO feature

  1. Activate the Secure Network Communication option in the SAP GUI and input CN name.
    The image shows configuration for SAP GUI logon entry.
  2. Double-click the designated SAP System to check whether the single sign-on function is ready.
    The image shows success for SAP GUI single sign-on result.

(Optional) Build up Amazon WorkSpaces image and bundles for scale-out usage and centralize the SAP GUI logon entry in a shared Windows file system.

You might have many staff members who need access to SAP systems with the SAP GUI. In a traditional environment, each user must install the SAP GUI on their laptop and set up SAP logon entries repeatedly. With Amazon WorkSpaces, you can easily duplicate pre-built bundles and provision the environment to each user in a few clicks by following the documentation.

Next, you can create a bundle to deploy a pre-built desktop for each user. Instead of each user maintaining SAP GUI logon entries separately, they can use a shared file system (either Amazon FSx for Windows File Server or a self-built Windows file system) to adopt a centralized SAP logon entry for users in different departments, reducing overall effort. Moreover, you can deploy SAP GUI Installation Server to push customized Windows scripts to users. The detailed process is in the SAPGUI Installation Server Part 5 – Scripting blog.

The following process shows two Amazon WorkSpaces sharing the same Amazon FSx file system for SAP GUI logon entry.

Requirements:

  • An Amazon FSx file system is created in the same domain as Amazon WorkSpaces and mapped as a network drive in the first Amazon WorkSpaces instance.
  • The domain users are in the same organization and have permission to access the Amazon FSx file system.
  • Set up proper security groups and network ACLs (NACLs) between Amazon WorkSpaces and the Amazon FSx file system.

(Amazon WorkSpaces1)

  1. The pre-configured SAP GUI and the SAPGUILandscape.xml (SAP logon entry) location are changed to the shared folder Z:
    The image shows SAP logon entry configuration file in first Amazon WorkSpaces.

    (Amazon WorkSpaces2)
  2. Build another Amazon WorkSpaces instance, connect to the same shared folder, and change the SAP GUI option (Server Configuration Files) to point to the same SAPGUILandscape.xml.
    The image shows SAP logon entry configuration file can be read in first Amazon WorkSpaces.
    (Amazon WorkSpaces2)
  3. After you restart the SAP GUI, the system lists are ready.
    The image shows the result that second Amazon WorkSpaces imported the shared SAP GUI logon entry.

Last, since SAP SNC is mapped one-to-one from the domain user to the SAP GUI user, you need to pre-map the different domain users to the related SAP GUI users. The other Amazon WorkSpaces domain users can then use SSO to access the SAP systems.

(Alternative) Deploy Amazon AppStream2.0 with SAP GUI SSO

Amazon AppStream 2.0 is a fully managed application streaming service that provides your users secure access to your SAP environment through a browser on any computer, including PCs, Chromebooks, and Macs.

There are several differences when configuring Amazon AppStream 2.0 compared with Amazon WorkSpaces:

  • Amazon AppStream 2.0 uses Directory Configs rather than AWS Directory Service. You should pre-configure Directory Configs and ensure that the Amazon AppStream 2.0 Image Builder can connect to the domain in the starting stage. (You can replace the Dynamic Host Configuration Protocol (DHCP) option set in the VPC with a new option set that uses a custom Domain Name System (DNS) server.)
  • Once the Amazon AppStream 2.0 Image Builder is ready, and the configuration for SAP GUI SSO in the image is done, create a related fleet and associate it to a stack.
  • Amazon AppStream 2.0 is a browser-based service, so the authentication is verified by a SAML 2.0 identity provider from Windows domain. You can refer to the AWS blog How to Enable Server-Side LDAPS for Your AWS Microsoft AD Directory to set up Active Directory Federation Services (ADFS) in your environment and Enabling Identity Federation with AD FS 3.0 and Amazon AppStream 2.0 to connect Amazon AppStream 2.0 to the stack.

The test environment is below.

Amazon AppStream 2.0 with AWS Managed Microsoft AD

  1. Configure the Amazon AppStream 2.0 directory config.
    The image shows the Directory Configs in Amazon Appstream 2.0.
  2. Create an Amazon AppStream 2.0 image.
    The image shows the Amazon Appstream 2.0 Image Assistant.The image shows the creation of Amazon Appstream 2.0 Image.
  3. Create a federation rule in the ADFS
    The image shows the SAML configuration in ADFS.
  4. The Amazon AppStream 2.0 with SAP GUI SSO result is below
    The image shows success for SAP GUI single sign-on result from Amazon Appstream 2.0 URL.

You can refer to the Deploying SAP GUI on Amazon AppStream 2.0 guide for detailed steps.

Extension

To extend the usage of Amazon WorkSpaces for disaster recovery in different AWS Regions, you can build Amazon WorkSpaces in multiple Regions and replicate the configuration by referring to the AWS blog Building a multi-region disaster recovery environment for Amazon WorkSpaces.

Conclusion

Using Amazon WorkSpaces to integrate the SAP GUI SSO feature is flexible and scalable for SAP users around the world. Besides being easy to deploy, Amazon WorkSpaces can reduce the capital costs of moving your Windows desktops to the cloud.
