
VPC Subnet Zoning Patterns for SAP on AWS, Part 1: Internal-Only Access


Feed: AWS for SAP.
Author: Somckit Khemmanivanh.

This post is by Harpreet Singh and Derek Ewell, Solutions Architects at Amazon Web Services (AWS).

SAP landscapes that need to reside within the corporate firewall are comparatively easy to architect, but that’s not the case for SAP applications that need to be accessed both internally and externally. In these scenarios, there is often confusion regarding which components are required and where they should be placed.

In this series of blog posts, we’ll introduce Amazon Virtual Private Cloud (Amazon VPC) subnet zoning patterns for SAP applications, and demonstrate their use through examples. We’ll show you several architectural design patterns based on access routes, and then follow up with detailed diagrams based on potential customer scenarios, along with configuration details for security groups, route tables, and network access control lists (ACLs).

To correctly identify the subnet an application should be placed in, you’ll want to understand how the application will be accessed. Let’s look at some possible ways in which SAP applications can be accessed:

  • Internal-only access: These applications are accessed only internally. They aren’t allowed to be accessed externally under any circumstances, except by SAP support teams. In this case, the user or application needs to be within the corporate network, either connected directly or through a virtual private network (VPN). SAP Enterprise Resource Planning (ERP), SAP S/4HANA, SAP Business Warehouse (BW), and SAP BW/4HANA are examples of applications for which most organizations require internal-only access.
  • Internal and controlled external access: These applications are accessed internally, but limited access is also provided to known external parties. For example, SAP Process Integration (PI) or SAP Process Orchestration (PO) can be used from internal interfaces, but known external parties might be also allowed to interface with the software from whitelisted IPs. Additionally, integration with external software as a service (SaaS) solutions, such as SAP SuccessFactors, SAP Cloud Platform, and SAP Ariba, may be desirable, to add functionality to SAP solutions running on AWS.
  • Internal and uncontrolled external access: Applications like SAP hybris or an external-facing SAP Enterprise Portal fall into this category. These applications are mostly accessible publicly, but they have components that are meant for internal access, such as components for administration, configuration, and integration with other internal applications.
  • External-only access: This is a rare scenario, because an application will need to be accessible internally for basic administration tasks such as backups, access management, and interfaces, even if most of its components are externally accessible. Due to the infrequency of this scenario, we won’t cover it in this series of blog posts.

In this blog post, we’ll cover possible architecture patterns for the first category of applications (applications that are accessible only internally). We’ll cover the two other scenarios in upcoming blog posts. In our discussions for all three scenarios, we’ll assume that you will access the Internet from your AWS-based resources, via a network address translation (NAT) device (for IPv4), an egress-only Internet gateway (for IPv6), or similar means, to deal with patching, updates, and other related scenarios. This access will still be controlled in a way to limit or eliminate inbound (Internet to AWS Cloud) access requests.

Architectural design patterns for internal-only access

We’ll look at two design patterns for this category of SAP applications, based on where the database and app server are placed: in the same private subnet or in separate private subnets.

Database and app server in a single private subnet

This setup contains three subnets:

  • Public subnet: An SAProuter, along with a NAT gateway or NAT instance, is placed in this subnet. Only the public IPs specified by SAP are allowed to connect to the SAProuter (a security group sketch for this rule follows this list). See SAP Note 28976, Remote connection data sheet, for details. (SAP notes require SAP Support Portal access.)
  • Management private subnet: Management tools like Microsoft System Center Configuration Manager (SCCM), and admin or jump hosts are placed in this subnet. Applications in this subnet aren’t accessed by end users directly, but are required for supporting end users. Applications in this subnet can access the Internet via NAT gateways or NAT instances placed in the public subnet.
  • Apps & database private subnet: Applications and databases are placed in this subnet. SAP applications can be accessed by end users via SAPGUI or over HTTP/S via the SAP Web Dispatcher. End users aren’t allowed to access databases directly.
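As a concrete illustration of the public subnet rule for the SAProuter, here is a hedged AWS CLI sketch. The VPC ID, security group ID, and SAP source address are placeholders; SAProuter listens on TCP 3299 by default, and the actual SAP addresses to allow are the ones published in SAP Note 28976.

# Placeholder IDs; replace with your own values. Restrict SAProuter (TCP 3299) to the SAP
# support address from SAP Note 28976 (203.0.113.10/32 is only an example).
aws ec2 create-security-group --group-name saprouter-sg \
  --description "SAProuter: inbound from SAP support only" --vpc-id vpc-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 3299 --cidr 203.0.113.10/32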

Database and app server in different private subnets

This setup includes four subnets. The public subnet and the management private subnet have the same functions as in the previous scenario, but the third subnet (the apps & database private subnet) has been divided into two separate private subnets for applications and databases. The database private subnet is not accessible from the user environment.

As you can see, there isn’t much difference between these two approaches. However, the second approach protects the database better by shielding the database subnet behind its own route tables and network ACLs, which gives you finer control over access to the database layer.
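One way to realize that separation, sketched here with the AWS CLI and placeholder IDs, is to give the database subnet its own dedicated route table, which starts out with only the local VPC route:

# Placeholder IDs; a newly created route table contains only the local VPC route, so instances
# in the database subnet get no path to the internet or to the VPN range unless you add one.
DB_RTB=$(aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0 \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 associate-route-table --route-table-id "$DB_RTB" --subnet-id subnet-0db000000000000db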

Putting our knowledge to use

Let’s put this in context by discussing an example implementation.

Example scenario

You need to deploy SAP S/4HANA, SAP Fiori, and SAP BW (ABAP and Java) on HANA. These applications should be accessible only from the corporate network. The applications will require integration with Active Directory (AD) or Active Directory Federation Services (ADFS) for single sign-on (SSO) based on Security Assertion Markup Language (SAML). SAP BW will have file-based integration with legacy applications as well, and will communicate with an SSH File Transfer Protocol (SFTP) server for this purpose. SAP should be able to access these systems for support. SAP Solution Manager is based on SAP ASE and will be used for central monitoring and change management of SAP applications. All applications are assumed to be on SUSE Linux Enterprise Server (SLES).

Solution on AWS

In this example, we are presuming only one EC2 instance per solution element. If workloads are scaled horizontally, or high availability is necessary, you may choose to include multiple, functionally similar EC2 instances in the same security group. In that case, you’ll need to add a rule to your security groups that allows traffic between instances in the same group. You will use an IPsec-based VPN for connectivity between your corporate network and the VPC. If Red Hat Enterprise Linux (RHEL) or Microsoft Windows Server is used, some configuration changes may be necessary in the security groups, route tables, and network ACLs. You can refer to the operating system product documentation, or other sources such as the Security Group Rules Reference in the Amazon Elastic Compute Cloud (EC2) documentation, for more information. Certain systems will remain on premises, such as the primary AD or ADFS servers and the legacy SFTP server.
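For example, a hedged sketch of such a rule (the group ID is a placeholder) allows all TCP traffic between instances that share the same security group, which covers horizontally scaled application servers or HA pairs:

# Placeholder group ID; members of the security group can reach each other on any TCP port.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 0-65535 --source-group sg-0123456789abcdef0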

Here’s an architectural diagram of the solution:

The architecture diagram assumes the following example setup:

The following table shows the security group sample configurations. This represents a high-level view of the rules to be defined in the security groups. For exact port numbers or ranges, please refer to the SAP product documentation.

The flow of network traffic is managed by these sample route tables:

*AWS Data Provider requires access to AWS APIs for Amazon CloudWatch, Amazon S3, and Amazon EC2. Further details are available in the AWS Data Provider for SAP Installation and Operations Guide.

For an additional layer of security for our instances, we can use network ACLs, such as those shown in the following table. Network ACLs are fast and efficient, and provide another layer of control in addition to the security groups shown in the previous table. For additional security recommendations, see AWS Security Best Practices.
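As a minimal sketch (ACL ID, rule number, and corporate CIDR are placeholders), a network ACL entry that admits SAP GUI traffic from the corporate network on the standard dispatcher port range could look like this; ports 3200-3299 correspond to instance numbers 00-99:

# Placeholder values: acl-0123456789abcdef0 is the ACL on the apps & database subnet, and
# 10.0.0.0/8 stands in for the corporate network reachable over the VPN.
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
  --ingress --rule-number 100 --protocol tcp --port-range From=3200,To=3299 \
  --cidr-block 10.0.0.0/8 --rule-action allow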

In certain cases (for example, for OS patches), you may need additional Internet access from EC2 instances; and route tables, network ACLs, and security groups will be adjusted to allow this access temporarily.

What’s next?

In this post, we have defined and demonstrated by example the subnet zoning pattern for applications that require internal-only access. Stay tuned for the next blog post in this series for a discussion of the other subnet zoning patterns we introduced in this post.

We would like to hear about your experiences in setting up VPCs for SAP applications on AWS. If you have any questions or suggestions about this blog post, feel free to contact us.

Next article in this series: VPC Subnet Zoning Patterns for SAP on AWS, Part 2: Network Zoning


itelligence and AWS collaborate to bring value to SAP customers


Feed: AWS for SAP.
Author: Megan Johnson.

itelligence and AWS: Meet our latest AWS Partner Network (APN) member focused on providing global SAP value and solutions via the AWS Cloud.
This post is by Bas Kamphuis, General Manager, SAP at Amazon Web Services (AWS).

https://www.prnewswire.com/news-releases/itelligence-announces-collaboration-with-amazon-web-services-for-cloud-solutions-300540532.html

With 76% of the world’s transactions touching an SAP system at some point, SAP continues to dominate the Enterprise Resource Planning (ERP) market. All of the tens of thousands of customers on SAP’s flagship software release, ERP Central Component (ECC), will be moving somewhere by 2025, when SAP retires maintenance for that release, to enable the innovations powered by SAP’s HANA in-memory database, like S/4HANA and BW/4HANA. This catalyst for change is causing the entire SAP customer and partner ecosystem to take a strategic look at their SAP landscapes and to define how they will enable their digital transformation with HANA.

As one of the world’s most successful SAP global partners, itelligence understands the opportunities and complexities many of these SAP customers face today. They serve large global organizations, and have been implementing and maintaining SAP environments for decades. The AWS and itelligence partnership will focus on enabling customers to attain the benefits of the AWS Cloud (like elasticity, resiliency, security, and global scale) in combination with itelligence’s award-winning SAP implementation and operational expertise. Customers can now benefit from both AWS and SAP innovations without risk or disruption to their existing business operations.

As we continue our joint efforts, customers will see more AWS-fueled itelligence offerings that help SAP customers achieve agility in consuming SAP innovations, increased choice, lower costs, and reduced time to value.

Today, customers are able to:

  • Retire their on-premises capital expenses by executing data center migrations to AWS
  • Accelerate SAP migrations from weeks to days by using proven tools and best practices
  • Harden their corporate security and increase their resiliency by leveraging multi-region business continuity and disaster recovery architectures
  • Consolidate SAP landscapes and move production environments to AWS on the largest selection of certified HANA offerings for public cloud

In addition, itelligence is integrating AWS into their managed services offerings to help customers reduce the complexity of their IT environment. Together we are looking to integrate AWS offerings like artificial intelligence, big data and machine learning to proactively automate, monitor, and maintain customers’ SAP environments.

We are very excited about our relationship with itelligence, and we will continue to co-innovate with them to bring the best solutions and offerings to SAP customers. On behalf of the entire AWS organization, we would like to thank the itelligence team for their commitment and leadership. Welcome to the AWS Partner Network.

Bas

How on-premises users can access a SUSE HAE-protected SAP HANA instance through Amazon Route 53


Feed: AWS for SAP.
Author: Stefan Schneider.

Stefan Schneider is a solutions architect at Amazon Web Services (AWS).

This blog post describes the Amazon Route 53 agent, which enables on-premises users to access an SAP HANA database that is protected by SUSE Linux Enterprise High Availability Extension (SLES HAE) in the AWS Cloud. The agent provides this functionality by dispatching users through Amazon Route 53.

The agent requires setting up SUSE HAE for high availability (HA) failover, implemented through the overlay IP address agent, as described in SAP Note 2309342, SUSE Linux Enterprise High Availability Extension on AWS. (SAP notes require SAP Service Marketplace credentials.)

The Route 53 agent extends the features of SLES HAE, including the Pacemaker cluster resource management framework, beyond protecting SAP HANA databases. Using this agent jointly with the overlay IP address agent enables SAP users to use SLES HAE for all SLES-supported configurations in the AWS Cloud, including SAP Central Instances (CIs).

The Route 53 agent is currently available as an unsupported open-source tool, and the source code is provided in this blog post. AWS is currently working with SUSE to make the agent available in the upstream repository as a supported tool. You can install the agent after you set up SAP HANA in your AWS account.

How the Route 53 agent works

The current overlay IP address agent allows application servers inside a virtual private cloud (VPC) to access a protected SAP HANA server in that VPC, but doesn’t provide access to on-premises applications.

This causes some inconvenience for on-premises users, because it requires applications like HANA Studio to be managed inside the VPC via RDP or a jump server. The Route 53 agent works around this restriction by using a name-based approach to allow on-premises users to connect to the VPC. The two agents operate in parallel: The overlay IP agent routes traffic from the overlay IP address to the active node. The Route 53 agent updates the name of the SAP HANA server with the current IP address.

I’ve described the internal workings of this agent in my article DNS Name Failover for Highly Available AWS Services on the Scaling Bits website. The article describes how the Route 53 hosted zone gets updated.

The Route 53 agent is independent of SAP. It also works with the SAP NetWeaver Central Instance (CI) components of SLES HAE.

Prerequisites

This article assumes that you’ve already installed the overlay IP address agent, including the SLES Pacemaker cluster. In addition, the Route 53 agent requires:

  • Policies for your SLES HAE cluster instances to update Route 53 records
  • A profile for your root user
  • A Route 53 private hosted zone

Adding policies

Add the following policy to your SLES HAE cluster instances, to enable them to update Route 53 A records.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1471878724000",
            "Effect": "Allow",
            "Action": [
                "route53:ChangeResourceRecordSets",
                "route53:GetChange",
                "route53:ListResourceRecordSets",
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
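A hedged way to attach this policy, assuming the cluster nodes already run under an EC2 instance profile role (the role and policy names below are placeholders), is to add it as an inline policy with the AWS CLI:

# Save the JSON above as route53-agent-policy.json; "sles-hae-cluster-role" is a placeholder
# for the IAM role attached to your cluster instances.
aws iam put-role-policy --role-name sles-hae-cluster-role \
  --policy-name route53-failover --policy-document file://route53-agent-policy.json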

Creating an AWS profile for your root user

The agent calls AWS CLI commands by using an AWS profile, and will use the same profile as the overlay IP agent. It may need a proxy configuration as well, as described on the Scaling Bits website.

You can choose any profile name. The agent uses cluster as the default name; if you pick a different name, update the references accordingly.
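For example, assuming you keep the default name, the profile can be created as root with the AWS CLI (the region shown is only an example; use the region your cluster runs in):

# Run as root on each cluster node; creates the "cluster" profile referenced later.
aws configure set region us-east-1 --profile cluster   # replace with your region
aws configure set output text --profile cluster        # the agent parses text output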

Creating a Route 53 private hosted zone

The agent updates an A record in a Route 53 hosted zone. This means that you'll need the required infrastructure in your AWS account. For information about how to create a private hosted zone, see the AWS documentation.

You will need the following (shown here with example values):

  • hostedzoneid: Z22XDORCGO4P3T
  • fullname: suse-service.awslab.cloud.mylab.corp. (The very last dot matters!)
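Using the example values above, a hedged CLI sketch for creating the private hosted zone and reading back its ID might look like this (the VPC ID and region are placeholders):

# Placeholder VPC ID and region; the caller reference just has to be unique per request.
aws route53 create-hosted-zone --name awslab.cloud.mylab.corp \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0 \
  --caller-reference sap-hae-$(date +%s) \
  --hosted-zone-config Comment="Private zone for SLES HAE",PrivateZone=true
aws route53 list-hosted-zones-by-name --dns-name awslab.cloud.mylab.corp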

Installing the agent

Copy the source code listed at the end of this blog post into a text file and place it in the directory /usr/lib/ocf/resource.d/aws. This source code is available under the MIT license.
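For example (the file name is assumed from the resource type used in the cluster configuration, ocf:aws:aws-vpc-route53), on every cluster node:

# Run as root on each node; the agent must be executable and live in the "aws" provider directory.
mkdir -p /usr/lib/ocf/resource.d/aws
cp aws-vpc-route53 /usr/lib/ocf/resource.d/aws/aws-vpc-route53
chmod 755 /usr/lib/ocf/resource.d/aws/aws-vpc-route53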

Configuring the cluster

In Pacemaker, edit the configuration of your cluster (crm configure edit) as follows:

primitive res_AWS_ROUTE53 ocf:aws:aws-vpc-route53 \
   params hostedzoneid=Z22XDORCGO4P3T ttl=10 fullname=suse-service5.awslab.cloud.mylab.corp. profile=cluster \
   op start interval=0 timeout=180 \
   op stop interval=0 timeout=180 \
   op monitor interval=300 timeout=180 \
   meta target-role=Started

Replace the following required parameters with the appropriate values:

  • hostedzoneid: The host zone ID of Route 53. This is the Route 53 record table.
  • ttl: Time to live (TTL) for the ARECORD in Route 53, in seconds. (10 is a reasonable default value.)
  • fullname: The full name of the service that will host the IP address; for example, suse-service.awslab.cloud.mylab.corp. (The last period is important!)
  • profile: The name of the AWS CLI profile of the root account. The file /root/.aws/config should have an entry that looks like this:
    • [profile cluster] – where cluster represents your profile name
    • region = us-east-1 (specify your current region)
    • output = text (this setting is required)

Configuring AWS-specific constraints

The Route 53 agent has to run on the same node as the SAP HANA database. You can use a colocation constraint to force it onto the same node.

Create a file called aws-route53-constraint.txt with the following content. Make sure that you use the same resource identifier as before.

colocation col_Route53 2000: res_AWS_ROUTE53:Started msl_SAPHana_SID_HDB00:Master
order ord_SAPHana 2000: cln_SAPHanaTopology_SID_HDB00 msl_SAPHana_SID_HDB00

In this example, the SAP SID is encoded as part of the resource name. This will differ in your configuration.

Add this file to the configuration by running the following command as a superuser (the command uses the file name aws-route53-constraint.txt):

crm configure load update aws-route53-constraint.txt

Summary

The Route 53 agent is used with the Pacemaker cluster resource management framework to extend the features of SLES HAE beyond protecting SAP HANA databases. It allows users to protect SAP Central Instances by dispatching end-users through Route 53 to find the active ABAP SAP Central Services (ASCS) server.

The agent runs as a dependent agent to the HAE SAP agents. It doesn’t require individual administration.

If you need on-premises access to your SLES HAE systems, we encourage you to install the agent, and let us know if you have any questions or feedback.

Source code for the Route 53 agent

#!/bin/bash
#
#   Copyright 2017 Amazon.com, Inc. and its affiliates. All Rights Reserved.
#   Licensed under the MIT License.
#

# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is furnished to do
# so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.

#
# OCF resource agent to move an IP address within a VPC in the AWS
# Written by Stefan Schneider (AWS), Martin Tegmeier (AWS)
# Based on code of Markus Guertler (SUSE)
#
#
# Mar. 15, 2017, vers 1.0.2

#######################################################################
# Initialization:

: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs

OCF_RESKEY_ttl_default=10

: ${OCF_RESKEY_ttl:=${OCF_RESKEY_ttl_default}}

#######################################################################

usage() {
	cat <<-EOT
	usage: $0 {start|stop|status|monitor|validate-all|meta-data}
	EOT
}

metadata() {
cat <<END
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="aws-vpc-route53">
<version>1.0</version>

<longdesc lang="en">
Update Route53 record of Amazon Webservices EC2 by updating an entry in a
hosted zone ID table.

AWS instances will require policies which allow them to update Route53 ARecords:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1471878724000",
            "Effect": "Allow",
            "Action": [
                "route53:ChangeResourceRecordSets",
                "route53:GetChange",
                "route53:ListResourceRecordSets"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Example Cluster Configuration:

Use a configuration in "crm configure edit" which looks as follows. Replace
hostedzoneid, fullname and profile with the appropriate values:

primitive res_route53 ocf:heartbeat:aws-vpc-route53
        params hostedzoneid=Z22XDORCGO4P3T fullname=suse-service5.awslab.cloud.sap.corp. profile=cluster
        op start interval=0 timeout=180
        op stop interval=0 timeout=180
        op monitor interval=300 timeout=180
        meta target-role=Started
</longdesc>
<shortdesc lang="en">Update Route53 VPC record for AWS EC2</shortdesc>

<parameters>
<parameter name="hostedzoneid" required="1">
<longdesc lang="en">
Hosted zone ID of Route 53. This is the table of
the Route 53 record.
</longdesc>
<shortdesc lang="en">AWS hosted zone ID</shortdesc>
<content type="string" default="" />
</parameter>

<parameter name="fullname" required="1">
<longdesc lang="en">
The full name of the service which will host the IP address.
Example: suse-service5.awslab.cloud.sap.corp.
Note: The trailing dot is important to Route53!
</longdesc>
<shortdesc lang="en">Full service name</shortdesc>
<content type="string" default="" />
</parameter>

<parameter name="ttl" required="0">
<longdesc lang="en">
Time to live for Route53 ARECORD
</longdesc>
<shortdesc lang="en">ARECORD TTL</shortdesc>
<content type="string" default="${OCF_RESKEY_ttl_default}" />
</parameter>

<parameter name="profile" required="1">
<longdesc lang="en">
The name of the AWS CLI profile of the root account. This
profile will have to use the "text" format for CLI output.
The file /root/.aws/config should have an entry which looks
like:

  [profile cluster]
        region = us-east-1
        output = text

"cluster" is the name which has to be used in the cluster
configuration. The region has to be the current one. The
output has to be "text".
</longdesc>
<shortdesc lang="en">AWS Profile Name</shortdesc>
<content type="string" default="" />
</parameter>
</parameters>

<actions>
<action name="start" timeout="180" />
<action name="stop" timeout="180" />
<action name="monitor" depth="0" timeout="180" interval="300" />
<action name="validate-all" timeout="5" />
<action name="meta-data" timeout="5" />
</actions>
</resource-agent>
END
}

debugger() {
	ocf_log debug "DEBUG: $1"
}

ec2ip_validate() {
	debugger "function: validate"

	# Full name
	[[ -z "$OCF_RESKEY_fullname" ]] && ocf_log error "Full name parameter not set $OCF_RESKEY_fullname!" && exit $OCF_ERR_CONFIGURED

	# Hosted Zone ID
	[[ -z "$OCF_RESKEY_hostedzoneid" ]] && ocf_log error "Hosted Zone ID parameter not set $OCF_RESKEY_hostedzoneid!" && exit $OCF_ERR_CONFIGURED

	# profile
	[[ -z "$OCF_RESKEY_profile" ]] && ocf_log error "AWS CLI profile not set $OCF_RESKEY_profile!" && exit $OCF_ERR_CONFIGURED

	# TTL
	[[ -z "$OCF_RESKEY_ttl" ]] && ocf_log error "TTL not set $OCF_RESKEY_ttl!" && exit $OCF_ERR_CONFIGURED

	debugger "Testing aws command"
	aws --version 2>&1
	if [ "$?" -gt 0 ]; then
		error "Error while executing aws command as user root! Please check if AWS CLI tools (Python flavor) are properly installed and configured." && exit $OCF_ERR_INSTALLED
	fi
	debugger "ok"

	if [ -n "$OCF_RESKEY_profile" ]; then
		AWS_PROFILE_OPT="--profile $OCF_RESKEY_profile"
	else
		AWS_PROFILE_OPT="--profile default"
	fi

	return $OCF_SUCCESS
}

ec2ip_monitor() {
	ec2ip_validate
	debugger "function: ec2ip_monitor: check Route53 record "
	IPADDRESS="$(ec2metadata aws ip | grep local-ipv4 | /usr/bin/awk '{ print $2 }')"
	ARECORD="$(aws $AWS_PROFILE_OPT route53 list-resource-record-sets --hosted-zone-id $OCF_RESKEY_hostedzoneid --query "ResourceRecordSets[?Name=='$OCF_RESKEY_fullname']" | grep RESOURCERECORDS | /usr/bin/awk '{ print $2 }' )"
	debugger "function: ec2ip_monitor: found IP address: $ARECORD ."
	if [ "${ARECORD}" == "${IPADDRESS}" ]; then
		debugger "function: ec2ip_monitor:  ARECORD $ARECORD found"
		return $OCF_SUCCESS
	else
		debugger "function: ec2ip_monitor: no ARECORD found"
		return $OCF_NOT_RUNNING
	fi

	return $OCF_SUCCESS
}

ec2ip_stop() {
	ocf_log info "EC2: Bringing down Route53 agent. (Will remove ARECORD)"
	IPADDRESS="$(ec2metadata aws ip | grep local-ipv4 | /usr/bin/awk '{ print $2 }')"
	ARECORD="$(aws $AWS_PROFILE_OPT route53 list-resource-record-sets --hosted-zone-id $OCF_RESKEY_hostedzoneid --query "ResourceRecordSets[?Name=='$OCF_RESKEY_fullname']" | grep RESOURCERECORDS | /usr/bin/awk '{ print $2 }' )"
	debugger "function: ec2ip_monitor: found IP address: $ARECORD ."
	if [ "${ARECORD}" != "${IPADDRESS}" ]; then
		debugger "function: ec2ip_monitor: no ARECORD found"
		return $OCF_SUCCESS
	else
		debugger "function: ec2ip_monitor:	ARECORD $ARECORD found"
		# determine IP address
		IPADDRESS="$(ec2metadata aws ip | grep local-ipv4 | /usr/bin/awk '{ print $2 }')"
		# Patch file
		debugger "function ec2ip_stop: will delete IP address to ${IPADDRESS}"
		ocf_log info "EC2: Updating Route53 $OCF_RESKEY_hostedzoneid with $IPADDRESS for $OCF_RESKEY_fullname"
		ROUTE53RECORD="/var/tmp/route53-${OCF_RESKEY_hostedzoneid}-${IPADDRESS}.json"
		echo "{ " > ${ROUTE53RECORD}
		echo "	  "Comment": "Update record to reflect new IP address for a system ", " >>	${ROUTE53RECORD}
		echo "	  "Changes": [ " >>  ${ROUTE53RECORD}
		echo "		  { " >>  ${ROUTE53RECORD}
		echo "			  "Action": "DELETE", " >>	${ROUTE53RECORD}
		echo "			  "ResourceRecordSet": { " >>	 ${ROUTE53RECORD}
		echo "				  "Name": "${OCF_RESKEY_fullname}", " >>  ${ROUTE53RECORD}
		echo "				  "Type": "A", " >>	 ${ROUTE53RECORD}
		echo "				  "TTL": ${OCF_RESKEY_ttl}, " >>	${ROUTE53RECORD}
		echo "				  "ResourceRecords": [ " >>  ${ROUTE53RECORD}
		echo "					  { " >>  ${ROUTE53RECORD}
		echo "						  "Value": "${IPADDRESS}" " >>	${ROUTE53RECORD}
		echo "					  } " >>  ${ROUTE53RECORD}
		echo "				  ] " >>  ${ROUTE53RECORD}
		echo "			  } " >>  ${ROUTE53RECORD}
		echo "		  } " >>  ${ROUTE53RECORD}
		echo "	  ] " >>  ${ROUTE53RECORD}
		echo "}" >> ${ROUTE53RECORD}
		cmd="aws --profile ${OCF_RESKEY_profile} route53 change-resource-record-sets --hosted-zone-id ${OCF_RESKEY_hostedzoneid} 
		  --change-batch file://${ROUTE53RECORD} "
		debugger "function ec2ip_start: executing command: $cmd"
		CHANGEID=$($cmd | grep CHANGEINFO |	 /usr/bin/awk -F't' '{ print $3 }' )
		debugger "Change id: ${CHANGEID}"
		rm ${ROUTE53RECORD}
		CHANGEID=$(echo $CHANGEID |cut -d'/' -f 3 |cut -d'"' -f 1 )
		debugger "Change id: ${CHANGEID}"
		STATUS="PENDING"
		MYSECONDS=2
		while [ "$STATUS" = 'PENDING' ]; do
			sleep	${MYSECONDS}
			STATUS="$(aws --profile ${OCF_RESKEY_profile} route53 get-change --id $CHANGEID | grep CHANGEINFO |  /usr/bin/awk -F't' '{ print $4 }' |cut -d'"' -f 2 )"
			debugger "Waited for ${MYSECONDS} seconds and checked execution of Route 53 update status: ${STATUS} "
		done

		return $OCF_SUCCESS
	fi

	return $OCF_SUCCESS
}

ec2ip_start() {
	# determine IP address
	IPADDRESS="$(ec2metadata aws ip | grep local-ipv4 | /usr/bin/awk '{ print $2 }')"
	# Patch file
	debugger "function ec2ip_start: will update IP address to ${IPADDRESS}"
	ocf_log info "EC2: Updating Route53 $OCF_RESKEY_hostedzoneid with $IPADDRESS for $OCF_RESKEY_fullname"
	ROUTE53RECORD="/var/tmp/route53-${OCF_RESKEY_hostedzoneid}-${IPADDRESS}.json"
	echo "{ " > ${ROUTE53RECORD}
	echo "    "Comment": "Update record to reflect new IP address for a system ", " >>  ${ROUTE53RECORD}
	echo "    "Changes": [ " >>  ${ROUTE53RECORD}
	echo "        { " >>  ${ROUTE53RECORD}
	echo "            "Action": "UPSERT", " >>  ${ROUTE53RECORD}
	echo "            "ResourceRecordSet": { " >>  ${ROUTE53RECORD}
	echo "                "Name": "${OCF_RESKEY_fullname}", " >>  ${ROUTE53RECORD}
	echo "                "Type": "A", " >>  ${ROUTE53RECORD}
	echo "                "TTL": ${OCF_RESKEY_ttl} , " >>  ${ROUTE53RECORD}
	echo "                "ResourceRecords": [ " >>  ${ROUTE53RECORD}
	echo "                    { " >>  ${ROUTE53RECORD}
	echo "                        "Value": "${IPADDRESS}" " >>  ${ROUTE53RECORD}
	echo "                    } " >>  ${ROUTE53RECORD}
	echo "                ] " >>  ${ROUTE53RECORD}
	echo "            } " >>  ${ROUTE53RECORD}
	echo "        } " >>  ${ROUTE53RECORD}
	echo "    ] " >>  ${ROUTE53RECORD}
	echo "}" >> ${ROUTE53RECORD}
	cmd="aws --profile ${OCF_RESKEY_profile} route53 change-resource-record-sets --hosted-zone-id ${OCF_RESKEY_hostedzoneid} 
	  --change-batch file://${ROUTE53RECORD} "
	debugger "function ec2ip_start: executing command: $cmd"
	CHANGEID=$($cmd | grep CHANGEINFO |  /usr/bin/awk -F't' '{ print $3 }' )
	debugger "Change id: ${CHANGEID}"
	rm ${ROUTE53RECORD}
	CHANGEID=$(echo $CHANGEID |cut -d'/' -f 3 |cut -d'"' -f 1 )
	debugger "Change id: ${CHANGEID}"
	STATUS="PENDING"
	MYSECONDS=2
	while [ "$STATUS" = 'PENDING' ]; do
		sleep  ${MYSECONDS}
		STATUS="$(aws --profile ${OCF_RESKEY_profile} route53 get-change --id $CHANGEID | grep CHANGEINFO |  /usr/bin/awk -F't' '{ print $4 }' |cut -d'"' -f 2 )"
		debugger "Waited for ${MYSECONDS} seconds and checked execution of Route 53 update status: ${STATUS} "
	done

	return $OCF_SUCCESS
}

###############################################################################

case $__OCF_ACTION in
	usage|help)
		usage
		exit $OCF_SUCCESS
		;;
	meta-data)
		metadata
		exit $OCF_SUCCESS
		;;
	monitor)
		ec2ip_monitor
		;;
	stop)
		ec2ip_stop
		;;
	validate-all)
		ec2ip_validate
		;;
	start)
		ec2ip_start
		;;
	*)
		usage
		exit $OCF_ERR_UNIMPLEMENTED
		;;
esac

VPC Subnet Zoning Patterns for SAP on AWS, Part 3: Internal and External Access


Feed: AWS for SAP.
Author: Somckit Khemmanivanh.

This post is by Harpreet Singh and Derek Ewell, Solutions Architects at Amazon Web Services (AWS).

In part one of this article series on virtual private cloud (VPC) subnet zoning patterns, we described possible ways in which SAP applications may be accessed, and then discussed Amazon Virtual Private Cloud (Amazon VPC) subnet zoning patterns for internal-only access in detail. In the second article in the series, we discussed how traditional application network zoning can be mapped to AWS. In this concluding article of the series, we’ll discuss access patterns for SAP applications that require access to and from both internal and external sources. Access from external sources may be controlled (that is, it may involve known external parties) or uncontrolled (that is, the application may be publicly accessible)—we’ll cover both scenarios.

Internal and controlled external access

SAP Process Orchestration (PO)/Process Integration (PI) is the perfect example of this scenario, because in most cases it requires access from trusted external parties for external interfaces, and internal access for internal interfaces between SAP and/or non-SAP systems. Internal interfaces are easy to manage—essentially, you define the appropriate traffic in route tables, network access control lists (ACLs), and security groups. However, the challenge lies in providing external access securely, so let’s focus on egress and ingress traffic from trusted external parties. There are four typical options:

  • Virtual private network (VPN) connections for both ingress and egress
  • Elastic Load Balancing for ingress, and network address translation (NAT) gateways for egress
  • NAT devices (NAT instances or NAT gateways)
  • Reverse proxies

VPN connections (ingress and egress)

If you want to have a dedicated virtual connection with your trusted external parties, you can establish a site-to-site IPsec VPN connection either by using a virtual private gateway (VGW) in your VPC, or by having your own software VPN server, such as those available in the AWS Marketplace, in a public subnet.

Note   The architecture diagrams in this blog post show a single Availability Zone for simplicity. However, for high availability, we recommend configuring the solution across at least two Availability Zones.

Figure 1: VPN connection for controlled external access

Elastic Load Balancing (ingress) / NAT gateway (egress)

Elastic Load Balancing offers three types of load balancers:

  • Classic Load Balancer
  • Network Load Balancer
  • Application Load Balancer

Classic Load Balancers are intended for applications that were built for the EC2-Classic platform. If you’re using a VPC, we recommend using a Network Load Balancer or an Application Load Balancer. A Network Load Balancer operates at the connection level (layer 4) of the Open Systems Interconnection (OSI) model and is ideal for TCP traffic load balancing, while an Application Load Balancer operates at the request level (layer 7) and is the ideal choice for HTTP or HTTPS traffic load balancing.
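Before looking at the SAP-specific examples, here is a hedged AWS CLI sketch of how an internet-facing Application Load Balancer is typically assembled (a Network Load Balancer is created the same way with --type network). Every ID, ARN, and name below is a placeholder, and /sap/public/ping is only an assumed health check path.

# Placeholders throughout; the ALB terminates HTTPS and forwards to SAP Web Dispatcher instances.
aws elbv2 create-load-balancer --name sap-pi-alb --scheme internet-facing --type application \
  --subnets subnet-0aaaaaaaaaaaaaaa0 subnet-0bbbbbbbbbbbbbbb0 --security-groups sg-0123456789abcdef0
aws elbv2 create-target-group --name sap-webdisp-tg --protocol HTTPS --port 443 \
  --vpc-id vpc-0123456789abcdef0 --health-check-path /sap/public/ping
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>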

Let’s consider two examples here, one for SSH File Transfer Protocol (SFTP) and the other for HTTPS.

  • SFTP. Let’s say you have an SFTP server in a private subnet that needs to be externally accessible from trusted external parties. In this scenario, you can use an internet-facing Network Load Balancer in a public subnet, and control access to trusted external parties through the security group associated with the SFTP server in a private subnet. (There is no security group associated with a Network Load Balancer). For outbound (egress) traffic, say, from SAP PI/PO to SFTP servers of trusted external parties, a NAT gateway is required. You can use Amazon Route 53 to register your organization’s external domain name and resolve fully qualified domain names (FQDNs) to the load balancer.

    Figure 2: Network Load Balancer for external access

  • HTTPS. For this second example, let’s say you have web service based interfaces to SAP PI/PO that need to be externally available. In this scenario, access will be based on SSL (HTTPS), so an Application Load Balancer is the perfect fit for ingress traffic. Access from known IPs will be controlled through the security group attached to the load balancer. On the other hand, if SAP PI/PO needs to consume web services exposed by your trusted external parties, you’ll need a NAT gateway. You can use Amazon Route 53 here as well for domain name registration and FQDN resolution.

    Figure 3: Application Load Balancer for external access

Other alternatives

NAT instances and reverse proxies are possible alternatives to using Elastic Load Balancing. However, we recommend that you use Application or Network Load Balancers in internet-facing configurations, because these managed offerings take away the overhead of managing NAT instances and reverse proxies.

  • NAT devices. AWS offers two kinds of NAT devices—NAT gateways and NAT instances. NAT instances are based on an Amazon Machine Image (AMI), whereas NAT gateways are a managed AWS service. Both of these devices provide internet access for your EC2 instances in private subnets. You can also enable port forwarding on NAT instances to allow external parties to access your applications running on EC2 instances in private subnets (a port-forwarding sketch follows this list). Let’s look at the previous SFTP example again, where an SFTP server in a private subnet needs to be accessed by known external parties for file-based interfaces with SAP PI/PO. A NAT instance (after enabling port forwarding) will protect your SFTP server (which is in a private subnet) from direct external access while enabling external parties to access it. You can configure the security group attached to the NAT instance to allow traffic only from known external IPs for controlled access. If all your interfaces are based on outbound connections from SAP PI/PO, NAT gateways are a perfect fit.

    Figure 4: NAT instance for port forwarding

  • Reverse proxy (ingress) / NAT gateway (egress). A reverse proxy is used for ingress HTTP/HTTPS traffic. A reverse proxy in a public subnet allows external parties to establish HTTP/HTTPS connections to SAP PO/PI servers in private subnets. You can use SAP Web Dispatcher as a reverse proxy for ingress traffic, or you can use third-party products like F5 BIG-IP. If you’re using SAP Web Dispatcher, we recommend that you configure it for reverse invoke connection. You can use a NAT gateway, as in the previous scenario, for egress traffic.

    Figure 5: Reverse proxy for external access
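Returning to the NAT instance option above, the port forwarding itself is plain Linux NAT. Here is a hedged sketch, run as root on the NAT instance; the partner CIDR and the private SFTP server address are placeholders.

# The NAT instance must also have source/destination checking disabled, for example:
#   aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check
echo 1 > /proc/sys/net/ipv4/ip_forward
# Forward inbound SFTP (TCP 22) from the known partner range to the private SFTP server.
iptables -t nat -A PREROUTING -p tcp -s 198.51.100.0/24 --dport 22 \
  -j DNAT --to-destination 10.0.2.25:22
iptables -t nat -A POSTROUTING -d 10.0.2.25 -p tcp --dport 22 -j MASQUERADE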

In both of these (NAT and reverse proxy) scenarios, to retain a static public IP address, you will need to use Elastic IP addresses. In addition, you can use Amazon Route 53 for domain name registration and FQDN resolution.

Other application-specific options, such as SAP Cloud Connector, are also available. Cloud Connector establishes connections with the SAP Cloud Platform over HTTPS. The connection is always invoked by Cloud Connector, but after the connection is established, data can be sent both ways (through a reverse invoke connection). We recommend that you place Cloud Connector in the extranet zone, with internet access via a NAT gateway.

Figure 6: Integration with SAP Cloud Platform

Internal and uncontrolled external access

Internal access to applications in this category needs to be managed in a similar way as in the previous scenario (internal and controlled external access), that is, by defining appropriate route tables, network ACLs, and security groups. So, we will focus on uncontrolled external access in this section.

SAP Fiori is a common example of an SAP application where you may need external access without a VPN connection as well as internal access. Other examples include the SAP hybris platform, an externally accessible SAP Enterprise Portal (EP), and an SAP Supplier Relationship Management (SRM) system. However, in most cases, uncontrolled external access for SAP systems is limited to HTTP/HTTPS. Let’s consider the example of an SAP Fiori front-end server in a central hub deployment, running on SAP NetWeaver Gateway.

Figure 7: Application Load Balancer for external access

An Application Load Balancer in an external zone acts as the entry point for HTTP/HTTPS requests. The load balancer sends requests to SAP Web Dispatcher. AWS Shield protects the environment from distributed denial of service (DDoS) attacks and, when used in conjunction with AWS WAF on the load balancer, helps protect the SAP NetWeaver Gateway from common web exploits. AWS Shield is a managed DDoS protection service that safeguards web applications running on AWS. There are two tiers of AWS Shield: Standard and Advanced. There is no additional charge for AWS Shield Standard.

What’s next?

In this post, we discussed scenarios for accessing SAP applications both internally and externally. If you’d like to discuss your specific scenarios, or if you have any questions or suggestions about this blog post, please contact us.

New SAP Certifications for AWS Instances and World Record Benchmark Results


Feed: AWS for SAP.
Author: Steven Jones.

Steven Jones is a Technology Director at Amazon Web Services (AWS).

The aspect I enjoy most about working at Amazon Web Services (AWS) is the opportunity to work closely with customers as they develop and pursue their individual migration strategies for moving mission-critical workloads to the AWS Cloud. Most importantly, it’s these types of conversations that drive our roadmap.

In May 2016, we announced the availability of our x1.32xlarge instance type with 2 TB of RAM, purpose-built for running large-scale SAP HANA deployments in the AWS Cloud.

In August 2016, we announced SAP certification and support for large, scale-out HANA clusters up to 7 nodes or 14 TB of RAM. This was followed by the addition of our x1.16xlarge instance type with 1 TB of RAM in October 2016.

Back in May of this year, we announced our x1e.32xlarge instance type with 4 TB of RAM for deployments that needed a lot of RAM in a single system, SAP support for very large HANA scale-out clusters of 17 nodes or 34 TB of RAM, and a roadmap for 2018 with plans to support even larger Amazon Elastic Compute Cloud (Amazon EC2) instances with RAM between 8 TB and 16 TB of memory.

We continue to work to support additional deployment options for SAP workloads and have a couple of updates. We released five smaller X1e sizes earlier this month. These additional instance types are now available and certified for SAP NetWeaver on anyDB deployments (SQL Server, Oracle, IBM DB2, etc.). With a high ratio of memory to CPU, these X1e sizes offer a great choice for database instances.

Model          vCPUs   Memory (GiB)   Networking performance   SAPS
x1e.xlarge         4            122   Up to 10 Gbps              4,109
x1e.2xlarge        8            244   Up to 10 Gbps              8,219
x1e.4xlarge       16            488   Up to 10 Gbps             16,438
x1e.8xlarge       32            976   Up to 10 Gbps             32,875
x1e.16xlarge      64          1,952   10 Gbps                   65,750
x1e.32xlarge     128          3,904   25 Gbps                  131,500

And today, I’m pleased to announce that SAP has extended certification for even larger SAP HANA scale-out clusters on X1 instances, leveraging up to 25 x1.32xlarge nodes or 50 TB of RAM.

Certification details can be found in the SAP Certified and Supported SAP HANA Hardware Directory.

In conjunction with this extension, on November 9, 2017, SAP certified our World Record results for the SAP Business Warehouse (BW) Edition for SAP HANA Standard Application Benchmark version 2 executed in the cloud deployment type.* With a dataset of 46.8 billion initial records, this is the largest benchmark of its type as of November 27, 2017, and far exceeds the high-water mark in data volume by an order of magnitude. Our setup comprised 25 x1.32xlarge (2-TB) instances running the SAP HANA database, and demonstrates the unparalleled scalability and agility of the Amazon EC2 platform.

Here’s a screen illustration of HANA Studio showing the 25-node SAP HANA cluster.

Each X1 node offers:

  • 128 vCPUs powered by 4 x Intel Xeon E7-8880 v3 (Haswell) running at 2.3 GHz
  • 1,952 GiB of DRAM-based memory with Single Device Data Correction (SDDC+1)
  • 25 Gbps of network bandwidth
  • 14 Gbps of additional dedicated storage bandwidth to Amazon Elastic Block Store (Amazon EBS)
  • Support for Intel AES-NI features, Intel Transactional Synchronization Extensions (TSX), and Advanced Vector Extensions 2 (Intel AVX2)

Moving at the speed of AWS

Now imagine for a moment that you’re using a traditional data center or hosting approach. How long would it take to plan a deployment of this size, including data center space, power requirements, network architecture, and more than 70 TB of storage, and then to wait for the delivery, racking and stacking, and provisioning of storage for your deployment? At a minimum, you’d be looking at weeks, or, more likely, months. In contrast, on AWS, the setup of the infrastructure supporting this massive SAP HANA deployment took us a single day, primarily automated by the AWS Quick Start for SAP HANA.

Cost-effective scalability

Customers often tell us how difficult budgeting and capital expenditure exercises are for on-premises and other co-location type deployment models. In a world where competition means that business must move fast, this is something they can’t afford, so they often either delay projects or overbuy capacity to last for the next 3-5 years. With the AWS Cloud, customers can start with what they need and scale at a moment’s notice to support changing demands. And even these extremely large SAP HANA deployments can be provisioned and paid for on demand, without long-term commitments, allowing customers to move faster than ever before. Review the Lockheed Martin Case Study for an example of a customer who is moving faster while controlling costs with SAP HANA on X1 instances. Additional case studies across a wide variety of SAP workload types are available on our SAP and AWS website.

Global availability

X1 instances are available worldwide in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), GovCloud (US), China (Beijing), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), and South America (São Paulo).

Getting started

You can deploy your own production-ready, single-node SAP HANA or scale-out SAP HANA solution on X1 and X1e using our recently updated AWS Quick Start Reference Deployment for SAP HANA with well-tested configurations. Be sure to also review our SAP HANA Operations Guide for other guidance and best practices when planning your SAP HANA implementation on AWS.

Last but not least, we are here to help. We have found that successfully deploying scale-out clusters of this size requires an in-depth understanding of your application’s data structure requirements and access patterns. This, in turn, drives the correct strategy for SAP HANA table partitioning and distribution. Contact us for planning assistance and architecture guidance.

Our partnership with SAP

AWS has a long-standing relationship with SAP; together, we are focused on optimizing business outcomes and reducing risk for our mutual customers as they move to the cloud. You can read a blog post by Daniel Schneiss, SVP, SAP HANA Platform & Databases, on the SAP website, where he outlines some of our most recent and future planned collaboration areas.

AWS re:Invent

For those of you heading to Las Vegas for our annual AWS re:Invent conference, we look forward to meeting with you soon. If you’re unable to make it in person, make sure to register for the AWS re:Invent Live Streams.

– Steve

 
* Benchmark Details:
SAP BW edition for SAP HANA Standard Application Benchmark Version 2 (Certificate 2017047)

Benchmark Phase 1:  
Number of initial records: 46,800,000,000
Total Runtime of Data Load/Transformation: 559,827 seconds
   
Benchmark Phase 2:  
Query Executions per Hour: 2,947
Records selected: 2,025,001,210,452
   
Benchmark Phase 3:  
Total Runtime of complex query phase: 382 seconds
   
Operating system: SuSE Linux Enterprise Server 12
Database: SAP HANA 1.0
Technology platform release: SAP NetWeaver 7.50

Configuration: 25 HANA servers (1 Master + 24 Workers) running on 25 Amazon EC2 x1.32xlarge instances (128 virtual CPUs, 1,952 GB main memory each) deployed in the AWS Cloud.

For more details, see: http://global.sap.com/campaigns/benchmark/index.epx and http://global.sap.com/campaigns/benchmark/appbm_cloud_awareness.epx.

Amazon Aurora Database Now Certified for SAP Hybris Commerce


Feed: AWS for SAP.
Author: Bill Timm.

Bill Timm is a partner solutions architect at Amazon Web Services (AWS).

We are happy to announce the certification of the Amazon Aurora database service for SAP Hybris Commerce on AWS.

What is Amazon Aurora?

Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Amazon Aurora provides up to five times better performance than MySQL with the security, availability, and reliability of a commercial database at one tenth the cost.
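As a hedged illustration (identifiers, credentials, and the instance class are placeholders, and a DB subnet group and security group are assumed to exist), an Aurora cluster and its first instance can be created with the AWS CLI as shown below; SAP Hybris Commerce then connects to the cluster endpoint through its MySQL driver.

# Placeholder names and credentials; replace with your own values.
aws rds create-db-cluster --db-cluster-identifier hybris-aurora \
  --engine aurora --master-username hybrisadmin --master-user-password '<password>' \
  --db-subnet-group-name hybris-db-subnets --vpc-security-group-ids sg-0123456789abcdef0
aws rds create-db-instance --db-instance-identifier hybris-aurora-node1 \
  --db-cluster-identifier hybris-aurora --engine aurora --db-instance-class db.r4.2xlarge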

Benefits of Amazon Aurora

  • Fully managed – Amazon Aurora is a fully managed database service. When you run your SAP Hybris Commerce system on Amazon Aurora, you no longer need to worry about database management tasks such as hardware provisioning, software patching, setup, configuration, monitoring, or backups. Amazon Aurora automatically and continuously monitors and backs up your database to Amazon Simple Storage Service (Amazon S3), enabling granular point-in-time recovery.
  • Highly available and durable – When you run your SAP Hybris Commerce system on Amazon Aurora, you can take advantage of Amazon Aurora’s high availability features. Amazon Aurora is designed to offer greater than 99.99% availability. Recovery from physical storage failures is transparent, and instance failover typically requires less than 30 seconds. Amazon Aurora’s storage is fault-tolerant and self-healing. Six copies of your data are replicated across three Availability Zones and continuously backed up to Amazon S3.
  • Highly scalable – You can scale your Amazon Aurora database from an instance with 2 vCPUs and 4 GiB of memory up to an instance with 32 vCPUs and 244 GiB of memory. You can also add up to 15 low-latency read replicas across three Availability Zones to further scale read capacity. Amazon Aurora automatically grows storage as needed, from 10 GiB up to 64 TiB.
  • High performance – Amazon Aurora provides five times the throughput of standard MySQL or twice the throughput of standard PostgreSQL running on the same hardware. This consistent performance is on par with commercial databases, at one-tenth the cost. On the largest Amazon Aurora instance, you can achieve up to 500,000 reads and 100,000 writes per second. You can further scale read operations using Read Replicas that have very low latency.
  • Highly secure – Amazon Aurora provides multiple levels of security for your database. These include network isolation using Amazon Virtual Private Cloud (Amazon VPC), encryption at rest using keys you create and control through AWS Key Management Service (AWS KMS), and encryption of data in transit using Secure Sockets Layer (SSL). On an encrypted Amazon Aurora instance, data in the underlying storage is encrypted, as are the automated backups, snapshots, and replicas in the same cluster.

The certification of the Amazon Aurora database extends the existing benefits of AWS for SAP Hybris Commerce. These benefits include:

  • Speed – With AWS you can provision all the required infrastructure for a complete, production-ready SAP Hybris Commerce environment in hours versus weeks or months.
  • Scalability – Amazon Elastic Compute Cloud (Amazon EC2) enables you to increase or decrease capacity within minutes, not hours or days. Since you can access and manage all AWS services through web service APIs, your SAP Hybris Commerce environment can automatically scale itself up and down depending on your needs.
  • Increased availability – On AWS, multiple Availability Zones offer you the ability to operate your SAP Hybris Commerce environment in a more highly available, fault-tolerant, and scalable architecture than would be possible from a single data center.
  • Reduced cost – Without required minimum commitments or long-term contracts you can achieve a lower variable cost than you can get on your own. Our economies of scale translate into lower pay-as-you-go prices for our customers.
  • Improved customer experience – You can use the Amazon CloudFront content delivery network (CDN) to deliver your entire website, including dynamic, static, and streaming content, using a global network of edge locations. Requests for your content are automatically routed to the nearest edge location, so your customers are never delayed by high latency.
  • Reduced administration – Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of EC2 instances.

SAP Hybris Commerce architecture on AWS

Here’s an architectural diagram of a reference SAP Hybris Commerce environment on AWS.

I’d like to highlight a few key architectural points:

  • All tiers of the SAP Hybris Commerce stack are deployed in two Availability Zones, providing a higher level of availability than would be possible from a single data center.
  • The SAP Hybris Commerce application tier uses Auto Scaling to monitor the health of application servers and to dynamically scale application servers based on customer traffic.
  • The Amazon Aurora database is replicated to a second Availability Zone. If the master database system fails, Amazon Aurora uses Amazon Relational Database Service (Amazon RDS) Multi-AZ technology to automate failover to the Aurora Replica in the second Availability Zone.
  • The Amazon CloudFront global content delivery network service provides secure data delivery to customers with low latency and high transfer speeds.
  • Elastic Load Balancing automatically distributes incoming traffic across multiple EC2 instances. It enables you to achieve fault tolerance in your applications, and seamlessly provides the required amount of load balancing capacity needed to route application traffic.

Customers who are running SAP Hybris Commerce on AWS

Here are case studies of just a few of the many customers who are benefiting from running their SAP Hybris Commerce environments on the AWS Cloud.

  • Rent-A-Center (RAC) has undergone a digital transformation during the past few years. As part of that transformation, the company wanted to give its customers the ability to rent items online. To support the new site, RAC decided to use SAP Hybris as its e-commerce platform. After evaluating its options, RAC chose the AWS Cloud for its SAP Hybris environment. The AWS Cloud provides RAC elasticity to support spikes in user traffic, higher availability using Amazon RDS, and the ability to meet its PCI-compliance requirements. Read the full story.
  • Travis Perkins plc is a leading United Kingdom builders’ merchant and home-improvement retailer made up of 21 businesses, including Wickes, Tile Giant, and Benchmarx. By migrating their SAP Hybris Commerce environment to the AWS Cloud, they have experienced increased performance, reduced time to provision infrastructure, and reduced cost compared to their on-premises infrastructure. Read the full story.
  • GE Oil & Gas has migrated hundreds of applications to the cloud, including SAP Hybris Commerce, as part of a major digital transformation. The GE Oil & Gas cloud migration project has helped the General Electric division achieve a 52 percent decrease in IT costs, greater speed to market, and the agility to compete even better in an industry experiencing immense market challenges. Watch the video.
  • DoHome operates as a retail and wholesale store, carrying a wide variety of construction materials, home improvement, and home decoration products across Thailand. The company first deployed its SAP Hybris Commerce system on AWS, enjoying a secure, scalable environment and 24/7 server uptime, before deciding to move its SAP S/4HANA system to AWS as well. Utilizing services such as Amazon S3, Amazon EC2, Amazon RDS, Amazon VPC, and AWS best practices has helped reduce DoHome’s go-to-market time and improved availability to better serve their customers. Watch the video.

Partner services for SAP Hybris Commerce on AWS

The AWS Partner Network (APN) includes experienced AWS / SAP Hybris Commerce professional services firms that can help you design, architect, build, migrate, and manage your SAP Hybris Commerce environment on AWS. If you are interested in engaging an APN SAP Hybris Commerce partner, contact us.

SAP HANA Dynamic Tiering – Now Validated and Supported on the AWS Cloud


Feed: AWS for SAP.
Author: Somckit Khemmanivanh.

This post is by Somckit Khemmanivanh and Rahul Kabra, SAP Solutions Architects at Amazon Web Services (AWS).

We have been working closely with our partner SAP to validate SAP HANA dynamic tiering on the AWS Cloud. We’re excited to announce that AWS is now a supported platform for SAP HANA dynamic tiering—see the announcement by Courtney Claussen, SAP HANA Dynamic Tiering Product Manager. Here’s more information about dynamic tiering and its benefits on AWS.

SAP HANA dynamic tiering allows your SAP HANA database to store warm data on a separate dedicated host that houses the dynamic tiering service (an esserver process). The dynamic tiering service gives you the ability to create multistore and extended store tables to store and process your warm data.
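As a minimal sketch (host, credentials, schema, and table names are hypothetical), a warm-data table can be created as an extended store table through hdbsql once the dynamic tiering host is in place:

# Hypothetical connection details and table definition; USING EXTENDED STORAGE places the
# table on the dynamic tiering (esserver) host instead of in SAP HANA memory.
hdbsql -n hana-host:30015 -u SYSTEM -p '<password>' \
  'CREATE TABLE "SALES"."BILLING_HISTORY" (DOC_ID INTEGER PRIMARY KEY, POSTED DATE, AMOUNT DECIMAL(15,2)) USING EXTENDED STORAGE'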

Dynamic tiering provides three key benefits:

  • It allows you to offload older, less frequently accessed data to an integrated disk tier.
  • It lets you access the data in the disk tier with excellent performance.
  • It lowers the total cost of ownership (TCO) of your SAP HANA system significantly.

Courtney Claussen recently published a blog post about a very compelling use case for dynamic tiering, in which five million records were processed as part of an SAP utilities customer’s billing run. These records were processed in 53 minutes when running entirely in SAP HANA. After this baseline was established, 25% of the five million records were moved to dynamic tiering’s multistore table, so that 75% of the table’s rows were in SAP HANA and the other 25% in dynamic tiering. The billing run was then repeated, and the total runtime was 58 minutes, that is, only a 10% increase in runtime, which meant almost equal performance at a much lower cost footprint.

These impressive results and feedback from our customers motivated us to work closely with SAP to validate SAP HANA dynamic tiering on the AWS Cloud, so you can now benefit from the performance, cost savings, and future innovations for this service.

On AWS, you can start small with your certified SAP HANA systems (with systems ranging from 244 GiB of RAM up to 4 TiB of RAM) and scale up as your requirements change. Similarly, with dynamic tiering, you can choose to start small and add capacity as your needs, workloads, users, and traffic patterns change. For example, if you have a 4-TiB SAP HANA system, you could choose to start with a small dynamic tiering system such as our validated 488-GiB system (which is equivalent to 512 GB of RAM). Should you find that you need more CPU, memory, disk space, or IOPS, you can scale your dynamic tiering system up to a medium and eventually to a large configuration. The following diagram shows the 4-TiB SAP HANA system with a small dynamic tiering configuration.

SAP HANA with dynamic tiering on AWS

Here’s a table that shows the options for scaling your SAP HANA dynamic tiering systems.


Having this flexibility to scale enables you to optimize performance, lower your costs, and extend your SAP HANA systems as much as possible.

For details on dynamic tiering support on AWS, including prerequisites, supported versions, and other specifics, see SAP Note #2555629 (requires SAP Service Marketplace login).

Feel free to contact us with your questions or suggestions. Thank you!

– Somckit and Rahul

Deploy SAP NetWeaver on the AWS Cloud with Quick Start


Feed: AWS for SAP.
Author: Somckit Khemmanivanh.

Somckit Khemmanivanh is an SAP Solutions Architect at Amazon Web Services (AWS).

Do you currently use the AWS SAP HANA Quick Start to automatically provision and install certified SAP HANA systems—ranging from 244 GiB to 4 TiB RAM scale-up, or up to 50 TiB RAM scale-out—on the AWS Cloud? Maybe you use the SAP HANA Quick Start as part of your migration strategy with the FAST migration program? If you’re involved in these or similar scenarios, you need to provision and install one or more SAP application servers for your SAP HANA system. A short time ago, you had to create your own AWS CloudFormation templates or develop custom scripts to automatically provision your Amazon Elastic Compute Cloud (Amazon EC2) instances, and then you could install your SAP system. The SAP NetWeaver Quick Start removes this heavy lifting and manual work. It performs all these tasks for you so you can focus on other business-critical activities.

SAP NetWeaver is a foundational component that provides a set of technologies for developing and running SAP applications. SAP products and applications such as SAP Business Suite, S/4HANA, SAP Business Warehouse (SAP BW), and SAP BW/4HANA rely on SAP NetWeaver. Quick Starts are automated reference deployments that use AWS CloudFormation templates to deploy key technologies on AWS, following AWS best practices. This Quick Start deploys SAP NetWeaver Application Server (AS) for Advanced Business Application Programming (ABAP), which supports the development of ABAP-based applications for SAP HANA databases. It’s integrated with the SAP HANA Quick Start, which you can still deploy separately.

This Quick Start deploys SAP application servers into your AWS Cloud environment, and connects and integrates these servers with your SAP HANA system. The result is a fully provisioned and automatically installed SAP system running on SAP HANA.

Here’s an architectural overview of what the SAP NetWeaver Quick Start deploys for you.

SAP NetWeaver architecture on AWS

The Quick Start deploys an SAP application tier, an SAP HANA database tier, and Remote Desktop Protocol (RDP) and bastion hosts within a virtual private cloud (VPC) in your AWS account. The deployment includes a Primary Application Server (PAS) instance that provides SAP system utilities, and optional Additional Application Server (AAS) instances to scale out the SAP application tier.

If you want to set up your system differently, you can download the AWS CloudFormation templates and scripts from the GitHub repository and customize them to meet your specific requirements.
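
For example (a hypothetical sketch, not part of the Quick Start documentation), you could launch a customized copy of a downloaded template with the AWS CLI; the stack name, template file, and parameter keys below are placeholders and must match whatever parameters your customized template actually defines:

# Hypothetical example: deploy a customized copy of the Quick Start template.
aws cloudformation create-stack \
  --stack-name sap-netweaver-dev \
  --template-body file://my-customized-netweaver-template.yaml \
  --parameters ParameterKey=VPCID,ParameterValue=vpc-0123456789abcdef0 \
               ParameterKey=KeyPairName,ParameterValue=my-key-pair \
  --capabilities CAPABILITY_IAM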

To get started with SAP NetWeaver on AWS, use these resources:

We plan to enhance this Quick Start, so your feedback is important to us! Feel free to contact us with your suggestions, and stay tuned for more innovations in 2018.

– Somckit


Migrating SAP Workloads to the AWS Cloud with AWS SMS


Feed: AWS for SAP.
Author: Somckit Khemmanivanh.

This post is by Harpreet Singh and Devendra Singh, Solutions Architects at Amazon Web Services (AWS).

AWS Server Migration Service (AWS SMS) is an agentless service that migrates your on-premises VMware vSphere or Microsoft Hyper-V virtual machines to the AWS Cloud. In this blog post, I’ll discuss some of the main benefits of AWS SMS and explain how you can use this service to migrate your virtualized, on-premises (or private cloud) SAP workloads to Amazon Elastic Compute Cloud (Amazon EC2) instances on the AWS Cloud.

Here are some of the key benefits of using AWS SMS:

  • Simplified migration: After you configure the source environment, you can migrate your virtual machines easily by scheduling replication jobs in the AWS Management Console. Replication through to Amazon Machine Image (AMI) creation is a four-stage process that is handled automatically each time the replication job runs.
  • Incremental migration: AWS SMS can replicate a live environment incrementally, which can speed up the migration process significantly. You can continue to run your production environment while it’s being replicated to the AWS Cloud.
  • Minimized downtime: There is no impact on production operations during incremental replication. However, final replication (cutover) does require downtime.
  • Parallel migration: With AWS SMS, you can migrate multiple virtual machines in parallel. With this capability, you can migrate your complete landscape (for example, migrate all your development systems at one time, and then quality assurance systems, and so on).

AWS SMS is free to use. However, during replication, it creates Amazon Elastic Block Store (Amazon EBS) snapshots and uses Amazon Simple Storage Service (Amazon S3) to store those snapshots, and there’s a cost associated with those resources. For pricing information, see the AWS website.

In this blog post, we’ll describe the general replication process with AWS SMS, and then we’ll discuss how you can use AWS SMS to migrate your SAP workloads.

Replication process

To set up your on-premises virtualized environment and AWS account for AWS SMS, see the detailed instructions for VMware and Hyper-V on the AWS website, or read the blog post AWS Server Migration Service – Server Migration to the Cloud Made Easy on the AWS Partner Network (APN) blog. As part of the setup, you deploy the AWS Server Migration Connector in your virtualized environment. When the setup is complete, you configure the replication job by setting its schedule and frequency. After you set up the job, the replication of your virtual machine with AWS SMS starts automatically and follows a four-step process. These four steps—scheduled, uploading, converting, and AMI creation—are executed sequentially for each replication job run.
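
If you prefer to script this step rather than use the console, here is a hedged AWS CLI sketch; the server ID (as reported by the connector), IAM role name, start time, and description are placeholders:

# Hypothetical example: start an incremental replication job for one discovered server.
aws sms create-replication-job \
  --server-id s-1234567890abcdef0 \
  --seed-replication-time "2018-06-01T00:00:00Z" \
  --frequency 24 \
  --role-name sms \
  --description "SAP PAS lift-and-shift"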


Figure 1: Stages in the AWS SMS replication process

Scheduled

In this step, the replication jobs you configured are scheduled to run either at a specific time or immediately.


Figure 2: Replication job in Scheduled status

Uploading

This is a multi-step process:

  1. The VMware or Hyper-V snapshot of the virtual machine is triggered. The snapshot creates a VMDK file (for VMware) or an AVHD file (for Hyper-V).
  2. The Open Virtualization Format (OVF) file is created for the virtual machine. This is an XML file that contains metadata about the virtual machine.
  3. The VMDK or AVHD file created by the snapshot is uploaded to an S3 bucket. The S3 bucket is created automatically in the AWS Region where you’ve set up the AWS Server Migration Connector.
  4. After the snapshot files are uploaded to S3, they are deleted from the source environment.

Figure 3: S3 bucket with uploaded VMDK file

Converting

This step handles two tasks:

  1. AWS SMS creates an EBS snapshot from the uploaded VMDK or AVHD file.
  2. AWS SMS deletes the VMDK or AVHD file from the S3 bucket.

Creating AMI

This step creates an Amazon Machine Image (AMI) from the EBS snapshot produced during the Converting step. After this step is complete, you can launch an Amazon EC2 instance from the created AMI.

The replication job continues to run at the scheduled frequency, with each execution repeating these steps. Each execution of the replication job brings only incremental changes to the AWS Cloud. When the replication is complete and the servers are ready to go live, you stop the production servers (on premises) to prevent further changes and execute the job one last time to bring in the delta from the last execution. After the final changes have been replicated to the AWS Cloud, you can create an EC2 instance from the AMI.
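
As a rough sketch of that last step (all IDs and names below are placeholders), launching the cutover instance from the final AMI looks like this with the AWS CLI:

# Hypothetical example: launch the cutover EC2 instance from the AMI created by AWS SMS.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type r4.8xlarge \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --key-name my-key-pair \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=sap-prd-app01}]'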

SAP workload migration with AWS SMS

Now that we’ve discussed the replication process, let’s talk about how you can use AWS SMS to migrate your virtualized SAP environment to the AWS Cloud.

There are two migration options: lift-and-shift, or migration to SAP HANA.

Lift-and-shift migration

In this scenario, you can migrate your virtualized SAP environment running on Windows, Red Hat Linux, SUSE Linux, or Oracle Linux to the AWS Cloud as is, without any changes in the operating system or database. The process consists of these steps:

  1. Schedule the replication job at regular intervals for the virtual machines containing the database and non-database applications (ASCS/SCS, PAS, and AAS), or non-SAP NetWeaver-based applications (like BusinessObjects BI). We recommend intervals of 12 hours for the database and 24 to 48 hours for non-database virtual machines.
  2. Complete the first execution of the replication job. You should schedule this job in advance because it’s an initial and full replication, and will take some time to complete. The execution time will depend on the size of your virtual machines.
  3. Monitor the replication job for successful completion of incremental runs (we recommend at least two runs) and take note of the time it takes to complete each successive replication job. This will give you an estimate of the downtime required for final cutover.

    We recommend completing at least two incremental runs because it takes much less time to complete subsequent jobs after the initial, full replication, since subsequent runs involve only delta changes. For example, in the replication shown in the following illustration, full replication took around 8 hours, and then delta runs completed in around 1.5 hours.


    Figure 4: Reduced execution time after initial replication

  4. Plan for the final cutover. For the cutover, you’ll stop production operations on premises (for example, you’ll stop your SAP applications) and you’ll execute the replication job one last time to migrate delta changes to the AWS Cloud. We also recommend staging a mock cutover before the final cutover.
  5. Build an EC2 instance from the AMI created by the last replication job.
  6. Complete post-migration steps such as updating the DNS (or hosts file), validation, and integration.
  7. Go live.

Figure 5 illustrates the replication process.


Figure 5: Steps to replicate SAP workloads (as is) to the AWS Cloud

Migration to SAP HANA

If you aren’t running SAP HANA on premises and would like to migrate to the AWS Cloud with SAP HANA, you can reduce your downtime significantly with AWS SMS by following this two-step approach:

  1. Migrate your virtual machines running on Windows, Red Hat Linux, SUSE Linux, or Oracle Linux to AWS as is, by following the process outlined for lift-and-shift migration.
  2. Migrate to SAP HANA on AWS. If you are already running your SAP applications on AWS, your migration to SAP HANA will be significantly faster, even for large databases, because both your source and target SAP systems will be on AWS.
  • You are no longer constrained by the availability of resources to optimize the export and import process.
  • You can use the SAP database migration option (DMO) to perform Unicode conversion, upgrade, and migration in a single step. For details, see the DMO article published previously on this blog.

In this blog post, I’ve discussed how you can use AWS SMS to migrate your SAP workloads to the AWS Cloud easily, and reduce the downtime required for migration.

You can use AWS promotional credits to migrate your SAP systems to AWS. Contact us to find out how and to apply for credits.

Deploy APIs for SAP Using Amazon API Gateway


Feed: AWS for SAP.
Author: Somckit Khemmanivanh.

This post is by KK Ramamoorthy, SAP Digital Consultant at Amazon Web Services (AWS).

Your customers, partners, and employees want a seamless, secure user experience across various channels. For example, a customer who places an order using a voice-enabled device like Amazon Alexa should have the same experience on a mobile device. Or a field technician who is accessing training manuals using a mobile app should also be able to access and interact with these manuals on an augmented reality app.

Application programming interfaces (APIs) play a crucial role in building such a unified user experience. With APIs and an API management platform, you can expose easy-to-consume domain-driven services in an agile, flexible, secure, and scalable way.

API management platform

An API management platform provides the following key capabilities:

  • Performance at any scale
  • Security and flexibility
  • Ability to throttle traffic
  • Support for global deployments and edge caching
  • Lifecycle management and versioning
  • Support for canary deployments
  • API key management
  • Monitoring of API activity
  • SDK generation for multiple coding languages
  • Cataloging and documentation of APIs

Amazon API Gateway is a serverless API management platform that is fully managed, performs at any scale, and provides all of these capabilities. API Gateway can easily connect to HTTP(S) endpoints or invoke AWS Lambda functions for performing custom business logic. You also have the flexibility to cache data within API Gateway without having to hit your backend systems for every service call. These are just a few of the capabilities of API Gateway. For more information, see API Gateway on the AWS website.

API Gateway and SAP

How can SAP customers benefit from API Gateway? SAP provides SAP Gateway to easily expose REST-based services using Open Data Protocol (OData). You can quickly stand up an SAP Gateway hub system in a private subnet within your virtual private cloud (VPC) and then securely expose it to API Gateway through a Network Load Balancer. After your API resources are exposed through API Gateway, you can further fine-tune them depending on your specific business needs. For example, you can choose to enrich the responses for certain services whereas for others you might want to just proxy through or cache locally.
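
For reference, the same plumbing can also be scripted. Here is a minimal AWS CLI sketch (the load balancer name, subnet, and target ARN are placeholders); the setup steps later in this post use the console instead:

# Hypothetical example: internal Network Load Balancer in front of the SAP Gateway instance(s)
aws elbv2 create-load-balancer \
  --name sap-gateway-nlb --type network --scheme internal \
  --subnets subnet-0123456789abcdef0

# Hypothetical example: VPC link so that API Gateway can reach the NLB privately
aws apigateway create-vpc-link \
  --name sap-gateway-vpc-link \
  --target-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/sap-gateway-nlb/0123456789abcdef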

For web and mobile apps, you can add AWS AppSync along with API Gateway. AWS AppSync is a fully managed service that enables data-driven app development. It also supports offline and conflict resolution out of the box. Find more information on AWS AppSync.

Reference architecture

This sample reference architecture illustrates how all these components tie together.

API Gateway and SAP architecture

  • A private subnet in the VPC contains your SAP applications, including SAP Gateway.
  • A Network Load Balancer, placed in the private subnet, will have access to the HTTP(S) ports of the SAP Gateway system and will proxy any requests that are intended for it. For simplicity, the SAP Gateway system is shown as a single Amazon Elastic Compute Cloud (Amazon EC2) instance in this architecture; in practice, you would implement multiple application servers and web dispatchers for effective load balancing.
  • A VPC link securely connects API Gateway with the Network Load Balancer. This enables you to expose your SAP services securely through API Gateway without having to expose the SAP systems to the external network. If required, you can further secure your services with a client certificate that is issued by API Gateway and trusted by the SAP system. This adds an additional layer of security by making sure that only API Gateway can access the SAP Gateway services.
  • For complex business logic, you can use API Gateway to trigger Lambda functions that are deployed in your VPC.
  • After the APIs are exposed, you can use AWS AppSync to further abstract your APIs for data-driven mobile and web app development.
  • Other AWS services like Amazon Lex and AWS IoT can also integrate with API Gateway to consume the exposed services.
  • Amazon Cognito ties all these services securely by managing user identities (both user pools and federated identities) to persist the context of the logged-in user across all AWS services and SAP backend.

Setting it up

Now, let’s go ahead and implement this architecture. We will expose a sample service provided by SAP through API Gateway. See the SAP documentation on the sample service. This service exposes various business objects like Business Partners, Contacts, Orders, and Products as OData services.

  1. Install an SAP NetWeaver Gateway Advanced Business Application Programming (ABAP) system in a private subnet. Developer editions are available from SAP: SAP NetWeaver AS ABAP 7.51 SP02 or SAP NetWeaver AS ABAP 7.51 SP02 on HANA (Cloud Appliance Library edition).

  2. After the SAP system is installed and configured, open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. In the navigation pane, under Load Balancing, choose Load Balancers, Create Load Balancer, and create an internal Network Load Balancer with the SAP NetWeaver ABAP system as the target. Note the DNS name of the Network Load Balancer; we will need this for setting up API Gateway later.

    Creating a load balancer

  3. From the API Gateway console at https://console.aws.amazon.com/apigateway/, create a VPC link. This will provide a secure access through API Gateway to the SAP system in the private subnet.

    Creating a VPC link

  4. API Gateway supports proxy integration as a pass-through. This comes in handy when you want API Gateway to simply pass the request and response between the client and the server (in this case, SAP). We will create a proxy resource with the path /sap/opu/odata/IWBEP/GWSAMPLE_BASIC/{proxy}.

    Creating a child resource

  5. We will add an ANY operation under the {proxy} resource to resolve various HTTP methods (for example, GET, PUT, POST) and proxy it to the SAP Gateway system through the VPC link. Remember the DNS name that you noted in step 2? You will use that DNS name as the endpoint URL here.

    Adding ANY operation under proxy

  6. Now, you might want to cache some resources at the API Gateway layer so that you don’t have to do the round trip to the backend SAP Gateway system. Caching improves performance of the APIs. In our example, let’s cache the product data because it is master data and doesn’t change often in the SAP system. To cache the data, go to the API Gateway console, choose the /GWSAMPLE_BASIC resource, create a child resource for ProductSet, and add the GET operation to it.

    Caching resources

  7. It’s time to deploy the API. Let’s deploy it to the stage called dev.

    Deploying the API

  8. In the API Gateway console, navigate to the dev Stage Editor and check the Enable API cache box under Cache Settings. Set Cache capacity at 0.5 GB and Cache time-to-live (TTL) at 3600 seconds. It will take 4-5 minutes to build the cache.

    Configuring cache settings

  9. You want to cache only the ProductSet resource. To avoid caching the {proxy+} resources, choose the GET operation. Choose Override for this method and clear Enable Method Cache. Do this for all the other methods as well.

    Caching only the ProductSet resource

  10. Test the APIs using a tool like Postman. You will notice that after the first call to the ProductSet API, subsequent calls will be retrieved from the cache. You can validate this in two ways:

    • Check the Amazon CloudWatch Logs for CacheHitCount and CacheMissCount metrics.
    • Stop the backend SAP Gateway system and then call the APIs.

    The ProductSet API should still work, but data will be fetched from the cache instead of a roundtrip to the backend SAP system.

    Testing the APIs

    Note: In this test, we used Basic authentication (in the Authorization header field), which is OK for testing purposes. However, for production scenarios, you will use OAuth 2.0 flows for authentication. SAP ABAP-based applications support two types of OAuth 2.0 flows:

    • Authorization code flow for OAuth 2.0 – This is a user-initiated flow and best suited when a user is available to provide login credentials. Examples include web or mobile applications where a user is available to initiate login.

    • SAML 2.0 Bearer Assertion Flow for OAuth 2.0 – This is a server-to-server communication flow where the user context of an already authenticated user in one server is used to log in to another server without user involvement. For example, API Gateway calls a Lambda function that can issue a SAML assertion for the AWS logged-in user by using an open source SAML SDK like OpenSAML2. Then, using the SAML assertion, you can get access tokens for the same user in SAP.

    There’s a lot to say about these two flow types, and we’ll cover them in more detail in a subsequent blog post.
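
    If you prefer the command line to Postman for the test in step 10, a minimal curl sketch looks like the following; the API ID, region, stage, and SAP credentials are placeholders, and Basic authentication is again only for testing:

    # Hypothetical example: the first call reaches SAP Gateway; repeated calls within
    # the cache TTL are served from the API Gateway cache.
    curl -s -u 'SAPUSER:password' \
      "https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/dev/sap/opu/odata/IWBEP/GWSAMPLE_BASIC/ProductSet?\$format=json"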

Next steps

We have only scratched the surface of the various capabilities here. What we shared in this blog should get you started quickly on integrating Amazon API Gateway with SAP. Here are some of the things you can do after you have integrated your SAP processes with AWS services using API Gateway:

  • Enrich your apps with chatbot capabilities using Amazon Lex.
  • Increase productivity with image recognition capabilities using Amazon Rekognition.
  • Empower your users with augmented reality apps using Amazon Sumerian.
  • Build data-driven apps using AWS AppSync, which supports offline use cases out of the box.
  • Test, deploy, and maintain your mobile apps using AWS Mobile Hub.
  • And many other options…

The possibilities are endless, and most AWS services are only an API call away. API Gateway provides a fully managed, pay-as-you-go service that enables you to create and manage APIs easily at scale. We hope you found this article useful. Please don’t hesitate to contact us with your comments or questions.

Beyond Infrastructure: How to approach business transformation at startup speed


Feed: AWS for SAP.
Author: Steven Jones.

Steven Jones is a Technology Director and KK Ramamoorthy is an SAP Digital Consultant at Amazon Web Services (AWS).

Our customers are seeing big business benefits by moving their SAP workloads to AWS. For example, Visy, a leading packaging and recycling company in Australia, is seeing performance improvements of up to 46% and provisioning times for SAP deployments reduced to days versus weeks or months after moving their SAP systems to AWS. Many other customers that have also migrated their SAP workloads to AWS are seeing similar benefits (see the case studies).

While these are tangible and important business benefits, the move to AWS is often just the first step in our customers’ innovation journeys. After they build a foundation on AWS infrastructure, customers are working to transform their entire businesses, and save money, experiment, and go to market more quickly with applications that can extend or integrate with their SAP investments.

We see customers working to realize their own business transformations by focusing on four pillars: Big data & analytics, IoT, Apps & APIs, and DevOps. All of these focus areas are supported by a strong foundation of machine learning and compute services on AWS. Many of these solutions themselves can be built directly on AWS or by using SAP Cloud Platform (SCP), which is available in four AWS Regions around the globe.

Diving into details, here are the opportunities that we see for you in realizing your business’s transformation journey using the four pillars, and we invite you to come see them in action at SAPPHIRE NOW.

Big data & analytics

Data is produced everywhere. A Veritas report, however, found that 52% of data remains as dark data. That is, only 48% of produced data is collected and classified, while only 15% of that is analyzed to gain insights and drive actions. Imagine the potential if you could ingest all this data into data lakes, and subsequently cleanse and analyze it with powerful analytical tools.

A survey by Aberdeen in 2017 found companies that implement a data lake outperform similar companies by 9% in organic revenue growth. You can build your data lakes on AWS using these services:

See the customer stories on big data & analytics with AWS.

Internet of Things (IoT)

If you knew the state of everything and could reason on top of that data, what problems would you be able to solve? This is the question many of our customers are asking when it comes to business processes powered by IoT. AWS IoT Core provides the underlying technology for you to achieve the business outcomes you seek from knowing the state of everything.

With AWS IoT, you can deploy, secure, and manage your devices at scale, perform computing and machine learning inference at the edge using AWS Greengrass and AWS Lambda, and run sophisticated analytics on massive volumes of IoT data using AWS IoT Analytics. Check out some really interesting customer stories on IoT with AWS.

Apps & APIs

Apps power the enterprise. Customers and employees increasingly ask for simpler ways to transact business processes. They want the flexibility to access data from a variety of devices. In fact, Gartner predicts that by the year 2020, 30% of web browsing sessions will be done without a screen. “Voice first” apps are becoming a reality, and your users are expecting the same with enterprise applications.

Moreover, we are hearing from our customers that they don’t want to customize their core business applications due to increased complexity and maintenance costs. Rather, they are looking at extending SAP functionality using cloud applications by consuming core SAP functionality as APIs and building custom extensions as microservices that scale as demand grows.

With Amazon Elastic Container Service (Amazon ECS), you can deploy and orchestrate your Docker container apps, securely access your SAP services using Amazon API Gateway and Lambda, build data-driven web and mobile apps using AWS AppSync and AWS Mobile Hub, and federate user identity across all services using Amazon Cognito. See our customer stories on Apps & APIs with AWS.

DevOps

DevOps is the combination of processes and tools to increase an enterprise’s agility in building and deploying technology solutions at a rapid pace. Automation is one of the core components of DevOps. Treating infrastructure as code provides a unique opportunity for customers to automate many pieces of SAP operations within AWS. You can increase your business’s agility with the following services:

Explore our customer stories on DevOps with AWS.

Building on a strong foundation

To support your innovation ambitions, AWS solutions are built on a solid foundation of compute, storage, and network capabilities with machine learning connecting all facets of the solution.

With Amazon SageMaker you can build, train, and deploy your own machine learning models quickly without having to worry about the infrastructure. For a vast majority of your machine learning use cases, you can use out-of-the-box solutions like Amazon Rekognition, Amazon Lex, Amazon Comprehend, Amazon Translate, Amazon Transcribe, Amazon Polly, and AWS DeepLens that are only an API call away.

See it at SAPPHIRE NOW

Not sure how to use these services and fine-tune them to suit your specific business needs? Fear not! We have been busy building sample applications and proof of concepts in our internal labs to showcase these capabilities, and we will happily share them with you at SAPPHIRE NOW.

We have a whole set of demos lined up for you, including:

  • Customer recognition using AWS DeepLens – Recognize a customer using facial recognition and pull up the SAP customer record automatically for the customer service agent to provide a personalized experience for your customers.
  • Product recognition using AWS DeepLens – Build a custom machine learning model using Amazon SageMaker to recognize a product, match it with the SAP product catalog, and automatically pull up the product information for placing an order.
  • API enablement for SAP – End-to-end mobile application built using AWS AppSync that integrates with a backend SAP system using API Gateway and enriches the user experience using image recognition with Amazon Rekognition, chatbot functionality with Amazon Lex, and 3D product catalog access using Amazon Sumerian.
  • SAP serverless refresh – SAP system refreshes are considered to be the most time-consuming and error-prone task in any SAP landscape, but with AWS serverless services like AWS Step Functions and Lambda, you can fully automate and bring consistency to your SAP system refreshes. Visit our booth to learn more about the solution.
  • AWS SMS for SAP migrations – Migrating your virtualized SAP applications is much easier than you think. In fact, if you know how to schedule a job, you can easily migrate your virtualized SAP applications to AWS. AWS Server Migration Service (AWS SMS) also gives you the capability to migrate your SAP applications incrementally, which significantly reduces downtime.
  • Serverless SAP data lake – Execute queries from SAP HANA to an Amazon S3-based data lake in a serverless manner with Athena. You can use this feature to build analytics on SAP HANA and combine data from the data lake without inserting data to SAP HANA.
  • SAP HANA high availability setup – An SAP HANA high availability (HA) setup based on SLES for SAP requires manual execution of many steps and is complex to configure. In an on-premises scenario, this can easily take a couple of days. We will show you how you can build a two-node HA cluster for SAP HANA on AWS in under 40 minutes.
  • And many more…

For example, here is a diagram of a sample AWS DeepLens use case with SAP:

AWS provides a broad and feature-rich set of services that let you give your customers the best possible experience. These services help you explore new business opportunities, provide scaled customer experiences, and save money, not just with SAP, but also with the applications that surround it, all core to your business.

Visit our Build On bar in the AWS booth at SAPPHIRE NOW to experience these feature-rich demos or leverage 1:1 time and an SAP on AWS expert to innovate your own custom solution. For more information on where to find us, see the Amazon Web Services at SAPPHIRE NOW website. Not attending SAPPHIRE NOW? Feel free to contact us directly for more information. Hope to see you soon and Build On!

Take advantage of a two-way door to transform mission-critical SAP systems on AWS


Feed: AWS for SAP.
Author: Somckit Khemmanivanh.

This blog post was written by Bas Kamphuis, GM, Strategic ISVs at AWS

Nobody likes going through a one-way door.

After a one-way door shuts, there’s no easy way to get back to where you started. Your options are limited, and changing course requires you to invest even more time and resources into the journey you’ve unwittingly begun.

You might wish you never opened the door in the first place.

For many SAP customers, deciding how to deploy and run complex and mission-critical SAP environments is akin to walking through a one-way door. SAP is a critical tool for many enterprise operations, yet successfully deploying SAP has traditionally required significant capital investment, complex integrated system architecture design, customized solutions tailored to the stringent requirements of the enterprise, and a rigid IT backbone to ensure resiliency and dependability.

According to a recent research study by Resulting IT, SAP program implementation timelines sometimes stretch beyond intended delivery dates, costs exceed planned budget, and program results can miss expectations. When measuring success metrics such as project on-plan rates, on-budget rates, and on-value rates, Resulting IT found that:

  • Only 36% of the survey respondents felt that their SAP program kept to the original delivery plan
  • Fewer than 30% felt that their SAP program delivered to the agreed budget
  • Fewer than 48% felt that their SAP program achieved its business objectives

Why should you walk through a one-way door when you can innovate and transform your SAP programs without barriers by deploying on AWS?

Embrace new SAP innovations by using a two-way door on AWS

Where a one-way door presents challenges, a two-way door presents opportunities.

You can walk right back through the door to begin again with a new approach. This type of flexibility, ease of use, and empowerment is what AWS strives to provide enterprise clients who seek to take advantage of innovative new SAP solutions such as S/4HANA but who struggle with knowing where to begin.

The security, agility, and speed characteristics of AWS enable SAP customers to experiment confidently and efficiently with different configurations and approaches for SAP and tailor an SAP program to meet their specific needs without significant upfront cost. For example, AWS offers a wide range of EC2 instance types that are certified by SAP for production deployment on AWS and can meet the needs of extremely large SAP customer use cases. With AWS, you don’t need to worry about accurately estimating the size of the target system right off the bat. You can, for instance, begin by using a system with 122 GiB of memory and within minutes transition to a system with 4 TiB of memory should you realize you need more system memory. And this works both ways: Because of the on-demand consumption model, you aren’t penalized should you change course or decide you need less memory. And as Jeff Barr shared in his blog post yesterday, AWS customers will soon be able to scale to 12 TiB and beyond for their HANA scale-up deployments.

Many enterprise customers, including BP, Brooks Brothers, Coca-Cola İçecek (CCI), GE Oil & Gas, Kellogg’s, Liberty Mutual, Moderna Therapeutics, Seaco, and Travis Perkins plc, have migrated SAP environments to AWS to securely run SAP applications with flexibility and at massive scale. As you look to migrate from a non-HANA database to SAP HANA, AWS offers many self-service tools you can use through the SAP Rapid Migration Test Program (FAST) to help reduce your migration time down to days, with minimal infrastructure cost. We’ve already seen many large organizations in 2018 use FAST to test their SAP ERP or BW installation systems on HANA. None of these tests took more than a few days, and the infrastructure cost was less than $1,000. The FAST program combines SAP tools and documentation with AWS tools and on-demand infrastructure to reduce the effort, time, and cost needed to test migration to SAP HANA.

And if you’re seeking assistance as you test SAP HANA through FAST tools or when you complete a large-scale SAP migration to AWS, you can turn to the global AWS & SAP Partner ecosystem. These AWS SAP Competency Partners have deep expertise and proven customer success helping companies migrate and deploy SAP workloads on AWS. For example, Seaco approached AWS SAP Competency Partner Lemongrass Consulting for help migrating its IT landscape to a new environment on AWS. “For Seaco, we completely migrated its IT landscape to AWS, including its SAP systems. By migrating to AWS, the company has achieved a 50 percent cost savings on its IT costs, and has saved considerable time on particular tasks,” explains Eamonn O’Neill, director at Lemongrass. “Seaco was suffering from a billing run that was taking four days a month to execute. When we migrated the company’s IT infrastructure to AWS, we were able to cut that down to one day. The business has achieved direct benefits from that migration.”

Achieve the full potential of digital transformation through the stability and agility of AWS for SAP

Traditional, on-premises options for running SAP environments simply can’t keep up with the needs of the modern enterprise looking to drive new business outcomes. AWS and SAP, however, are committed to building interoperability between AWS services and SAP solutions so that you can develop additional value for your end users and discover new insights from your data.

For example, SAP Cloud Platform is an open platform-as-a-service that provides customers and partners with in-memory capabilities, core platform services, and unique business services for building and extending personalized, collaborative, mobile-enabled cloud applications. SAP Cloud Platform supports AWS and is Generally Available in three AWS Regions, with one additional AWS Region in Beta. Using SAP Cloud Platform on AWS, you can take advantage of the stability of the AWS platform and the availability of emerging technologies, such as microservices, Internet of Things (IoT), advanced analytics, and machine learning to deliver agile applications.

Walk through a two-way door on AWS: Get to know us at SAPPHIRE NOW

Virtually all SAP solutions are certified to run on AWS, and none of the choices you make in the SAP landscape is going to hold you back. By using AWS, you’re free to change your mind, experiment to learn what works best, and take advantage of new service integrations to drive more value to your SAP program and your business.

Join thousands of enterprises around the globe and take the next step in your SAP journey by migrating your SAP systems to AWS and using AWS to optimize your SAP environments.

This year, AWS is a Sapphire-level sponsor at SAPPHIRE NOW. We have a booth at the event (Booth #642) and will be hosting in-booth presentations with SAP and some of our top AWS SAP Partners, including Accenture, Deloitte, DXC, Capgemini, iTelligence, Lemongrass, Linke IT, Protera Technologies, SUSE, and Intel, along with many AWS-driven presentations. If you’re attending the event, we’d love to meet you, learn more about your business needs, and look at how AWS may be able to help.

Not at SAPPHIRE NOW? We’ve had an exciting week! See our key announcements below:

To read more about how you can migrate SAP workloads to AWS, download our brand-new eBook to get started.

Connect with APN Partners that offer SAP-related services and solutions


Feed: AWS for SAP.
Author: Bill Timm.

The AWS Partner Network (APN) is the global partner program for Amazon Web Services (AWS) and consists of tens of thousands of partners worldwide. APN Consulting Partners can help customers design, architect, build, migrate, and manage their workloads and applications on AWS. APN Technology Partners provide software solutions that are either hosted on, or integrated with, the AWS Cloud.

To make it easier for customers to find APN Partners that offer services and solutions specifically for SAP on AWS, we have created the AWS SAP Partner Services and Solutions Directory. You can find the following types of services and solutions for SAP on AWS in the directory:

  • Cloud assessment services – Advisory services to help you develop an efficient and effective plan for your cloud adoption journey. Typical services include financial/TCO (total cost of ownership), technical, security and compliance, and licensing.
  • Proof-of-concept services – Services to help you test SAP on AWS, including SAP ERP/ECC migration to HANA or S/4HANA, SAP BW migration to HANA or BW/4HANA, SAP OS/DB migration, and new SAP solution implementation.
  • Migration services – Services to migrate existing SAP environments or systems to AWS, including all-in SAP migrations (PRD/QAS/DEV), hybrid SAP migrations (QAS/DEV/TST), and single system migrations (BW).
  • Managed services – Managed services for SAP environments on AWS, including migration services, AWS account and resource administration, OS/DB administration/patching, backup and recovery, and SAP Basis/NetWeaver administration.
  • Packaged solutions – Bundled software and service offerings from SAP Partners that combine SAP software, licenses, implementation, and managed services on AWS, such as SAP S/4HANA, SAP BusinessObjects BI, and many others.
  • Partner solutions – APN Technology Partner solutions for SAP on AWS to support system migration, high availability, backup and recovery, data replication, automatic scaling, and disaster recovery.

You can find the AWS SAP Partner Services and Solutions Directory in the SAP section of the AWS website at this path:

https://aws.amazon.com/sap/ -> Partners -> Find a Partner

Screenshot of AWS SAP Partner Services and Solutions Directory

Searching the directory and connecting with APN Partners

To find an APN Partner service or solution for SAP on AWS:

  1. Open the SAP Partner Services and Solutions Directory.
  2. In the list in the search area, select the type of service or solution you are looking for (for example, cloud assessment or migration services).
  3. Select the relevant SAP solution within that type (for example, BW migration to HANA under migration services).
  4. Select the region you want to search.
  5. Choose Search. A list of partner services that match your search criteria will be displayed, as shown in this screen illustration:

    Screenshot showing how to narrow your search

  6. To learn more about a partner and their service offering, choose the Details >> link.
  7. To connect with the APN Partner, choose the Contact Partner button on the details page, and then complete and submit the form. The APN Partner will be notified and will respond to your inquiry.

Information for APN Partners

If you are an APN Partner and are interested in listing a service or solution offering in the directory, view requirements and instructions.

Smaller X1e instances for SAP HANA non-production workloads


Feed: AWS for SAP.
Author: Somckit Khemmanivanh.

This blog post was written by Wilson Karunakar Puvvula, SAP Solutions Architect, Amazon Web Services

A lot of customers running SAP HANA on Amazon Web Services (AWS) choose to run their development and QA/test workloads on smaller instances from the R4 family while they run their production on X1/X1e instances. In this blog we will discuss using smaller X1e instances for your development and QA/test environments. This will particularly help customers who are implementing SAP as a greenfield solution and those who have a smaller data footprint on HANA.

At AWS, we are always making an effort to help customers architect their solutions with a leaner total cost of ownership (TCO), and AWS offers many instance types that support HANA workloads. Although R4 instances provide a better vCPU-to-memory ratio, some non-production workloads might not need such high CPU or I/O capacity. For example, r4.8xlarge provides similar in-memory capacity to x1e.2xlarge, but it has four times as much vCPU capacity, which often goes underutilized.

The cost of non-production SAP systems can be a significant part of the overall TCO, because every SAP production system has a number of additional non-production systems. Therefore, to lower your TCO you could run your non-production environments on one of the smaller instances from the X1e family. This also aligns with the approach documented in SAP note 2271345 – Cost-Optimized SAP HANA Hardware for Non-Production Usage (SAP logon required).

Comparing X1e and R4 instances

Because the smaller X1e instances have lower Amazon EBS throughput, they generally take additional time when writing the backups to Amazon EBS and loading tables into memory during database startup. In a non-production environment, however, this is an acceptable tradeoff for a majority of customers who want to keep their costs low.

Let’s take a look at some of the X1e and R4 instances that provide similar memory but a different vCPU configuration:

EC2 instance    vCPU    Memory (GB)    SAPS*     Amazon EBS throughput (MiB/s)    Memory per vCPU
x1e.4xlarge     16      524            16,437    218.75                           32.57 GB
x1e.2xlarge     8       262            8,219     125                              32.57 GB
x1e.xlarge      4       131            4,109     62.5                             32.57 GB
r4.16xlarge     64      524            76,400    1,750                            7.5 GB
r4.8xlarge      32      262            38,200    875                              7.5 GB
r4.4xlarge      16      131            19,100    437.5                            7.5 GB

* SAP Application Performance Standard

We have tested these instances in both standard and distributed SAP installations, and have found the results reasonable. Our tests focused on the write performance of HANA backups and data load times into memory. We used a 1 TB Amazon EBS Throughput Optimized HDD volume (st1) attached to the instance for the backups. Backup write performance was 3-6 GB/minute on the smaller X1e instances versus 6-9 GB/minute on R4 instances. For data loads into memory, we observed 3-4 GB/minute on the smaller X1e instances versus 17-25 GB/minute on R4 instances. These tests were performed on all the instances listed in the table, and we recommend testing any other SAP operational activities that you see fit for your business.

Instance resizing: More capacity only when you need it

At AWS we understand the need for customers to scale on demand, allowing you to transition to a larger instance—say scaling up from an x1e.2xlarge instance to an r4.8xlarge instance.

Instance resizing is particularly helpful during maintenance activities such as SAP application upgrades, where you need to match your QA/test environment with production compute. Using instance resizing, you can scale up your QA/test environment for the duration of the upgrade and scale down after the activity has finished. Also note that according to SAP note 2271345 – Cost-Optimized SAP HANA Hardware for Non-Production Usage (SAP logon required), performance-related support will be provided only on production-grade hardware. You can quickly switch to a production-certified instance by using the instance resize feature.
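
A resize itself is only a few calls. Here is a hedged sketch with the AWS CLI (the instance ID is a placeholder, and the instance must be stopped before its type can be changed):

# Hypothetical example: scale up for the upgrade window, then reverse the steps to scale back down.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"r4.8xlarge\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0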

Take a look at the example timeline of an SAP upgrade event where an EC2 instance is scaled on-demand:

chart showing uptime and downtime during upgrade

The example assumes that the customer is running QA/test workloads on a Reserved Instance of x1e.2xlarge. During the downtime phase of the upgrade, the x1e.2xlarge Reserved Instance is scaled up to an r4.8xlarge On-Demand Instance for a duration of 8 hours. In this case, the customer pays for r4.8xlarge on demand for the duration of the downtime, that is, 8 hours, and then scales the QA/test workloads back down to x1e.2xlarge and reclaims the reserved capacity. Running an r4.8xlarge On-Demand Instance for a duration of 8 hours would cost $17 in the us-east-1 region.

Having this flexibility to scale enables you to optimize performance while lowering your costs. The smaller X1e instances are now available for deployment using the SAP HANA on AWS Quick Start.

For more information, see Changing the Instance Type in the Amazon EC2 documentation, and take a look at the list of instance types that are supported for SAP workloads and their Amazon EBS throughput limits.

Feel free to contact us with your questions or suggestions. Thank you!

— Wilson

Run federated queries to an AWS data lake with SAP HANA


Feed: AWS for SAP.
Author: Somckit Khemmanivanh.

This post is by Harpreet Singh, Solutions Architect at Amazon Web Services (AWS).

An Aberdeen survey revealed that organizations who implemented a data lake outperformed similar companies by 9% in organic revenue growth. A data lake gives these companies the capability to get meaningful insights from their data, which helps them to take actions that differentiate them from the competition.

With its durability and cost effectiveness, Amazon Simple Storage Service (Amazon S3) offers a compelling reason for customers to use it as storage layer for a data lake on AWS. Many of these customers are deploying their SAP HANA–based applications on AWS and want to have the option of building analytics with data from SAP HANA and an Amazon S3–based data lake, while still using SAP HANA as the primary source for analytics.

There could be many scenarios for federating queries from SAP HANA to a data lake on AWS. Here are a few specific examples:

  • Utilities industry: You can store consumption of electricity-relevant data in a data lake on AWS and federate queries from SAP HANA to predict future energy consumption.
  • Retail industry: You can store social media activity about your company in a data lake on AWS, match the activity with customer tickets in SAP CRM for analysis, and improve customer satisfaction. Another example for the retail industry is analyzing data from an e-commerce website and inventory/stock in the SAP system.
  • Pharma: You can perform recall analysis using archived inventory data from the data lake on AWS and current inventory data from the SAP system.

This blog provides steps for configuring SAP HANA to run federated queries to an Amazon S3–based data lake by using Amazon Athena.

Let’s look at the architecture first. Say you are using Amazon S3 as storage for a data lake that receives raw data from various data sources (for example, web applications, other databases, streaming data, other non-SAP systems, etc.) in an Amazon S3 bucket. Raw data is transformed via AWS Glue and is then stored in another Amazon S3 bucket in an Athena-supported format. AWS Glue crawlers catalog the transformed data. If you want to learn how to catalog data in AWS Glue, refer to this blog post.

diagram of data flow from s3 to s a p hana via athena

Figure 1: Data from multiple sources is stored in S3 and then returned, by using Athena, in federated queries from SAP HANA.

For this example, we will focus on federating queries from SAP HANA by using Athena. I have already crawled and cataloged a table containing open source e-commerce data. Here are the details:

  1. A CSV file, eCommerce-Data.csv, that contains sample sales records from an e-commerce site is available in the Transformed Data S3 bucket. This CSV contains sales records of various customers:
    data from c s v file
  2. AWS Glue crawls and catalogs the data that is in the Transformed Data S3 bucket and saves it in the ecommerce_data table in the database named ecommerce-database in AWS Glue.
    ecommerce data in a w s glue
  3. The database and table are now available in Athena, and we can execute SQL queries on this table by using Athena Query Editor.
    query run in athena editor
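
The query in step 3 can also be submitted programmatically through the Athena API. Here is a minimal AWS CLI sketch that reuses the database and table names from above; only the results bucket is a placeholder:

# Hypothetical example: run the same query through the Athena API instead of the Query Editor.
aws athena start-query-execution \
  --query-string "SELECT * FROM ecommerce_data LIMIT 10" \
  --query-execution-context Database=ecommerce-database \
  --result-configuration OutputLocation=s3://my-athena-query-results/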

Our objective is to federate queries from SAP HANA to this ecommerce_data table in ecommerce-database.

Now that we have set the context, let’s focus on the technical bits that are required for this setup.

  • SAP HANA Smart Data Access (SDA), a powerful feature that has been available since HANA 1.0 SPS 6, enables you to perform data manipulation language (DML) statements on external data sources. You can create virtual tables in SAP HANA that point to tables in remote data sources. Refer to the SAP documentation for more details on SAP HANA SDA.
  • Athena provides both JDBC and ODBC drivers, which can be used by other applications to query tables in Athena. SAP HANA SDA supports only the ODBC driver, so we will use the ODBC driver in this blog post.

Install and configure the Athena ODBC driver on the SAP HANA system

First, we need to install the Athena ODBC manager and ODBC driver on the SAP HANA System. (Refer to the SAP HANA Quick Start deployment guide for installing SAP HANA on AWS.)

In the steps below, we will assume SUSE Linux as the operating system (the steps are similar for RHEL). Detailed instructions for ODBC driver installation are available in the Simba Technologies ODBC driver installation and configuration guide.

1. Install the ODBC manager

You can install iODBC (version 3.52.7 or later) or unixODBC (version 2.3.0 or later). We will use unixODBC for this setup.

To install unixODBC on the SAP HANA system, execute as root the following command:

zypper install -y unixODBC

zypper command running

2. Install the Athena ODBC driver

Refer to connecting to Amazon Athena with ODBC for the latest RPM package URL. Then on the SAP HANA instance, execute as root the following commands, replacing the URL in the wget command and the file name in the zypper command:

mkdir AthenaODBC
cd AthenaODBC
wget https://s3.amazonaws.com/athena-downloads/drivers/ODBC/Linux/simbaathena-1.0.2.1003-1.x86_64.rpm
zypper --no-gpg-checks install -y simbaathena-1.0.2.1003-1.x86_64.rpm

commands running

3. Attach the IAM policy for the SAP HANA instance

Assign the managed IAM policy AmazonAthenaFullAccess to the IAM role that is assigned to the SAP HANA instance. Refer to the Athena documentation for details.

You can copy this policy and customize it to meet your specific needs.

attach policy
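
If you manage IAM from the command line, attaching the managed policy is a one-liner; the role name below is a placeholder for the role attached to your SAP HANA instance:

# Hypothetical example: attach the managed Athena policy to the instance role.
aws iam attach-role-policy \
  --role-name my-sap-hana-instance-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonAthenaFullAccess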

4. Configure the Athena ODBC driver

On your SAP HANA instance, log in as <sid>adm and switch to the home directory. Create .odbc.ini with the following content, replacing the placeholder values with your specific settings, where MyDSN is the name of the data source. (You can change it to any name you like.)

[Data Sources]
MyDSN=Simba Athena ODBC Driver 64-bit
[MyDSN]
Driver=/opt/simba/athenaodbc/lib/64/libathenaodbc_sb64.so
AuthenticationType=Instance Profile
AwsRegion=<your-aws-region>
S3OutputLocation=s3://<your-bucket>/<your-folder>/

Here is an example:

code for creating o d b c ini

I am using the AWS Sydney region, so I have used ap-southeast-2 as AwsRegion. I have already created an Amazon S3 bucket that contains the TempForSAPAthenaIntegration folder, which I have used as S3OutputLocation. Change these values to reflect your setup.

5. Configure the environment variable

As <sid>adm, create .customer.sh with the following content and change the permissions on this file to 700.

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/simba/athenaodbc/lib/64/
export ODBCINI=$HOME/.odbc.ini

code to create customer.sh

Exit from the <sid>adm session and log in again to check that the environment variables set in .customer.sh are effective (that is, you can see the ODBCINI variable and the changes to LD_LIBRARY_PATH):

changes to .customer.sh

Test Amazon Athena ODBC Driver

Now it’s time to test connectivity to Athena by using the ODBC driver that you installed in the previous step. On your SAP HANA instance, as <sid>adm, execute the following command, replacing <data-source-name> with the name of your data source.

isql <data-source-name> -c -d,

In our example, we defined the data source name as MyDSN in odbc.ini, so we use that data source name here:

successful connection message

If you get an SQL prompt without any error, your ODBC driver has been configured successfully. Next, let’s execute a query against the ecommerce_data table that I have in my environment to confirm that we can run queries and get results back from Athena.

results returned from athena

That’s great—all looks fine.

Configure SAP HANA

As mentioned previously, we will use SAP HANA SDA to connect to the Athena remote data source. We will configure the SAP HANA SDA Generic ODBC adapter for this connectivity.

1. Create the Athena property file

The SAP HANA SDA Generic ODBC adapter requires a configuration file that lists the capabilities of the remote data source. This property file needs to be created as the root user in /usr/sap/<SID>/SYS/exe/hdb/config. We will call this file Property_Athena.ini (you can change this name), and we will create it with the following content.

CAP_SUBQUERY : true
CAP_ORDERBY : true
CAP_JOINS : true
CAP_GROUPBY : true
CAP_AND : true
CAP_OR : true
CAP_TOP : false
CAP_LIMIT : true
CAP_SUBQUERY :  true
CAP_SUBQUERY_GROUPBY : true

FUNC_ABS : true
FUNC_ADD : true
FUNC_ADD_DAYS : DATE_ADD(DAY,$2,$1)
FUNC_ADD_MONTHS : DATE_ADD(MONTH,$2,$1)
FUNC_ADD_SECONDS : DATE_ADD(SECOND,$2,$1)
FUNC_ADD_YEARS : DATE_ADD(YEAR,$2,$1)
FUNC_ASCII : true
FUNC_ACOS : true
FUNC_ASIN : true
FUNC_ATAN : true
FUNC_TO_VARBINARY : false
FUNC_TO_VARCHAR : false
FUNC_TRIM_BOTH : TRIM($1)
FUNC_TRIM_LEADING : LTRIM($1)
FUNC_TRIM_TRAILING : RTRIM($1)
FUNC_UMINUS : false
FUNC_UPPER : true
FUNC_WEEKDAY : false

TYPE_TINYINT : TINYINT
TYPE_LONGBINARY : VARBINARY
TYPE_LONGCHAR : VARBINARY
TYPE_DATE : DATE
TYPE_TIME : TIME
TYPE_DATETIME : TIMESTAMP
TYPE_REAL : REAL
TYPE_SMALLINT : SMALLINT
TYPE_INT : INTEGER
TYPE_INTEGER : INTEGER
TYPE_FLOAT : DOUBLE
TYPE_CHAR : CHAR($PRECISION)
TYPE_BIGINT : DECIMAL(19,0)
TYPE_DECIMAL : DECIMAL($PRECISION,$SCALE)
TYPE_VARCHAR : VARCHAR($PRECISION)
TYPE_BINARY : VARBINARY
TYPE_VARBINARY : VARBINARY

PROP_USE_UNIX_DRIVER_MANAGER : true

2. Change the properties of Property_Athena.ini

After the file has been created, update its ownership to <sid>adm:sapsys, and change the permissions to 444:

changing ownership and permissions
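As root, in the /usr/sap/<SID>/SYS/exe/hdb/config directory, that would be something like the following (replace <sid> with your lowercase SAP system ID):

chown <sid>adm:sapsys Property_Athena.ini
chmod 444 Property_Athena.ini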

3. Restart SAP HANA

We need to restart SAP HANA so that it starts with the environment variable that we previously set in .customer.sh.
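One way to do this, as <sid>adm, is with the HDB script (sapcontrol works as well):

HDB stop
HDB start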

4. Create the remote data source

Use SAP HANA studio to log in to SAP HANA, and follow the menu path to create a remote data source.

new remote source command

5. Define the properties of the remote source

Fill in the values for Source Name, Adapter Name, Connection Mode, Configuration file, Data Source Name, and DML Mode, along with a user name and password. You can enter any dummy values for the user name and password; they are not relevant, because access is based on the IAM role that is assigned to the Amazon Elastic Compute Cloud (Amazon EC2) instance. Ensure that the Configuration file value matches the name of the configuration file that you created (in our example, Property_Athena.ini) and that the Data Source Name matches what you defined in .odbc.ini (in our example, MyDSN).

defining properties of remote source

Then save (Ctrl+S), and confirm that the connection test completes successfully.

checking the connection

You can see that an Amazon_Athena remote data source has been created in SAP HANA, and you can expand it to see the database and table (ecommerce-database and ecommerce_data in my example).

ecommerce_data table in the remote data source

6. Create a virtual table

The next step is to create a virtual table in SAP HANA that points to the table in the remote data source. Open the table name context (right-click) menu in the remote source, and choose Add as Virtual Table.

add as virtual table command

Enter a name for the virtual table and the schema in which the virtual table needs to be defined. For example, I am creating the vir_ecommerce_data virtual table in the SYSTEM schema.

dialog box

You can see the virtual table in the SYSTEM schema.

virtual table

7. Execute queries on the virtual table

Open the SQL console and execute SQL queries on the virtual table. You should be able to get results.

query results
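If you prefer the operating system command line over SAP HANA studio, a quick sanity check with hdbsql might look like the following sketch. (The instance number, database name, and password are placeholders for your own values.)

hdbsql -i <instance_number> -d <database> -u SYSTEM -p <password> 'SELECT COUNT(*) FROM "SYSTEM"."vir_ecommerce_data"'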

8. Execute a query on the local and virtual tables

In SAP HANA, I have created a local table by the name of CUSTOMERMASTER that contains customer details.

customermaster table

We will filter a list of rows from the virtual table where CustomerID is listed in the CUSTOMERMASTER table:

select distinct C."FNAME", C."LNAME", V."customerid", V."country"
from "CUSTOMERMASTER" as C, "vir_ecommerce_data" as V
where V."customerid" = C."CUSTOMERID"

federated query from SAP HANA

That's all! We have successfully federated queries from SAP HANA to an Amazon S3–based data lake by using Athena.

Summary

We used the SAP HANA SDA feature and ODBC drivers from Amazon Athena to federate queries from SAP HANA to Athena. You can now combine data from SAP HANA with data that is available in an Amazon S3 data lake without needing to copy this data to SAP HANA first. Queries are executed by Athena and results are sent to SAP HANA.

Share with us how you have used Athena with SAP HANA or reach out to us with any questions. You can use AWS promotional credits to migrate your SAP systems to AWS. Contact us to find out how and to apply for credits.


Now available: New RHEL for SAP with HA and US in AWS Marketplace


Feed: AWS for SAP.
Author: Sabari Radhakrishnan.

While AWS and Red Hat have been working together for a long time to make it easy for our mutual customers to run SAP workloads, including SAP HANA, on AWS, our customers have had only two choices. They could run various SAP workloads on Red Hat Enterprise Linux (RHEL) on-demand by using the RHEL for SAP HANA Amazon Machine Image (AMI) from AWS Marketplace. Or they could use the Bring Your Own Subscription (BYOS) model images available through the Red Hat Cloud Access program.

The RHEL for SAP HANA AMI provides an easy way for customers to get started, but the listing doesn’t have some of the most sought-after features, like support for SAP Business Applications based on SAP NetWeaver, access to Pacemaker cluster software for high availability installations, and extended four-year update support (E4S). Customers that required these important features had to use the Red Hat Cloud Access BYOS model. Over the past year, we have worked closely with Red Hat to bring these critical features to the AWS Marketplace.

Today, we are excited to share that a new product, Red Hat Enterprise Linux for SAP with High Availability and Update Services (RHEL for SAP with HA and US), is available in the AWS Marketplace for the US East (N. Virginia) Region, with additional Regions coming soon. Using RHEL for SAP with HA and US, you can run SAP HANA as well as SAP Business Applications based on SAP NetWeaver. In addition, this product offers access to Red Hat Pacemaker cluster software to set up high availability for your SAP HANA and SAP Business Applications installations. Finally, you can take advantage of E4S for certain minor versions for your mission-critical SAP workloads.

RHEL for SAP with HA and US is available through both on-demand and yearly subscription models. To quickly get started with your SAP HANA deployment, use the SAP HANA Quick Start. The SAP HANA Quick Start follows the best practices of AWS, SAP, and Red Hat to automatically provision and configure the AWS resources required for SAP HANA deployment, operating system configuration, and installation of SAP HANA software in less than an hour.

You can use RHEL for SAP with HA and US with our suite of Amazon EC2 instances that are supported by SAP, including the recently launched Amazon EC2 High Memory Instances for applications based on SAP HANA and SAP NetWeaver. Refer to SAP HANA Hardware Directory to find the list of SAP Certified Amazon EC2 instances for SAP HANA. Refer to SAP OSS Note 1656099 to find the list of SAP-supported instance types for applications based on SAP NetWeaver.

Refer to Red Hat’s knowledge base article Red Hat Enterprise Linux for SAP offerings on Amazon Web Services FAQ to learn more. And reach out to us if you have any questions or need help getting started with this new product.

Announcing support for extremely large S/4HANA deployments on AWS


Feed: AWS for SAP.
Author: Steven Jones.

Steven Jones is a Technology Director and Global Technical Lead for the AWS Partner Organization.

The other day my son, who has headed off to college and is still looking at career options, asked me why I like working at Amazon. I tried as best I could to explain how I get to help Amazon Web Services build service offerings for some of the world's most demanding customer workloads performed by computers today, and how lucky I feel to be a part of a company that continues to push boundaries on behalf of customers' needs and to help drive technology shifts.

As an example, last fall we announced the availability of Amazon Elastic Compute Cloud (Amazon EC2) High Memory instances powered by the latest generation Intel® Xeon® Scalable (Skylake) processors, which provide 6, 9, and 12 TB of memory for large in-memory SAP HANA workloads. Since then, customers using these Amazon EC2 High Memory instances for operating their business-critical workloads have told us they love the ease with which they’ve been able to deploy and integrate these native cloud instances seamlessly right alongside their application servers and other AWS services. This unique technology is up for a 2019 SAP Innovation award. And today, High Memory instances deliver the most memory of any SAP-certified cloud instance.

SAP-certified large scale-out deployments

We’re not stopping at 12 TB, with larger sizes due out later this year. Based on customer input, we have also been working with SAP to support additional deployment options for Amazon EC2 High Memory instances to support even larger database sizes.

With that, I’m pleased to share with you that SAP has certified scale-out deployments using the 12 TB Amazon EC2 High Memory instance type for S/4HANA workloads.

For the first time, you have the ability to leverage scale-out setups for your S/4HANA workloads in the cloud and take advantage of the innovation of the AWS Nitro system, a combination of purpose-built hardware and software components that provide the performance, security, isolation, elasticity, and efficiency of the infrastructure that powers Amazon EC2 instances. You can now scale out up to four nodes, totaling 48 TB of memory, for extremely large S/4HANA deployments. Here’s a screenshot showing a large S/4HANA scale-out cluster with 4 x 12 TB nodes.

screenshot of scale-out cluster

Over the years, we have worked closely with SAP to build and certify a broad range of Amazon EC2 instance types to provide granular memory deployment options for all types of SAP HANA workloads ranging from 60 GB to 12 TB. Customers tell us they are able to start with the memory sizes they currently need and grow their infrastructure as their individual needs dictate.

The best part is that Amazon EC2 instances provide the ability to seamlessly scale up to larger instance types through a simple stop/start process without the need for lengthy migrations or costly outages.

When you reach the 12 TB threshold, the scale-out support for S/4HANA means that you can now also scale out incrementally as your needs change.

Provisioning large scale-out clusters often requires several weeks or even months of planning and execution with on-premises or colocation setups. On AWS you can provision single node virtual or bare metal systems, and even large SAP HANA scale-out clusters, using our automated AWS Quick Start for SAP HANA in less than an hour.

Business continuity is essential with these mission-critical workloads. On-premises or colocation high availability setups almost always leverage a single-data center deployment model. On AWS, you can set up SAP HANA high availability clusters using SAP HANA System Replication across Availability Zones (multiple data centers) within an AWS Region for increased resiliency and protection from data center failures. Additionally, for disaster recovery, you can set up SAP HANA System Replication in asynchronous replication mode across AWS Regions.

These instances are available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Tokyo), and AWS GovCloud (US) Regions. If you are ready to get started, contact your AWS account team or use the Contact Us page to make a request.

Stay tuned for more information on larger instance sizes, with 18 TB and 24 TB coming later this year.

Deploying highly available SAP systems using SIOS Protection Suite on AWS


Feed: AWS for SAP.
Author: Somckit Khemmanivanh.

This post is by Santosh Choudhary, Senior Solution Architect at Amazon Web Services (AWS).

AWS provides services and infrastructure to build reliable, fault-tolerant, and highly available systems in the cloud. Because SAP systems are critical to the business, high availability for them is essential.

High availability for SAP applications can be achieved in many ways on AWS, depending on the operating system and database that you use: for example, with SUSE High Availability Extension (SUSE HAE), Red Hat Enterprise Linux for SAP with High Availability and Update Services (RHEL for SAP with HA and US), Veritas InfoScale Enterprise for AWS, or SIOS Protection Suite.

In this post, we will see how to deploy SAP on AWS in a highly available manner in Windows and Linux environments using SIOS Protection Suite. We’ll also cover some of the differences in SIOS setup in Windows and Linux environments.

SIOS Protection Suite software is a clustering solution that provides a tightly integrated combination of high availability failover clustering, continuous application monitoring, data replication, and configurable recovery policies to protect business-critical applications and data from downtime and disasters.

To start with, AWS recommends deploying the workload in more than one Availability Zone. Each Availability Zone is isolated, but the Availability Zones in an AWS Region are connected through low-latency links. If one instance fails, an instance in another Availability Zone can handle requests.

diagram with three availability zones in a region

Now, let’s explore the architectural layers within an SAP NetWeaver system, single points of failure (SPOFs) within that architecture, and the ways to make these components highly available using SIOS Protection Suite.

Understanding SAP NetWeaver architecture

The SAP NetWeaver stack primarily consists of a set of ABAP SAP Central Services (ASCS) servers, a primary application server (PAS), one or more additional application servers (AAS), and the databases.

ASCS consists of Message Server and Enqueue Server. Message Server acts as a communication channel between the application servers and provides load balancing between the application servers. Enqueue Server stores the database table locks and forms the critical component of ASCS to ensure database consistency.

In an SAP architecture, the ASCS instance and the database are the SPOFs, and in highly available scenarios they need to be made highly available and fault tolerant.

To achieve high availability, ASCS instances are deployed in a clustered environment like Windows Server Failover Clustering (WSFC) or Linux clusters. One of the requirements of a clustered environment is a shared file system. On the AWS Cloud, SIOS DataKeeper can be used to replicate the common file share across the Availability Zones.

Setup for a Windows environment

SIOS DataKeeper, part of SIOS Protection Suite, is an SAP-certified, optimized, host-based replication solution that performs block-level replication across Availability Zones, which makes it possible to configure and manage a highly available Server Message Block (SMB) file share.

It is used to make a / highly available file system by replicating the content in synchronous mode. It can also be used to make /usr/sap/trans a shared file system.

Using SIOS DataKeeper Cluster, you can achieve high availability protection for critical SAP components, including the ASCS instance, back-end databases (Oracle, DB2, MaxDB, MySQL, and PostgreSQL), and the SAP Central Services instance (SCS) by synchronously replicating data at the block level. In a Windows environment, the DataKeeper Cluster integrates seamlessly with Windows Server Failover Clustering (WSFC). WSFC features, such as cross-subnet failover and tunable heartbeat parameters, make it possible for administrators to deploy geographically dispersed clusters.

The setup consists of Windows Failover Cluster Manager with both ASCS nodes (e.g., ASCS-A and ASCS-B as shown in the following screenshot) and a file server that acts as witness in the cluster. We recommend deploying the file server in a separate, third, Availability Zone.

failover cluster manager nodes

At any point in time, the cluster is pointing to one active node.

failover cluster manager

The following diagram shows the architecture of a highly available SAP system on AWS.

high availability s a p architecture diagram

Customers can either choose to do database replication using database-specific methods (like SQL Always On availability groups) or block-level replication using SIOS for both the database and the ASCS instance. The SAP Recovery Kit, which is part of the SIOS Protection Suite, provides monitoring and switchover for different SAP instances. It works in conjunction with other SIOS Protection Suite Recovery Kits (e.g., the IP Recovery Kit, NFS Server Recovery Kit, NAS Recovery Kit, and database recovery kits) to provide comprehensive failover protection.

The following diagram shows the high-level architecture of SIOS DataKeeper used to create a file share for ASCS in a cluster environment, while leveraging native SQL Server replication (using an Always On availability group).

HA SAP with MS SQL Server

This next diagram shows the generic architecture of highly available SAP (running on AnyDB) using SIOS.

diagram for generic h a architecture

Setup for a Linux environment

In the case of a Linux environment, both the DataKeeper and LifeKeeper components of SIOS Protection Suite are used. DataKeeper provides the data replication mechanism, and LifeKeeper is responsible for automatically orchestrating failover of SAP ASCS and databases (such as SAP HANA, DB2, and Oracle) across Availability Zones. The SAP HANA Recovery Kit within LifeKeeper starts the SAP HANA system on all nodes and performs the takeover process of system replication.

The actual IP address of the SAP ASCS Amazon Elastic Compute Cloud (Amazon EC2) instance and the underlying database is abstracted by using an overlay IP address (also called a floating IP address). An overlay IP address is an AWS-specific routing entry that sends network traffic to an instance within a particular Availability Zone. As part of the failover orchestration, LifeKeeper is also responsible for changing the entries within the route table during failover to redirect traffic to the active (primary) node.

architecture diagram with route tables
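As an illustration of what happens behind the scenes, repointing an overlay IP address at the currently active node amounts to updating a route table entry, which LifeKeeper does automatically during failover. A manual equivalent with the AWS CLI would look something like the following (the route table ID, IP address, and instance ID are placeholders):

aws ec2 replace-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 192.168.10.10/32 \
  --instance-id i-0123456789abcdef0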

The detailed SIOS guide steps through the deployment of SAP NetWeaver with high availability on AWS using SIOS Protection Suite. The whitepaper uses NFS as part of the setup. However, you can simplify the setup by using Amazon Elastic File System (Amazon EFS) instead.

Amazon EFS provides a simple, scalable file system for Linux-based workloads that are running on AWS Cloud services and on-premises resources. It is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.
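For example, an Amazon EFS file system can be mounted on each cluster node with a standard NFSv4.1 mount; the file system ID, Region, and mount point below are placeholders for your own values:

sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs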

In case of any questions, please feel free to reach out to us.

Now available: SAP S/4HANA Quick Start for rapid deployment on AWS


Feed: AWS for SAP.
Author: Sabari Radhakrishnan.

This post was written by Kuang-Shih Howard Lee, who is an SAP solutions architect at Amazon Web Services (AWS).

In today’s business world, speed is everything. Enterprises must transform their IT assets at an increasing pace to stay ahead while dealing with complex technologies and deployment models. At Amazon Web Services (AWS), we’re working on simplifying and fast-tracking your SAP software deployments to the cloud, to save you time and resources. We’re excited to announce a new AWS Quick Start for SAP S/4HANA that enables businesses to deploy their SAP S/4HANA workloads on the AWS Cloud in less than three hours, compared with a manual deployment that can take days or even weeks to complete.

SAP S/4HANA, the newest generation of the enterprise resource planning (ERP) software package from SAP that supports core enterprise business functions, is optimized for SAP HANA in-memory databases. With the recently released Amazon EC2 High Memory instances, SAP customers now have the ability to scale their SAP HANA database up (up to 12 TB of memory) and out (up to 48 TB of memory) for extremely large S/4HANA deployments. For details, see SAP S/4HANA on AWS on the AWS website.

What is the AWS Quick Start for SAP S/4HANA?

The AWS Quick Start for SAP S/4HANA is a deployment automation tool designed by AWS solution architects that is built on AWS CloudFormation, Python, and shell scripts. The Quick Start follows best practices from AWS, SAP, and Linux vendors to automatically set up an AWS environment and ready-to-run SAP S/4HANA system. (This Quick Start is the latest addition to the set of AWS Quick Starts that automate the deployment of SAP workloads; you might also want to check out the Quick Starts for SAP HANA, SAP Business One, version for SAP HANA, and SAP NetWeaver.)

SAP S/4HANA Quick Start components and deployment options

The Quick Start deploys an SAP S/4HANA system that consists of a number of Amazon Elastic Compute Cloud (Amazon EC2) instances and AWS core infrastructure services into a new or an existing virtual private cloud (VPC) in your AWS account. It offers two main deployment options: a single-scenario standard deployment and a multi-scenario distributed deployment with or without SAP software installed. Additionally, you can choose whether to use Amazon Elastic File System (Amazon EFS) or Network File System (NFS) for your shared file system. You can also choose to deploy a bastion host and Remote Desktop Protocol (RDP) server. You can choose a combination of the following components to launch an SAP S/4HANA environment that meets your requirements.

Primary resources:

  • SAP HANA primary database
  • SAP S/4HANA ABAP SAP Central Services (ASCS) server
  • SAP S/4HANA Primary Application Server (PAS)

Secondary and optional resources:

  • SAP HANA secondary database for high availability
  • SAP S/4HANA standby ASCS server for high availability
  • Optional SAP S/4HANA Additional Application Server (AAS)
  • Optional bastion host and RDP instances

To ensure business continuity, the AWS Quick Start for SAP S/4HANA also enables you to create an SAP HANA database with high availability, using SAP HANA System Replication (HSR) across Availability Zones within an AWS Region. In addition, you can set up a standby ASCS server for high availability alongside the SAP HANA database to protect mission-critical SAP workloads from Availability Zone outages.

The Quick Start offers the following standard and distributed deployment options for SAP S/4HANA on AWS.

For a new VPC:
Options for deploying S/4HANA into a new VPC on AWS

For an existing VPC:
Options for deploying S/4HANA into an existing VPC on AWS

For more information about these deployment options, see the AWS Quick Start for SAP S/4HANA deployment guide.

The S/4HANA architecture on AWS

The following diagram shows the standard deployment architecture of a typical four-server SAP S/4HANA cluster that hosts the SAP HANA database, ASCS, PAS, and AAS separately in a private subnet within the same Availability Zone, and a bastion host and RDP server in a public subnet.

Four-server S/4HANA cluster in one Availability Zone

The following diagram shows the deployment architecture of a typical four-server, high-availability cluster that hosts the primary SAP HANA database, primary ASCS, PAS, and AAS separately in a private subnet in one Availability Zone, and the secondary SAP HANA database and standby ASCS in a private subnet in another Availability Zone. This architecture also includes a bastion host in an Auto Scaling group and an RDP server in the public subnet of the first Availability Zone.

Four-server S/4HANA cluster in two Availability Zones (HA)

Getting started

To get started with this Quick Start deployment, read through the deployment guide to get a general understanding of the components and deployment options, and then follow the instructions in the guide to launch the Quick Start into your AWS account. Depending on your parameter selections, the Quick Start can take between 1.5 and 2.5 hours to complete the deployment.

The source templates and code are available to download from GitHub. If you would like to customize this Quick Start to meet your needs, see the AWS Quick Start Contributor’s Guide.

What’s next?

We will continue to enhance the SAP S/4HANA Quick Start to support new operating system versions, SAP S/4HANA software packages, and AWS services and instance types. Let us know if you have any comments or questions—we value your feedback.

Simplify your SAP S/4HANA journey and innovate faster


Feed: AWS for SAP.
Author: Bas Kamphuis.

This post was written by Fernando Castillo, Head WW SAP, at Amazon Web Services (AWS).

This week marks my fifth anniversary of going to SAPPHIRE NOW as part of the Amazon Web Services (AWS) team. During those five years, we’ve been able to help thousands of customers in their SAP journeys to AWS.

During this time, I have had the opportunity to travel around the world and connect with many SAP customers to understand their challenges and to help them find ways to overcome and achieve their goals. Through these interactions—from Seaco, Coca-Cola Icecek, and BP in the early days, to AIG, ENGIE, Bristol-Myers Squibb, Fast Retailing, FirstGroup, and many others more recently—two common themes have emerged as the key benefits of moving to AWS: retiring technical debt and accelerating innovation.

Retiring technical debt means that customers can move away from their old, inflexible, on-premises infrastructure and take advantage of modern DevOps, automation, and flexibility that only the cloud provides. Customers can now move to S/4HANA without long-term commitments, with the ability to explore with low risk and to deliver value to the business.

AWS and SAP on AWS competency partners have been developing multiple tools to simplify migrations to the AWS Cloud. For example, customers are able to provision certified HANA environments (2 TB / 4 TB) in minutes and shut them down with a simple command (or voice if you prefer to use Alexa). We just launched our AWS SAP S/4HANA Quick Start, which enables customers to build fully certified S/4HANA environments in less than 2.5 hours! ENGIE and Fast Retailing, known for their Uniqlo brand, are clear examples of how AWS has been instrumental in enterprises’ S/4HANA journeys.

Figure 1: How the AWS 100% software-defined cloud infrastructure helps retire technical debt

But customers don’t just want to move their SAP solutions from their on-premises data centers to AWS. They want to accelerate innovation. Customers are using SAP Cloud Platform, which runs in seven AWS Regions worldwide today, to enable innovation from both SAP and AWS. Customers are also taking advantage of the full range of AWS services, such as AWS IoT Core, to combine their data on edge devices with their SAP solutions. We are working very closely with SAP on cloud-to-cloud interoperability; stay tuned for more information this week.

Finally, a key component of how we are helping accelerate innovation is our industry focus. By working with many customers in multiple industries, we have been building industry solutions. For example, we have created certifications like GxP for Life Sciences and other reference architectures. At SAPPHIRE, you will be able to connect with our industry team that covers 13 distinct industries.

Figure 2: AWS innovation pillars and industries

With these topics in mind, when we started thinking about SAPPHIRE NOW 2019 and reflecting on the challenges SAP customers are facing and how we have been helping them, it became evident that Simplify your S/4HANA journey and innovate faster captured this year’s simple but powerful theme.

Figure 3: Today, AWS is helping customers retire their technical debt by helping them move to S/4HANA faster, with tools, methods, and competency partners. At the same time, AWS is helping accelerate innovation by taking advantage of SAP Cloud Platform, which is leveraging the innovation AWS brings.

I look forward to seeing you in Orlando. It’s going to be a very interesting week, with multiple announcements and great innovations showcased at our booth (#2000). We have a jam-packed week planned, with sessions, demos, social events, and opportunities to hear from our partner community. You’ll also be able to learn first-hand how you can join the many customers who have successfully migrated to AWS to take advantage of the innovations AWS has to offer.
