Run a CAP Application on Kyma
Preface
SAP recently released the SAP Cloud Platform, Kyma runtime, a managed Kubernetes offering based on Kyma.
Kyma is mostly about extending existing applications, but you also get a full-blown Kubernetes cluster, including the Istio service mesh, that you can use to build a standalone cloud application.
In this tutorial, you deploy an application built with the SAP Cloud Application Programming Model (CAP) to an SAP Cloud Platform, Kyma runtime cluster. The CAP application exposes an OData service, ships an SAP Fiori UI, and uses SAP HANA as its database.
The CAP Risk Management Example is used as the starting point.
The tutorial can also be done with any other Kyma installation, but you need an SAP HANA database and an HDI container, and you have to handle the part about the HANA credentials differently.
The CAP part is probably the smallest in this tutorial: frankly speaking, you just package the application into a docker container and run it. But the tutorial also describes how to get a small docker registry running. If you're experienced with Kubernetes, you can skip some sections.
Since SAP HANA Cloud isn't yet available for Kyma, you take it from Cloud Foundry. It's a bit tricky to copy the credentials, but in the end it's just copying and pasting values. Scripts are provided to help you here ( :-) ). So, please don't get distracted by this.
Disclaimer
Please note that this tutorial is intended as an introduction to the topic and not for deploying productive applications.
The SAP Cloud Application Programming Model (CAP) does not officially support Kubernetes and Kyma as a platform right now.
Preconditions
These are the preconditions for this tutorial:
CAP Risk Management Example Application
You can find the starting point of this tutorial in the cap/freestyle-ui5-app branch:

1. Go to the directory where you want to create the example.

2. Create a folder for your example, e.g.:

   ```
   mkdir cap-kyma-app
   ```

3. Clone the example git repository and check out the example branch:

   ```
   git clone https://github.com/SAP-samples/cloud-cap-risk-management
   cd cloud-cap-risk-management
   git checkout cap/freestyle-ui5-app
   ```

4. Copy all files from the example to your folder, except the `.git` folder, e.g.:

   ```
   cp -r .gitignore $(ls -1A | grep -v .git) ../cap-kyma-app
   ```

5. Open a new project in your source editor for the folder `cap-kyma-app`, for Visual Studio Code:

   ```
   cd ../cap-kyma-app
   code .
   ```
The final code is in the kyma/app branch.
Local Software
The following local software is required:

- `node` (Node.js 12.x is recommended)
- `docker` (for example, Docker Desktop for Mac or Windows)
- A source code editor (Visual Studio Code is recommended)
- `bash` or `zsh` shell to run the command snippets (available anyway on Mac and Linux; on Windows, Git Bash should do, or you can use MinGW or Cygwin)
- `kubectl` (Kubernetes command line)
- `helm` (Helm chart command line, not needed if you use an existing docker registry)
- `cf` (Cloud Foundry command line)

If you have a Mac, many of the commands can be installed using Homebrew (`brew install ...`). For Windows, there's a similar offering called Chocolatey. Otherwise, please refer to the binary installers of the components.
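For example, on a Mac most of the list can be installed in one go. This is only a sketch; the formula names reflect the state of Homebrew at the time of writing:

```
# CLI tools via Homebrew (macOS)
brew install node kubectl helm
brew install cloudfoundry/tap/cf-cli

# Docker Desktop is distributed as a cask
brew cask install docker   # on newer Homebrew: brew install --cask docker
```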
SAP Cloud Platform Subaccount
You need an SAP Cloud Platform subaccount with a consumption-based model (that is, Cloud Credits). As of September 2020, Kyma is supported on Azure landscapes only.
Enable Kyma
Although Kyma is needed at a later point in the tutorial, it’s recommended to start with this step, because the Kyma provisioning may take some time.
You can either create your own SAP Cloud Platform Trial account or add Kyma to an existing Subaccount.
Create Your Own SAP Cloud Platform Trial account
- Open https://cockpit.eu10.hana.ondemand.com/trial/#/home/trial
- Choose Enter Your Trial Account
- If you don't have a user, you need to register
- Wait for the completion of the onboarding
- You should land on the Subaccounts page of your trial global account
- Choose trial
- Choose Enable Kyma

This will take a while. You can start the tutorial in the meantime.
If you already have an older trial account, you might not see the Enable Kyma button. In that case, go to Entitlements and add the Kyma Runtime service plan.
Use an Existing SAP Cloud Platform Subaccount
Please follow the steps "Enabling the Kyma runtime" and "Provide Authorization to access Kyma runtime" in this blog post: https://blogs.sap.com/2020/05/13/sap-cloud-platform-extension-factory-kyma-runtime-how-to-get-started/
Enable Cloud Foundry
To use the SAP HANA Cloud service, you can either enable Cloud Foundry for the same subaccount or use a different subaccount, for example, an already existing or trial subaccount.
Install the Cloud Foundry CLI
Later in this tutorial, you need to log on to Cloud Foundry using the command line. Therefore, it's necessary to install the Cloud Foundry CLI. Please check the Cloud Foundry documentation for detailed steps on how to do that.
Run the CAP Application in a Docker Container Locally
In the first part of this tutorial, you prepare your application to be run on Kyma.
Build a Docker Container
Since all applications in Kubernetes, and hence in Kyma, are docker containers, you need to create a docker image for the CAP application. For that, you define a file `Dockerfile` that describes how to build the image and what to do when the docker image is run. The file starts with the `FROM` directive that names the base image you want to use, since you don't want to start from scratch. Here, you use a public image that already contains the Node.js 12.x installation. Additionally, you install `openssl`, which is required by the SAP HANA client, and carry out `npm install`.

You then declare that the CAP default port `4004` is exposed to the outside and run the CAP server with `npm start`.
1. Navigate to the root folder of your app:

   ```
   cd cpapp
   ```

2. Create a file named `Dockerfile` and add the following lines to it:

   ```
   FROM node:12-slim

   WORKDIR /usr/src/app

   COPY gen/srv .
   RUN npm install

   EXPOSE 4004
   USER node
   CMD [ "npm", "start" ]
   ```

3. Add `sqlite3` as a project dependency, so you can try out the scenario without an external database service:

   ```
   npm install --save sqlite3
   ```

4. Add the following snippet to the `package.json` file:

   ```
   {
     "name": "cpapp",
     ...
     "cds": {
       "requires": {
         "db": {
           "kind": "sql"
         }
       }
     }
   }
   ```

   This tells CAP to use SQLite in development and SAP HANA in productive mode (see the configuration check after this list).

5. Before you can build the image, run `cds build`, because the image takes the build results from the `gen/srv` folder. You could also do this in the docker build, but that would require additional steps that you skip for now.

   ```
   cds build
   ```

6. Build the docker image locally. Make sure that the docker daemon is running (e.g. Docker Desktop for Mac or Windows).

   ```
   docker build -t cpapp .
   ```

   This builds the docker image specified in `Dockerfile` from the current directory (the `.` argument). The image is tagged with the name `cpapp`; without the `-t` option, the image would only be identifiable by its auto-generated ID. You should see an output similar to this:

   ```
   ...
   Removing intermediate container 4f451017d70f
    ---> 948523646f60
   Step 5/6 : EXPOSE 4004
    ---> Running in 1a2b7a0ec606
   Removing intermediate container 1a2b7a0ec606
    ---> be849ff002e1
   Step 6/6 : CMD [ "npm", "start" ]
    ---> Running in cb0b32163709
   Removing intermediate container cb0b32163709
    ---> 1e0c26b94ac6
   Successfully built 1e0c26b94ac6
   Successfully tagged cpapp:latest
   ```
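If you want to verify which database configuration CAP will pick up, the `cds` CLI can print the effective settings per profile. This is only a quick check; the exact invocation and output format depend on your `@sap/cds` version:

```
# Effective database config in the default (development) profile;
# "kind": "sql" should resolve to SQLite here
cds env requires.db

# The same check for the production profile used later on Kyma,
# which should resolve to SAP HANA
cds env requires.db --profile production
```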
Docker images consist of several "filesystem layers". The base image is a layer, and your own docker image is a layer on top. Each layer can add or remove files. This is convenient because it saves storage: your custom images contain only the delta of files added or removed compared to the base image. The docker build automatically decides when to create a new layer. You can see the different layers in the `docker build` output, for example: ` ---> 365313c4290e`
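You can inspect these layers for the image you just built; `docker history` lists one row per layer, newest first, including the Dockerfile step that created it:

```
# Show the layer history of the cpapp image
docker history cpapp
```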
Run the Docker Container
Check the Content of the Docker Container
1. You can run the docker container and look inside its contents:

   ```
   docker run -i -t cpapp /bin/bash
   ```

   This starts a docker container from your image `cpapp` in interactive mode (`-i`) with a terminal attached (`-t`) and runs the bash shell (`/bin/bash`) that happens to be part of your base image.

2. Look inside the contents using the `ls` command:

   ```
   node@a5a0b8115eb5:/usr/src/app# ls
   manifest.yaml  node_modules  package-lock.json  package.json  srv
   ```

3. Exit the container using `exit` (pro tip: `Ctrl+D` :-)).
Run Your CAP Service
Now it's time to run your CAP service. So let's do this.

1. Run the container:

   ```
   docker run -t cpapp
   ```

   Without specifying a command, it runs the default command, that is, `npm start`. You can try to access the service on http://localhost:4004, but it doesn't work. It shows an error message similar to this:

   ```
   This site can't be reached
   localhost refused to connect.
   ```

   Although the docker container exposes port 4004, the "host" of the container, that is, your PC or Mac, doesn't make it accessible. You need to declare it in the docker command line.

2. Stop the service with `Ctrl+C`.

3. Run the container again with the publish parameter:

   ```
   docker run -p 4004:4004 -t cpapp
   ```

   This tells docker to expose port `4004` of the docker container on port `4004` of the host. You could also use a different port on the host, but let's keep it simple. Now, you can access the CAP service on http://localhost:4004.

   You can click on the Risks (http://localhost:4004/service/risk/Risks) or Mitigations (http://localhost:4004/service/risk/Mitigations) link, each of which returns an empty OData response.
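If you prefer the command line over the browser, a quick smoke test of the OData endpoint could look like this (assuming the container from the previous step is still running):

```
# Fetch the Risks entity set as JSON; expect an empty OData
# response, roughly {"@odata.context":"$metadata#Risks","value":[]}
curl http://localhost:4004/service/risk/Risks
```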
Add SAP Fiori UI
There's already a Fiori elements UI for Risks and a SAPUI5 freestyle UI for Mitigations in the project. You could think of several ways to deploy them to the cloud. For the sake of simplicity, you use the CAP service's capability to serve static resources from the `app` folder. After `cds build`, the `app` folder isn't part of the service. You can copy it in, but you need to remove the `*.cds` files, because they're already copied from `app` to `srv`, and duplicating these files confuses CAP.

You can automate this in the docker build by modifying the `Dockerfile`.
1. Add the highlighted lines (the `COPY app app/` and `RUN find ...` statements) to the file `Dockerfile`:

   ```
   FROM node:12-slim

   WORKDIR /usr/src/app

   COPY gen/srv .
   RUN npm install

   COPY app app/
   RUN find app -name '*.cds' | xargs rm -f

   EXPOSE 4004
   USER node
   CMD [ "npm", "start" ]
   ```

2. Rebuild the docker image:

   ```
   cds build
   docker build -t cpapp .
   ```

3. Run it locally:

   ```
   docker run -p 4004:4004 -t cpapp
   ```

4. Try it out by navigating to: http://localhost:4004/launchpage.html
Deploy to Kyma
In this part of the tutorial, you deploy the dockerized CAP application to Kyma.
Log In to Kyma (Kubernetes Cluster)
The first step is to log in to Kyma using the Kyma Console and to configure the local `kubectl` command to connect to the Kyma Kubernetes cluster.
- Open the SAP Cloud Platform Cockpit.
- Choose your global account.
- Choose your subaccount.
- On the Overview page, under Kyma Environment, choose Link to dashboard.
- Log in to Kyma.
- Choose the account icon in the upper right corner.
- Choose Get Kubeconfig. A file download should be triggered. If no download is triggered, see the troubleshooting info below.

Troubleshooting: If no download is triggered

It can happen that no download is triggered. In this case, follow these steps:

- Open your browser's developer tools.
- Navigate to the network tab.
- Choose the Get Kubeconfig button again.
- Locate the request for the `kubeconfig` file.
- Look at the response.
- Copy the response, you will need it in the next steps.
1. Navigate to your home folder. The config for the default cluster is stored in `.kube/config` in your home directory.

2. Navigate to the `.kube` folder.

3. Create a file named `cap-kyma-app-config` to avoid overwriting the existing configuration.

4. Copy the content of the downloaded `kubeconfig.yml` into the file `cap-kyma-app-config`.

5. Set the new config file for the running shell process:

   ```
   export KUBECONFIG=~/.kube/cap-kyma-app-config
   ```

   Now, you can access your Kubernetes cluster.

6. Check if you can access your Kubernetes cluster:

   ```
   kubectl get pods
   ```

   The command should run without an error message, but it doesn't output any pods if you have a newly created cluster.
If you want to use `kubectl` in another shell session, rerun the `export` statement. The authentication session expires after some hours. You then need to download the `kubeconfig.yml` file again and replace the value of the token parameter in your `cap-kyma-app-config` file with the one from the newly downloaded `kubeconfig.yml` file.
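A quick way to double-check that `kubectl` now talks to the Kyma cluster: in the downloaded kubeconfig, the context name happens to be the cluster's domain, which several commands later in this tutorial rely on.

```
# Run in a shell where KUBECONFIG is set as above
kubectl config current-context
# -> c-abcd123.kyma.shoot.live.k8s-hana.ondemand.com (your cluster domain)
```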
Prepare the Docker Registry
Kubernetes needs a docker registry that can be accessed from the cluster's network. This could be any public or private registry. To keep this tutorial self-contained, you use a slightly different approach, which isn’t recommended for productive use: You deploy your own docker registry to the cluster.
If you want to use a different docker registry, then you need to adjust the `docker push` commands and the URLs for the docker images.
In the approach with the cluster's own docker registry, a Helm Chart is used to install it on the cluster.
1. Add the stable Helm chart repository to the `helm` CLI:

   ```
   helm repo add stable https://kubernetes-charts.storage.googleapis.com/
   ```

2. Install the Helm chart for a docker registry:

   ```
   helm install docker-registry stable/docker-registry
   ```

3. You need to make the docker registry available on the public internet. The details of this step are explained later. Run the following command:

   ```
   kubectl apply -f - <<EOF
   apiVersion: gateway.kyma-project.io/v1alpha1
   kind: APIRule
   metadata:
     labels:
       app: docker-registry
     name: docker-registry
   spec:
     service:
       host: docker-registry
       name: docker-registry
       port: 5000
     gateway: kyma-gateway.kyma-system.svc.cluster.local
     rules:
       - path: /.*
         methods: ["GET", "HEAD"]
         accessStrategies:
           - handler: noop
         mutators: []
   EOF
   ```

4. To be able to push docker images via HTTP, you need to add the registry as an "insecure registry" (not using secure socket communication) to your Docker config:

   - Open your Docker Desktop.
   - Choose Preferences.
   - Choose Docker Engine.
   - Add the following line:

     ```
     {
       ...
       "insecure-registries" : ["0.0.0.0:5000"]
     }
     ```

   - Choose Apply and Restart.
   - Wait for the startup to be completed.
Push Docker Image
1. Since the docker registry isn't exposed to the public internet for pushing (and you don't want that), you need to establish a tunnel from your localhost to the registry:

   ```
   kubectl port-forward deployment/docker-registry 5000:5000 &
   ```

   The `&` causes the process to run in the background. You need to keep it running until you've finished pushing docker images. You may need to restart it in case the `docker push` doesn't work anymore. You should see the following output that tells you that the tunnel is established:

   ```
   Forwarding from 127.0.0.1:5000 -> 5000
   Forwarding from [::1]:5000 -> 5000
   ```

2. Your docker image needs an additional tag to declare it part of your forwarded docker registry. Otherwise, you can't push it.

   ```
   docker tag cpapp 0.0.0.0:5000/cpapp
   ```

3. Push it, using the new tag:

   ```
   docker push 0.0.0.0:5000/cpapp
   ```

   The output mixes the `docker push` output with the `kubectl port-forward` output. Anyhow, it should finish with a line like:

   ```
   latest: digest: sha256:4054dd60ee4f9889d58aa97295cb3b1430a5c1549e602b6c619d7c4ed7d04ad0 size: 2412
   ```
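As an optional check, you can ask the registry itself whether the image arrived. The docker registry exposes an HTTP API for that, reachable through the same port-forward (assuming the tunnel is still running):

```
# List all repositories in the registry
curl http://localhost:5000/v2/_catalog
# -> {"repositories":["cpapp"]}

# List the tags of the pushed image
curl http://localhost:5000/v2/cpapp/tags/list
# -> {"name":"cpapp","tags":["latest"]}
```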
Deploy the CAP Application
Now, you can deploy your CAP service to Kubernetes. You use the `Deployment` resource of Kubernetes to describe the application. It contains a description of the container, manages its creation, and takes care that the instance keeps running.
1. Create a directory to store your deployment YAML files:

   ```
   mkdir deployment
   ```

2. Create a file `deployment/deployment.yaml` with the following content:

   ```
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: cpapp
     labels:
       app: cpapp
       version: v1
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: cpapp
         version: v1
     template:
       metadata:
         labels:
           app: cpapp
           version: v1
       spec:
         containers:
           - name: cpapp
             image: docker-registry.{{CLUSTER_DOMAIN}}/cpapp
             imagePullPolicy: Always
             ports:
               - containerPort: 4004
   ```

   The file contains a placeholder `{{CLUSTER_DOMAIN}}` that you need to replace with your cluster's domain, either in the file or when applying the file. You can find your cluster's domain, for example, from the URL of the Kyma Console. If your console URL is, for example, `https://console.c-abcd123.kyma.shoot.live.k8s-hana.ondemand.com/`, the cluster's domain is `c-abcd123.kyma.shoot.live.k8s-hana.ondemand.com`, just without the leading `console.`.

3. Apply the new configuration:

   ```
   kubectl apply -f deployment/deployment.yaml
   ```

   Or you can take the domain from the current `kubectl` configuration and replace it on deployment, like this:

   ```
   sed <deployment/deployment.yaml "s/{{CLUSTER_DOMAIN}}/$(kubectl config current-context)/" | kubectl apply -f -
   ```

4. Check the state of the deployment using:

   ```
   kubectl get deployments
   ```

   Initially, it looks like this:

   ```
   NAME    READY   UP-TO-DATE   AVAILABLE   AGE
   cpapp   0/1     1            0           5s
   ```

   If all goes well, it turns to the following (a way to wait for this without polling is shown after this list):

   ```
   NAME    READY   UP-TO-DATE   AVAILABLE   AGE
   cpapp   1/1     1            1           14m
   ```
5. Since you haven't exposed the app to the public internet, you can only access it with a tunnel. So let's create another tunnel:

   ```
   kubectl port-forward deployment/cpapp 4004:4004
   ```

6. Open the CAP service in the browser: http://localhost:4004

   Your service is now running on Kubernetes.

7. Press `Ctrl+C` to stop the tunnel.
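Instead of polling `kubectl get deployments`, you can also let `kubectl` block until the rollout is finished; this comes in handy for the redeployments later in this tutorial as well:

```
# Waits until the new replica set is ready, then returns
kubectl rollout status deployment/cpapp
```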
Expose CAP Application to the Public Internet
1. Create a new file `deployment/apirule.yaml` with the following content:

   ```
   apiVersion: v1
   kind: Service
   metadata:
     name: cpapp
     labels:
       app: cpapp
       service: cpapp
   spec:
     ports:
       - port: 4004
         name: http
     selector:
       app: cpapp
   ---
   apiVersion: gateway.kyma-project.io/v1alpha1
   kind: APIRule
   metadata:
     labels:
       app: cpapp
     name: cpapp
   spec:
     service:
       host: cpapp
       name: cpapp
       port: 4004
     gateway: kyma-gateway.kyma-system.svc.cluster.local
     rules:
       - path: /.*
         methods: ["GET", "PUT", "POST", "HEAD", "PATCH", "DELETE"]
         accessStrategies:
           - handler: noop
         mutators: []
   ```

2. Apply it with:

   ```
   kubectl apply -f deployment/apirule.yaml
   ```

3. Look up your CAP service URL:

   ```
   echo "https://cpapp.$(kubectl config current-context)"
   ```

   The console outputs your CAP service URL, for example `https://cpapp.example.kyma.live.k8s-hana.ondemand.com/`.

4. Check if you can access your service via this URL. You can also add entries to the Risks application.
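Again, a command-line smoke test is possible, reusing the fact that the current kubectl context name matches the cluster domain:

```
# Fetch the Risks entity set through the public APIRule endpoint
curl "https://cpapp.$(kubectl config current-context)/service/risk/Risks"
```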
Add SAP HANA Cloud
Your application now runs on Kyma and is accessible from the public internet. Still, it works like the local development version (`cds watch`) without real database persistence. In this step, you add support for SAP HANA. As already said, you need to provision it from Cloud Foundry and add the credentials manually.
To keep the latency between the CAP service and the HANA database low, it makes sense to provision the SAP HANA Cloud database in the same SAP Cloud Platform region as the Kyma cluster. But to try it out, you can also use an SAP HANA Cloud instance from your trial account.
Prepare CAP Application for SAP HANA Cloud
The `hdb` module needs to be added to your `package.json` to enable CAP to talk to an SAP HANA database.
1. Install the `hdb` module:

   ```
   npm install --save hdb
   ```

2. Open the `package.json` file.

3. Add the following snippet for the HANA configuration of the CAP application:

   ```
   {
     "name": "cpapp",
     ...
     "cds": {
       "requires": {
         "db": {
           "kind": "sql"
         }
       },
       "hana": {
         "deploy-format": "hdbtable"
       }
     }
     ...
   }
   ```

   With `requires.db.kind: sql`, you tell CAP to use SQLite in development and SAP HANA in productive mode. The setting `hana.deploy-format: hdbtable` is required for SAP HANA Cloud, since it supports only `hdbtable` and `hdbview` files for deployment.

4. You now need to tell the CAP service to run in productive mode. To do that, edit the `Dockerfile` and add the highlighted statement (`ENV NODE_ENV=production`):

   ```
   FROM node:12-slim

   ENV NODE_ENV=production

   WORKDIR /usr/src/app

   COPY gen/srv .
   RUN npm install

   COPY app app/
   RUN find app -name '*.cds' | xargs rm -f

   EXPOSE 4004
   USER node
   CMD [ "npm", "start" ]
   ```

5. Rebuild the CAP project and the docker image for production, and push it:

   ```
   cds build --production
   docker build -t 0.0.0.0:5000/cpapp .
   docker push 0.0.0.0:5000/cpapp
   ```

   The command `cds build` uses the `--production` argument to build the HANA artifacts. `npm` and `node` use the environment variable `NODE_ENV=production`. Without it, CAP falls back to "development mode" settings and tries to use SQLite.
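A quick way to confirm that the environment variable is baked into the new image is to override the default command and print it (just a sanity check):

```
# Prints "production" if the ENV statement in the Dockerfile took effect
docker run --rm 0.0.0.0:5000/cpapp node -e 'console.log(process.env.NODE_ENV)'
```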
Create and Deploy SAP HANA HDI Container
You use the `cds deploy` command to create an HDI container on Cloud Foundry and deploy the database schema to the container.
Make sure that you're logged in to a Cloud Foundry account where an SAP HANA Cloud instance is available and which has an entitlement for the `hana` service with the `hdi-shared` plan, or use an SAP Cloud Platform trial account. Then run the following steps:
1. Set the Cloud Foundry API endpoint:

   ```
   cf api <api-endpoint>
   ```

   You can find the API endpoint URL on the overview page of your subaccount.

2. Log on to your Cloud Foundry account:

   ```
   cf login
   ```

3. Run the following line to create an HDI container:

   ```
   cds deploy --to hana:cpapp-kyma-hdi
   ```

   The suffix `:cpapp-kyma-hdi` tells `cds deploy` to create an HDI container with the name `cpapp-kyma-hdi`. It also creates a service key with the name `cpapp-kyma-hdi-key` that you use to access the database in the next section. Then it deploys the database tables and the test content. It should end with something like:

   ```
   Finalizing...
   Finalizing... ok (0s 96ms)
   Make succeeded (0 warnings): 14 files deployed (effective 22), 0 files undeployed (effective 0), 0 dependent files redeployed
   Making... ok (1s 597ms)
   Enabling table replication for the container schema "C5DF44CB9C08482D821F5BC3BE344FCF"...
   Enabling table replication for the container schema "C5DF44CB9C08482D821F5BC3BE344FCF"... ok (0s 63ms)
   Starting make in the container "C5DF44CB9C08482D821F5BC3BE344FCF" with 14 files to deploy, 0 files to undeploy... ok (1s 756ms)
   Deploying to the container "C5DF44CB9C08482D821F5BC3BE344FCF"... ok (2s 211ms)
   No default-access-role handling needed; global role "C5DF44CB9C08482D821F5BC3BE344FCF::access_role" will not be adapted
   Unlocking the container "C5DF44CB9C08482D821F5BC3BE344FCF"...
   Unlocking the container "C5DF44CB9C08482D821F5BC3BE344FCF"... ok (0s 0ms)
   Deployment to container C5DF44CB9C08482D821F5BC3BE344FCF done [Deployment ID: none]. (4s 499ms)
   Application can be stopped.
   ```
If this success message is missing, there's probably a problem with the HDI deployer on your operating system. It can be worked around by putting the HDI deployer in a docker container:
Workaround: Use HDI Deployer in Docker Container
Create a file `Dockerfile.hdi-deploy` with the following content:

```
FROM node:12-slim AS build

ENV NODE_ENV=production

WORKDIR /usr/src/app

RUN apt-get update
RUN apt-get install -y openssl python make g++

COPY gen/db/package.json .
RUN npm install

COPY gen/db .

CMD [ "npm", "start", "--", "--exit" ]
```
Execute the following commands and check if the output is right this time:

```
docker build -t cpapp-hdi-deployer -f Dockerfile.hdi-deploy .
docker run --env VCAP_SERVICES='{"hana":[{"credentials": '"$(cf service-key cpapp-kyma-hdi cpapp-kyma-hdi-key | sed 1d )"', "name": "hana","label":"hana","plan":"hdi-shared","tags":["hana"]}]}' -t cpapp-hdi-deployer
```
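Either way, you can confirm on Cloud Foundry what `cds deploy` has created, using standard `cf` commands:

```
# Show the HDI container instance (service: hana, plan: hdi-shared)
cf service cpapp-kyma-hdi

# List its service keys; cpapp-kyma-hdi-key should show up
cf service-keys cpapp-kyma-hdi
```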
Add SAP HANA HDI Container Credentials
You need to somehow inject the HANA credentials into the CAP application. On Cloud Foundry, that's done using an environment variable called `VCAP_SERVICES` that contains the credentials of all bound services. Kubernetes takes a slightly different approach: it uses secrets that can be injected into applications as environment variables, but as an individual environment variable for each value. Luckily, CAP supports both.
On Kyma, the service credentials for HANA would look like this:

```
driver=com.sap.db.jdbc.Driver
hdi_password=dsdssdfdfdsfds...
hdi_user=DE44345...
host=1d468818-87ad-4f9a-b762-a642b9070869.hana.eu10.hanacloud.ondemand.com
password=Vt1jo...
port=443
schema=DE6922EF2F3449E984E2E794456B7CBE
url=jdbc:sap://1d468818-87ad-4f9a-b762-a642b9070869.hana.eu10.hanacloud.ondemand.com:443?encrypt=true&validateCertificate=true&currentschema=DE6922EF2F3449E984E2E794456B7CBE
user=DE6922EF2F3449E984E2E794456B7CBE_0BR49NOHL4VW9DC8NVZFXRN3O_RT
```
However, since you need to take the HANA credentials from Cloud Foundry, it's easier to stick to the `VCAP_SERVICES` approach for now.
So let's have a look at the credentials that have been created by `cds deploy`:

```
cf service-key cpapp-kyma-hdi cpapp-kyma-hdi-key
```

The output looks like this:
```
Getting key cpapp-kyma-hdi-key for service instance cpapp-kyma-hdi as MySelf...

{
 "certificate": "-----BEGIN CERTIFICATE-----\nMIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh\nMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\nd3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD\nQTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT\nMRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j\nb20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG\n9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB\nCSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97\nnh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt\n43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P\nT19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4\ngdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO\nBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR\nTLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw\nDQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr\nhMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg\n06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF\nPnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls\nYSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk\nCAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4=\n-----END CERTIFICATE-----",
 "driver": "com.sap.db.jdbc.Driver",
 "hdi_password": "Ee8lLp3O7aIPv51kOQ38vV6lCx3Tvm6WpGL9i7PRgMZlcuJygN9IPLW0GBVofz7GKsU6m3QNbADJvmRtVEp_ceEAo0iFT8Y6QtZojKOrMONeGTNcelnbUJw.uVb-3c.R",
 "hdi_user": "C5DF44CB9C08482D821F5BC3BE344FCF_7Y4VQVS7NK0VAVT1O6B01Q5XZ_DT",
 "host": "1d468818-87ad-4f9a-b762-a642b9070869.hana.eu10.hanacloud.ondemand.com",
 "password": "Jj4eS040K.2Z.88Y73c7MbANyvSCVhv0C7EYl6a71oW_3BmoRH6SN91_BAw3Y4cDbli.j2ebCnho0jCyYTghPnL9x8y59mb8CTNv0LvY2CvehYiw9ky7zVdRZJcOPY1s",
 "port": "443",
 "schema": "C5DF44CB9C08482D821F5BC3BE344FCF",
 "url": "jdbc:sap://1d468818-87ad-4f9a-b762-a642b9070869.hana.eu10.hanacloud.ondemand.com:443?encrypt=true\u0026validateCertificate=true\u0026currentschema=C5DF44CB9C08482D821F5BC3BE344FCF",
 "user": "C5DF44CB9C08482D821F5BC3BE344FCF_7Y4VQVS7NK0VAVT1O6B01Q5XZ_RT"
}
```
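This JSON is exactly what the CAP runtime will later read from the `VCAP_SERVICES` environment variable. To illustrate the shape, here's a purely hypothetical check in a shell where the variable is set (the inner `credentials` object abbreviated):

```
# Illustration only: how the credentials are addressed inside VCAP_SERVICES
export VCAP_SERVICES='{"hana":[{"credentials":{"host":"...","user":"...","password":"..."},"label":"hana","tags":["hana"]}]}'
node -e 'const s = JSON.parse(process.env.VCAP_SERVICES); console.log(s.hana[0].credentials.host)'
```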
Create a Secret for SAP HANA HDI Container Credentials
As a first step, you need to upload the HANA HDI container credentials from the CF service key to a Kubernetes secret.
You build the file `gen/hdi-secret.yaml` with the next steps.
1. Create the file `gen/hdi-secret.yaml` with the following content:

   ```
   apiVersion: v1
   kind: Secret
   metadata:
     name: cpapp-kyma-hdi-secret
   type: opaque
   stringData:
     VCAP_SERVICES: >
       {
         "hana": [
           {
             "binding_name": null,
             "credentials": {{CREDENTIALS}},
             "instance_name": "hana",
             "label": "hana",
             "name": "hana",
             "plan": "hdi-shared",
             "provider": null,
             "syslog_drain_url": null,
             "tags": [ "hana", "database", "relational" ],
             "volume_mounts": []
           }
         ]
       }
   ```

2. Replace the `{{CREDENTIALS}}` placeholder.

   - Option A:
     - Replace `{{CREDENTIALS}}` with the JSON output of `cf service-key cpapp-kyma-hdi cpapp-kyma-hdi-key` (without the initial line).
     - Create the secret on Kubernetes:

       ```
       kubectl apply -f gen/hdi-secret.yaml
       ```

   - Option B:

     ```
     node -e 'console.log(process.argv[1].replace("{{CREDENTIALS}}", process.argv[2]))' "$(cat gen/hdi-secret.yaml)" "$(cf service-key cpapp-kyma-hdi cpapp-kyma-hdi-key | sed 1d | sed 's/^/ /')" | kubectl apply -f -
     ```
3. Look at your uploaded secret:

   ```
   kubectl describe secret cpapp-kyma-hdi-secret
   ```

   It should be similar to the following output:

   ```
   Name:         cpapp-kyma-hdi-secret
   Namespace:    docker-registry
   Labels:       <none>
   Annotations:
   Type:         opaque

   Data
   ====
   VCAP_SERVICES:  2602 bytes
   ```
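`kubectl describe` deliberately hides the secret's value. If you want to verify that the credentials really made it in, you can decode the data field manually:

```
# Print the stored VCAP_SERVICES JSON (base64-decoded);
# on macOS, use "base64 -D" if "--decode" is not accepted
kubectl get secret cpapp-kyma-hdi-secret -o jsonpath='{.data.VCAP_SERVICES}' | base64 --decode
```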
Connect CAP Application to the SAP HANA HDI Container
Now you need to inject the secret's value as an environment variable into your CAP application.
1. Add the highlighted lines (the `envFrom` section) to your `deployment/deployment.yaml` file:

   ```
   ...
   spec:
     containers:
       - name: cpapp
         image: docker-registry.{{CLUSTER_DOMAIN}}/cpapp
         imagePullPolicy: Always
         ports:
           - containerPort: 4004
         envFrom:
           - secretRef:
               name: cpapp-kyma-hdi-secret
   ```

   This adds all name-value pairs of the secret, currently only `VCAP_SERVICES`, as environment variables to the container of the deployment.

2. Update the Kubernetes cluster with the deployment file.

   - Option A (if you replaced `{{CLUSTER_DOMAIN}}` in the `deployment/deployment.yaml` file):

     ```
     kubectl apply -f deployment/deployment.yaml
     ```

   - Option B:

     ```
     sed <deployment/deployment.yaml "s/{{CLUSTER_DOMAIN}}/$(kubectl config current-context)/" | kubectl apply -f -
     ```

   During the deployment, you temporarily see two pods. The old pod is deleted after the new one has been launched.

3. Check the pods:

   ```
   kubectl get pods
   ```

   Output:

   ```
   NAME                     READY   STATUS     RESTARTS   AGE
   cpapp-566fcb5f9b-8dfjb   2/2     Running    0          26m
   cpapp-66b5cb4876-hx5l6   0/2     Init:0/1   0          2s
   ```

   Rerun the command `kubectl get pods` until there's only one `Running` pod for the application.

4. Get the URL of your application:

   ```
   echo "https://cpapp.$(kubectl config current-context)"
   ```

5. Open the URL of your application. Now, you can create some entries in the Risks application, which are stored in the SAP HANA database.
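If the application can't reach the database, one thing worth checking is whether the secret actually arrived in the pod's environment. This is a diagnostic sketch; `-c cpapp` selects the app container next to the Istio sidecar:

```
# Prints the variable name if VCAP_SERVICES is set in the container
# (grep -o avoids dumping the credentials to the terminal)
kubectl exec $(kubectl get pods -l app=cpapp -o jsonpath='{.items[0].metadata.name}') -c cpapp -- env | grep -o '^VCAP_SERVICES'
```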
Summary
In this tutorial, you've learned how to deploy a CAP application on Kyma. When the SAP HANA Cloud service becomes available for Kyma (disclaimer: this isn't an SAP product commitment), it will be much easier.
You can find the final code in the kyma/app branch.
Troubleshooting
Viewing the Application's Log
You can use the following command to view the latest logs of your app:

```
kubectl logs $(kubectl get pods -l app=cpapp -o jsonpath='{.items[0].metadata.name}') cpapp
```

The log level of the CAP application can be increased by adding the environment variable `DEBUG` to the `deployment/deployment.yaml` file and applying the file again with `kubectl`:

```
env:
  - name: DEBUG
    value: "y"
```

Make sure that `env` has the same indentation as `envFrom`.
Execute Commands in the Application's Container
With the following command, you can "ssh" into your container and start a bash shell:

```
kubectl exec $(kubectl get pods -l app=cpapp -o jsonpath='{.items[0].metadata.name}') -t -i /bin/bash
```
Teardown
If you want to quickly delete all artifacts created in this tutorial, execute the following commands:

```
cf delete-service-key cpapp-kyma-hdi cpapp-kyma-hdi-key -f
cf delete-service cpapp-kyma-hdi -f

for i in deployment/*.yaml; do
  kubectl delete -f $i
done

kubectl delete secret cpapp-kyma-hdi-secret

helm uninstall docker-registry
```