Installation

The deployment of the Orchestrator involves multiple independent components, each with its own installation process. On an OpenShift cluster, the Red Hat Catalog provides an operator that can handle the installation for you. This installation process is modular: the CRD exposes various flags that let you control which components to install. For vanilla Kubernetes, a Helm chart installs the Orchestrator components.

The Orchestrator deployment encompasses the installation of the engine for serving serverless workflows and Backstage, integrated with orchestrator plugins for workflow invocation, monitoring, and control.

In addition to the Orchestrator deployment, we offer several workflows (linked below) that can be deployed using their respective installation methods.

1 - RBAC

The RBAC policies for the RHDH Orchestrator plugins v1.5 are listed here.

2 - Disconnected Environment

To install the Orchestrator and its required components in a disconnected environment, you must mirror the required images and NPM packages. Please ensure the images are added using either an ImageDigestMirrorSet or an ImageTagMirrorSet, depending on whether they are referenced by digest or by tag.
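
For example, the digest-referenced images below could be mirrored with an ImageDigestMirrorSet similar to the following sketch; the mirror registry hostname and the resource name are placeholders for your environment, and tag-referenced images (such as the :1.35.0 entries) need an analogous ImageTagMirrorSet:

oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: orchestrator-image-mirrors
spec:
  imageDigestMirrors:
    - source: registry.redhat.io/openshift-serverless-1
      mirrors:
        - <your-mirror-registry>/openshift-serverless-1
EOF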

Images for a disconnected environment

The following images need to be added to the image registry:

Recommendation:
When fetching the list of required images, ensure that you are using the latest version of the bundle operator when appropriate. This helps avoid missing or outdated image references.

RHDH Operator:

TBD

OpenShift Serverless Operator:

registry.access.redhat.com/ubi8/nodejs-20-minimal@sha256:a2a7e399aaf09a48c28f40820da16709b62aee6f2bc703116b9345fab5830861
registry.access.redhat.com/ubi8/openjdk-21@sha256:441897a1f691c7d4b3a67bb3e0fea83e18352214264cb383fd057bbbd5ed863c
registry.access.redhat.com/ubi8/python-39@sha256:27e795fd6b1b77de70d1dc73a65e4c790650748a9cfda138fdbd194b3d6eea3d
registry.redhat.io/openshift-serverless-1/kn-backstage-plugins-eventmesh-rhel8@sha256:77665d8683230256122e60c3ec0496e80543675f39944c70415266ee5cffd080
registry.redhat.io/openshift-serverless-1/kn-client-cli-artifacts-rhel8@sha256:f983be49897be59dba1275f36bdd83f648663ee904e4f242599e9269fc354fd7
registry.redhat.io/openshift-serverless-1/kn-client-kn-rhel8@sha256:d21cc7e094aa46ba7f6ea717a3d7927da489024a46a6c1224c0b3c5834dcb7a6
registry.redhat.io/openshift-serverless-1/kn-ekb-dispatcher-rhel8@sha256:9cab1c37aae66e949a5d65614258394f566f2066dd20b5de5a8ebc3a4dd17e4c
registry.redhat.io/openshift-serverless-1/kn-ekb-kafka-controller-rhel8@sha256:e7dbf060ee40b252f884283d80fe63655ded5229e821f7af9e940582e969fc01
registry.redhat.io/openshift-serverless-1/kn-ekb-post-install-rhel8@sha256:097e7891a85779880b3e64edb2cb1579f17bc902a17d2aa0c1ef91aeb088f5f1
registry.redhat.io/openshift-serverless-1/kn-ekb-receiver-rhel8@sha256:207a1c3d7bf18a56ab8fd69255beeac6581a97576665e8b79f93df74da911285
registry.redhat.io/openshift-serverless-1/kn-ekb-webhook-kafka-rhel8@sha256:cafb9dcc4059b3bc740180cd8fb171bdad44b4d72365708d31f86327a29b9ec5
registry.redhat.io/openshift-serverless-1/kn-eventing-apiserver-receive-adapter-rhel8@sha256:ec3c038d2baf7ff915a2c5ee90c41fb065a9310ccee473f0a39d55de632293e3
registry.redhat.io/openshift-serverless-1/kn-eventing-channel-controller-rhel8@sha256:2c2912c0ba2499b0ba193fcc33360145696f6cfe9bf576afc1eac1180f50b08d
registry.redhat.io/openshift-serverless-1/kn-eventing-channel-dispatcher-rhel8@sha256:4d7ecfae62161eff86b02d1285ca9896983727ec318b0d29f0b749c4eba31226
registry.redhat.io/openshift-serverless-1/kn-eventing-controller-rhel8@sha256:1b4856760983e14f50028ab3d361bb6cd0120f0be6c76b586f2b42f5507c3f63
registry.redhat.io/openshift-serverless-1/kn-eventing-filter-rhel8@sha256:cec64e69a3a1c10bc2b48b06a5dd6a0ddd8b993840bbf1ac7881d79fc854bc91
registry.redhat.io/openshift-serverless-1/kn-eventing-ingress-rhel8@sha256:7e6049da45969fa3f766d2a542960b170097b2087cad15f5bba7345d8cdc0dad
registry.redhat.io/openshift-serverless-1/kn-eventing-istio-controller-rhel8@sha256:d14fd8abf4e8640dbde210f567dd36866fe5f0f814a768a181edcb56a8e7f35b
registry.redhat.io/openshift-serverless-1/kn-eventing-jobsink-rhel8@sha256:8ecea4b6af28fe8c7f8bfcc433c007555deb8b7def7c326867b04833c524565d
registry.redhat.io/openshift-serverless-1/kn-eventing-migrate-rhel8@sha256:e408db39c541a46ebf7ff1162fe6f81f6df1fe4eeed4461165d4cb1979c63d27
registry.redhat.io/openshift-serverless-1/kn-eventing-mtchannel-broker-rhel8@sha256:2685917be6a6843c0d82bddf19f9368c39c107dae1fd1d4cb2e69d1aa87588ec
registry.redhat.io/openshift-serverless-1/kn-eventing-mtping-rhel8@sha256:c5a5b6bc4fdb861133fd106f324cc4a904c6c6a32cabc6203efc578d8f46bbf4
registry.redhat.io/openshift-serverless-1/kn-eventing-webhook-rhel8@sha256:efe2d60e777918df9271f5512e4722f8cf667fe1a59ee937e093224f66bc8cbf
registry.redhat.io/openshift-serverless-1/kn-plugin-event-sender-rhel8@sha256:08f0b4151edd6d777e2944c6364612a5599e5a775e5150a76676a45f753c2e23
registry.redhat.io/openshift-serverless-1/kn-plugin-func-func-util-rhel8@sha256:01e0ab5c8203ef0ca39b4e9df8fd1a8c2769ef84fce7fecefc8e8858315e71ca
registry.redhat.io/openshift-serverless-1/kn-serving-activator-rhel8@sha256:3892eadbaa6aba6d79d6fe2a88662c851650f7c7be81797b2fc91d0593a763d1
registry.redhat.io/openshift-serverless-1/kn-serving-autoscaler-hpa-rhel8@sha256:6b30d3f6d77a6e74d4df5a9d2c1b057cdc7ebbbf810213bc0a97590e741bae1c
registry.redhat.io/openshift-serverless-1/kn-serving-autoscaler-rhel8@sha256:00777fa53883f25061ebe171b0d47025d27acd39582a619565e9167288321952
registry.redhat.io/openshift-serverless-1/kn-serving-controller-rhel8@sha256:41a21fdc683183422ebb29707d81eca96d7ca119d01f369b9defbaea94c09939
registry.redhat.io/openshift-serverless-1/kn-serving-queue-rhel8@sha256:bd464d68e283ce6c48ae904010991b491b738ada5a419f044bf71fd48326005b
registry.redhat.io/openshift-serverless-1/kn-serving-storage-version-migration-rhel8@sha256:de87597265ee5ac26db4458a251d00a5ec1b5cd0bfff4854284070fdadddb7ab
registry.redhat.io/openshift-serverless-1/kn-serving-webhook-rhel8@sha256:eb33e874b5a7c051db91cd6a63223aabd987988558ad34b34477bee592ceb3ab
registry.redhat.io/openshift-serverless-1/net-istio-controller-rhel8@sha256:ec77d44271ba3d86af6cbbeb70f20a720d30d1b75e93ac5e1024790448edf1dd
registry.redhat.io/openshift-serverless-1/net-istio-webhook-rhel8@sha256:07074f52b5fb1f2eb302854dce1ed5b81c665ed843f9453fc35a5ebcb1a36696
registry.redhat.io/openshift-serverless-1/net-kourier-kourier-rhel8@sha256:e5f1111791ffff7978fe175f3e3af61a431c08d8eea4457363c66d66596364d8
registry.redhat.io/openshift-serverless-1/serverless-ingress-rhel8@sha256:3d1ab23c9ce119144536dd9a9b80c12bf2bb8e5f308d9c9c6c5b48c41f4aa89e
registry.redhat.io/openshift-serverless-1/serverless-kn-operator-rhel8@sha256:78cb34062730b3926a465f0665475f0172a683d7204423ec89d32289f5ee329d
registry.redhat.io/openshift-serverless-1/serverless-must-gather-rhel8@sha256:119fbc185f167f3866dbb5b135efc4ee787728c2e47dd1d2d66b76dc5c43609e
registry.redhat.io/openshift-serverless-1/serverless-openshift-kn-rhel8-operator@sha256:0f763b740cc1b614cf354c40f3dc17050e849b4cbf3a35cdb0537c2897d44c95
registry.redhat.io/openshift-service-mesh/proxyv2-rhel8@sha256:b30d60cd458133430d4c92bf84911e03cecd02f60e88a58d1c6c003543cf833a
registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:3fcd8e2bf0bcb8ff8c93a87af2c59a3bcae7be8792f9d3236c9b5bbd9b6db3b2
registry.redhat.io/rhel8/buildah@sha256:3d505d9c0f5d4cd5a4ec03b8d038656c6cdbdf5191e00ce6388f7e0e4d2f1b74
registry.redhat.io/source-to-image/source-to-image-rhel8@sha256:6a6025914296a62fdf2092c3a40011bd9b966a6806b094d51eec5e1bd5026ef4

The list of images was obtained by:

podman run --rm --entrypoint bash registry.redhat.io/openshift-serverless-1/serverless-operator-bundle:1.35.0  -c "cat /manifests/serverless-operator.clusterserviceversion.yaml" | yq '.spec.relatedImages[].image' | sort | uniq

OpenShift Serverless Logic Operator:

registry.redhat.io/openshift-serverless-1/logic-operator-bundle@sha256:a1d1995b2b178a1242d41f1e8df4382d14317623ac05b91bf6be971f0ac5a227
registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8:1.35.0
registry.redhat.io/openshift-serverless-1/logic-jobs-service-ephemeral-rhel8:1.35.0
registry.redhat.io/openshift-serverless-1/logic-data-index-postgresql-rhel8:1.35.0
registry.redhat.io/openshift-serverless-1/logic-data-index-ephemeral-rhel8:1.35.0
registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.35.0
registry.redhat.io/openshift-serverless-1/logic-swf-devmode-rhel8:1.35.0
registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:4564ca3dc5bac80d6faddaf94c817fbbc270698a9399d8a21ee1005d85ceda56
registry.redhat.io/openshift-serverless-1/logic-rhel8-operator@sha256:203043ca27819f7d039fd361d0816d5a16d6b860ff19d737b07968ddfba3d2cd
registry.redhat.io/openshift4/ose-cli:latest

gcr.io/kaniko-project/warmer:v1.9.0
gcr.io/kaniko-project/executor:v1.9.0

The list of images was obtained by:

podman create --name temp-container registry.redhat.io/openshift-serverless-1/logic-operator-bundle:1.35.0-5
podman cp temp-container:/manifests ./local-manifests-osl
podman rm temp-container
yq -r '.data."controllers_cfg.yaml" | from_yaml | .. | select(tag == "!!str") | select(test("^.*\\/.*:.*$"))' ./local-manifests-osl/logic-operator-rhel8-controllers-config_v1_configmap.yaml
yq -r '.. | select(has("image")) | .image' ./local-manifests-osl/logic-operator-rhel8.clusterserviceversion.yaml

Orchestrator Operator:

TBD

Note:
If you encounter issues pulling images due to an invalid GPG signature, consider updating the /etc/containers/policy.json file to reference the appropriate beta GPG key.
For example, you can use:
/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
This may be required when working with pre-release or beta images signed with a different key than the default.
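
For illustration, a policy.json requirement of the following shape (scoped here to registry.redhat.io as an example, not a recommended policy) accepts images signed with the beta key:

"transports": {
  "docker": {
    "registry.redhat.io": [
      {
        "type": "signedBy",
        "keyType": "GPGKeys",
        "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta"
      }
    ]
  }
}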

NPM packages for a disconnected environment

The packages required for the Orchestrator can be downloaded as tgz files from:

or by using the NPM packages from https://npm.registry.redhat.com, for example:

  npm pack "@redhat/backstage-plugin-orchestrator@1.5.1" --registry=https://npm.registry.redhat.com
  npm pack "@redhat/backstage-plugin-orchestrator-backend-dynamic@1.5.1" --registry=https://npm.registry.redhat.com

3 - Orchestrator CRD Versions

The following table shows the list of supported Orchestrator Operator versions with their compatible CRD version.

Orchestrator Operator Version | CRD Version
1.3                           | v1alpha1
1.4                           | v1alpha2
1.5                           | v1alpha3

3.1 - CRD Version v1alpha3

TBD

3.2 - CRD Version v1alpha1

The v1alpha1 version of the Orchestrator CRD is supported only by Orchestrator 1.3. It is deprecated and not compatible with later Orchestrator versions.

The following Orchestrator CR is a sample of the v1alpha1 API version.

apiVersion: rhdh.redhat.com/v1alpha1
kind: Orchestrator
metadata:
  name: orchestrator-sample
spec:
  sonataFlowOperator:
    isReleaseCandidate: false # Indicates RC builds should be used by the chart to install Sonataflow
    enabled: true # whether the operator should be deployed by the chart
    subscription:
      namespace: openshift-serverless-logic # namespace where the operator should be deployed
      channel: alpha # channel of an operator package to subscribe to
      installPlanApproval: Automatic # whether the update should be installed automatically
      name: logic-operator-rhel8 # name of the operator package
      sourceName: redhat-operators # name of the catalog source
      startingCSV: logic-operator-rhel8.v1.34.0 # The initial version of the operator
  serverlessOperator:
    enabled: true # whether the operator should be deployed by the chart
    subscription:
      namespace: openshift-serverless # namespace where the operator should be deployed
      channel: stable # channel of an operator package to subscribe to
      installPlanApproval: Automatic # whether the update should be installed automatically
      name: serverless-operator # name of the operator package
      sourceName: redhat-operators # name of the catalog source
  rhdhOperator:
    isReleaseCandidate: false # Indicates RC builds should be used by the chart to install RHDH
    enabled: true # whether the operator should be deployed by the chart
    enableGuestProvider: false # whether to enable guest provider
    secretRef:
      name: backstage-backend-auth-secret # name of the secret that contains the credentials for the plugin to establish a communication channel with the Kubernetes API, ArgoCD, GitHub servers and SMTP mail server.
      backstage:
        backendSecret: BACKEND_SECRET # Key in the secret with name defined in the 'name' field that contains the value of the Backstage backend secret. Defaults to 'BACKEND_SECRET'. It's required.
      github: #GitHub specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with GitHub.
        token: GITHUB_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the authentication token as expected by GitHub. Required for importing resource to the catalog, launching software templates and more. Defaults to 'GITHUB_TOKEN', empty for not available.
        clientId: GITHUB_CLIENT_ID # Key in the secret with name defined in the 'name' field that contains the value of the client ID that you generated on GitHub, for GitHub authentication (requires GitHub App). Defaults to 'GITHUB_CLIENT_ID', empty for not available.
        clientSecret: GITHUB_CLIENT_SECRET # Key in the secret with name defined in the 'name' field that contains the value of the client secret tied to the generated client ID. Defaults to 'GITHUB_CLIENT_SECRET', empty for not available.
      k8s: # Kubernetes specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with the Kubernetes API Server.
        clusterToken: K8S_CLUSTER_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the Kubernetes API bearer token used for authentication. Defaults to 'K8S_CLUSTER_TOKEN', empty for not available.
        clusterUrl: K8S_CLUSTER_URL # Key in the secret with name defined in the 'name' field that contains the value of the API URL of the kubernetes cluster. Defaults to 'K8S_CLUSTER_URL', empty for not available.
      argocd: # ArgoCD specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with ArgoCD. Note that ArgoCD must be deployed beforehand and the argocd.enabled field must be set to true as well.
        url: ARGOCD_URL # Key in the secret with name defined in the 'name' field that contains the value of the URL of the ArgoCD API server. Defaults to 'ARGOCD_URL', empty for not available.
        username: ARGOCD_USERNAME # Key in the secret with name defined in the 'name' field that contains the value of the username to login to ArgoCD. Defaults to 'ARGOCD_USERNAME', empty for not available.
        password: ARGOCD_PASSWORD # Key in the secret with name  defined in the 'name' field that contains the value of the password to authenticate to ArgoCD. Defaults to 'ARGOCD_PASSWORD', empty for not available.
      notificationsEmail:
        hostname: NOTIFICATIONS_EMAIL_HOSTNAME # Key in the secret with name defined in the 'name' field that contains the value of the hostname of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_HOSTNAME', empty for not available.
        username: NOTIFICATIONS_EMAIL_USERNAME # Key in the secret with name defined in the 'name' field that contains the value of the username of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_USERNAME', empty for not available.
        password: NOTIFICATIONS_EMAIL_PASSWORD # Key in the secret with name defined in the 'name' field that contains the value of the password of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_PASSWORD', empty for not available.
    subscription:
      namespace: rhdh-operator # namespace where the operator should be deployed
      channel: fast-1.3 # channel of an operator package to subscribe to
      installPlanApproval: Automatic # whether the update should be installed automatically
      name: rhdh # name of the operator package
      source: redhat-operators # name of the catalog source
      startingCSV: "" # The initial version of the operator
      targetNamespace: rhdh-operator # the target namespace for the backstage CR in which RHDH instance is created
  rhdhPlugins: # RHDH plugins required for the Orchestrator
    npmRegistry: "https://npm.registry.redhat.com" # The NPM registry is already defined in the container, but sometimes it needs to be modified to use different versions of the plugins, for example staging (https://npm.stage.registry.redhat.com) or development registries
    scope: "@redhat"
    orchestrator:
      package: "backstage-plugin-orchestrator@1.3.0"
      integrity: sha512-A/twx1SOOGDQjglLzOxQikKO0XOdPP1jh2lj9Y/92bLox8mT+eaZpub8YLwR2mb7LsUIUImg+U6VnKwoAV9ATA==
    orchestratorBackend:
      package: "backstage-plugin-orchestrator-backend-dynamic@1.3.0"
      integrity: sha512-Th5vmwyhHyhURwQo28++PPHTvxGSFScSHPJyofIdE5gTAb87ncyfyBkipSDq7fwj4L8CQTXa4YP6A2EkHW1npg==
    notifications:
      package: "plugin-notifications-dynamic@1.3.0"
      integrity: sha512-iYLgIy0YdP/CdTLol07Fncmo9n0J8PdIZseiwAyUt9RFJzKIXmoi2CpQLPKMx36lEgPYUlT0rFO81Ie2CSis4Q==
    notificationsBackend:
      package: "plugin-notifications-backend-dynamic@1.3.0"
      integrity: sha512-Pw9Op/Q+1MctmLiVvQ3M+89tkbWkw8Lw0VfcwyGSMiHpK/Xql1TrSFtThtLlymRgeCSBgxHYhh3MUusNQX08VA==
    signals:
      package: "plugin-signals-dynamic@1.3.0"
      integrity: sha512-+E8XeTXcG5oy+aNImGj/MY0dvEkP7XAsu4xuZjmAqOHyVfiIi0jnP/QDz8XMbD1IjCimbr/DMUZdjmzQiD0hSQ==
    signalsBackend:
      package: "plugin-signals-backend-dynamic@1.3.0"
      integrity: sha512-5Bl6C+idPXtquQxMZW+bjRMcOfFYcKxcGZZFv2ITkPVeY2zzxQnAz3vYHnbvKRSwlQxjIyRXY6YgITGHXWT0nw==
    notificationsEmail:
      enabled: false # whether to install the notifications email plugin. requires setting of hostname and credentials in backstage secret to enable. See value backstage-backend-auth-secret. See plugin configuration at https://github.com/backstage/backstage/blob/master/plugins/notifications-backend-module-email/config.d.ts
      package: "plugin-notifications-backend-module-email-dynamic@1.3.0"
      integrity: sha512-sm7yRoO6Nkk3B7+AWKb10maIrb2YBNSiqQaWmFDVg2G9cbDoWr9wigqqeQ32+b6o2FenfNWg8xKY6PPyZGh8BA==
      port: 587 # SMTP server port
      sender: "" # the email sender address
      replyTo: "" # reply-to address
  postgres:
    serviceName: "sonataflow-psql-postgresql" # The name of the Postgres DB service to be used by platform services. Cannot be empty.
    serviceNamespace: "sonataflow-infra" # The namespace of the Postgres DB service to be used by platform services.
    authSecret:
      name: "sonataflow-psql-postgresql" # name of existing secret to use for PostgreSQL credentials.
      userKey: postgres-username # name of key in existing secret to use for PostgreSQL credentials.
      passwordKey: postgres-password # name of key in existing secret to use for PostgreSQL credentials.
    database: sonataflow # existing database instance used by data index and job service
  orchestrator:
    namespace: "sonataflow-infra" # Namespace where sonataflow's workflows run. The value is captured when running the setup.sh script and stored as a label in the selected namespace. User can override the value by populating this field. Defaults to `sonataflow-infra`.
    sonataflowPlatform:
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "1Gi"
          cpu: "500m"
  tekton:
    enabled: false # whether to create the Tekton pipeline resources
  argocd:
    enabled: false # whether to install the ArgoCD plugin and create the orchestrator AppProject
    namespace: "" # Defines the namespace where the orchestrator's instance of ArgoCD is deployed. The value is captured when running setup.sh script and stored as a label in the selected namespace. User can override the value by populating this field. Defaults to `orchestrator-gitops` in the setup.sh script.
  networkPolicy:
    rhdhNamespace: "rhdh-operator" # Namespace of existing RHDH instance

3.3 - CRD Version v1alpha2

The v1alpha2 version of the Orchestrator CRD was introduced in Orchestrator 1.4 and is currently supported.

New Fields

In OSL 1.35, these new features are introduced:

  1. Support for Workflow Monitoring
  2. Support for Knative Eventing

Hence, the CRD schema was extended to allow the user to configure these features:

  • orchestrator.sonataflowPlatform.monitoring.enabled
  • orchestrator.sonataflowPlatform.eventing.broker.name
  • orchestrator.sonataflowPlatform.eventing.broker.namespace

Deleted Fields

In RHDH 1.4, the notifications and signals plugins are part of the RHDH image and no longer need to be configured by the user.

Hence, these plugin fields are removed from the CRD schema:

  • rhdhPlugins.notifications.package
  • rhdhPlugins.notifications.integrity
  • rhdhPlugins.notificationsBackend.package
  • rhdhPlugins.notificationsBackend.integrity
  • rhdhPlugins.signals.package
  • rhdhPlugins.signals.integrity
  • rhdhPlugins.signalsBackend.package
  • rhdhPlugins.signalsBackend.integrity
  • rhdhPlugins.notificationsEmail.package
  • rhdhPlugins.notificationsEmail.integrity

Renamed Fields

For consistency of the subscription configuration in the CRD, the following fields were renamed from sourceName to source:

  • sonataFlowOperator.subscription.source
  • serverlessOperator.subscription.source

The following Orchestrator CR is a sample of the v1alpha2 API version.

apiVersion: rhdh.redhat.com/v1alpha2
kind: Orchestrator
metadata:
  name: orchestrator-sample
spec:
  sonataFlowOperator:
    isReleaseCandidate: false # Indicates RC builds should be used by the chart to install Sonataflow
    enabled: true # whether the operator should be deployed by the chart
    subscription:
      namespace: openshift-serverless-logic # namespace where the operator should be deployed
      channel: alpha # channel of an operator package to subscribe to
      installPlanApproval: Automatic # whether the update should be installed automatically
      name: logic-operator-rhel8 # name of the operator package
      source: redhat-operators # name of the catalog source
      startingCSV: logic-operator-rhel8.v1.35.0 # The initial version of the operator
  serverlessOperator:
    enabled: true # whether the operator should be deployed by the chart
    subscription:
      namespace: openshift-serverless # namespace where the operator should be deployed
      channel: stable # channel of an operator package to subscribe to
      installPlanApproval: Automatic # whether the update should be installed automatically
      name: serverless-operator # name of the operator package
      source: redhat-operators # name of the catalog source
      startingCSV: serverless-operator.v1.35.0 # The initial version of the operator
  rhdhOperator:
    isReleaseCandidate: false # Indicates RC builds should be used by the chart to install RHDH
    enabled: true # whether the operator should be deployed by the chart
    enableGuestProvider: true # whether to enable guest provider
    secretRef:
      name: backstage-backend-auth-secret # name of the secret that contains the credentials for the plugin to establish a communication channel with the Kubernetes API, ArgoCD, GitHub servers and SMTP mail server.
      backstage:
        backendSecret: BACKEND_SECRET # Key in the secret with name defined in the 'name' field that contains the value of the Backstage backend secret. Defaults to 'BACKEND_SECRET'. It's required.
      github: # GitHub specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with GitHub.
        token: GITHUB_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the authentication token as expected by GitHub. Required for importing resource to the catalog, launching software templates and more. Defaults to 'GITHUB_TOKEN', empty for not available.
        clientId: GITHUB_CLIENT_ID # Key in the secret with name defined in the 'name' field that contains the value of the client ID that you generated on GitHub, for GitHub authentication (requires GitHub App). Defaults to 'GITHUB_CLIENT_ID', empty for not available.
        clientSecret: GITHUB_CLIENT_SECRET # Key in the secret with name defined in the 'name' field that contains the value of the client secret tied to the generated client ID. Defaults to 'GITHUB_CLIENT_SECRET', empty for not available.
      gitlab: # Gitlab specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with Gitlab.
        host: GITLAB_HOST # Key in the secret with name defined in the 'name' field that contains the value of the Gitlab host name. Defaults to 'GITLAB_HOST', empty for not available.
        token: GITLAB_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the authentication token as expected by Gitlab. Required for importing resource to the catalog, launching software templates and more. Defaults to 'GITLAB_TOKEN', empty for not available.
      k8s: # Kubernetes specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with the Kubernetes API Server.
        clusterToken: K8S_CLUSTER_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the Kubernetes API bearer token used for authentication. Defaults to 'K8S_CLUSTER_TOKEN', empty for not available.
        clusterUrl: K8S_CLUSTER_URL # Key in the secret with name defined in the 'name' field that contains the value of the API URL of the kubernetes cluster. Defaults to 'K8S_CLUSTER_URL', empty for not available.
      argocd: # ArgoCD specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with ArgoCD. Note that ArgoCD must be deployed beforehand and the argocd.enabled field must be set to true as well.
        url: ARGOCD_URL # Key in the secret with name defined in the 'name' field that contains the value of the URL of the ArgoCD API server. Defaults to 'ARGOCD_URL', empty for not available.
        username: ARGOCD_USERNAME # Key in the secret with name defined in the 'name' field that contains the value of the username to login to ArgoCD. Defaults to 'ARGOCD_USERNAME', empty for not available.
        password: ARGOCD_PASSWORD # Key in the secret with name  defined in the 'name' field that contains the value of the password to authenticate to ArgoCD. Defaults to 'ARGOCD_PASSWORD', empty for not available.
      notificationsEmail:
        hostname: NOTIFICATIONS_EMAIL_HOSTNAME # Key in the secret with name defined in the 'name' field that contains the value of the hostname of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_HOSTNAME', empty for not available.
        username: NOTIFICATIONS_EMAIL_USERNAME # Key in the secret with name defined in the 'name' field that contains the value of the username of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_USERNAME', empty for not available.
        password: NOTIFICATIONS_EMAIL_PASSWORD # Key in the secret with name defined in the 'name' field that contains the value of the password of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_PASSWORD', empty for not available.
    subscription:
      namespace: rhdh-operator # namespace where the operator should be deployed
      channel: fast-1.4 # channel of an operator package to subscribe to
      installPlanApproval: Automatic # whether the update should be installed automatically
      name: rhdh # name of the operator package
      source: redhat-operators # name of the catalog source
      startingCSV: "" # The initial version of the operator
      targetNamespace: rhdh-operator # the target namespace for the backstage CR in which RHDH instance is created
  rhdhPlugins: # RHDH plugins required for the Orchestrator
    npmRegistry: "https://npm.registry.redhat.com" # The NPM registry is already defined in the container, but sometimes it needs to be modified to use different versions of the plugins, for example staging (https://npm.stage.registry.redhat.com) or development registries
    scope: "https://github.com/rhdhorchestrator/orchestrator-plugins-internal-release/releases/download/1.4.0-rc.7"
    orchestrator:
      package: "backstage-plugin-orchestrator-1.4.0-rc.7.tgz"
      integrity: sha512-Vclb+TIL8cEtf9G2nx0UJ+kMJnCGZuYG/Xcw0Otdo/fZGuynnoCaAZ6rHnt4PR6LerekHYWNUbzM3X+AVj5cwg==
    orchestratorBackend:
      package: "backstage-plugin-orchestrator-backend-dynamic-1.4.0-rc.7.tgz"
      integrity: sha512-bxD0Au2V9BeUMcZBfNYrPSQ161vmZyKwm6Yik5keZZ09tenkc8fNjipwJsWVFQCDcAOOxdBAE0ibgHtddl3NKw==
    notificationsEmail:
      enabled: false # whether to install the notifications email plugin. requires setting of hostname and credentials in backstage secret to enable. See value backstage-backend-auth-secret. See plugin configuration at https://github.com/backstage/backstage/blob/master/plugins/notifications-backend-module-email/config.d.ts
      port: 587 # SMTP server port
      sender: "" # the email sender address
      replyTo: "" # reply-to address
  postgres:
    serviceName: "sonataflow-psql-postgresql" # The name of the Postgres DB service to be used by platform services. Cannot be empty.
    serviceNamespace: "sonataflow-infra" # The namespace of the Postgres DB service to be used by platform services.
    authSecret:
      name: "sonataflow-psql-postgresql" # name of existing secret to use for PostgreSQL credentials.
      userKey: postgres-username # name of key in existing secret to use for PostgreSQL credentials.
      passwordKey: postgres-password # name of key in existing secret to use for PostgreSQL credentials.
    database: sonataflow # existing database instance used by data index and job service
  orchestrator:
    namespace: "sonataflow-infra" # Namespace where sonataflow's workflows run. The value is captured when running the setup.sh script and stored as a label in the selected namespace. User can override the value by populating this field. Defaults to `sonataflow-infra`.
    sonataflowPlatform:
      monitoring:
        enabled: true # whether to enable monitoring
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "1Gi"
          cpu: "500m"
      eventing:
        broker:
          name: "my-knative" # Name of existing Broker instance. Optional
          namespace: "knative" # Namespace of existing Broker instance. Optional      
  tekton:
    enabled: false # whether to create the Tekton pipeline resources
  argocd:
    enabled: false # whether to install the ArgoCD plugin and create the orchestrator AppProject
    namespace: "" # Defines the namespace where the orchestrator's instance of ArgoCD is deployed. The value is captured when running setup.sh script and stored as a label in the selected namespace. User can override the value by populating this field. Defaults to `orchestrator-gitops` in the setup.sh script.
  networkPolicy:
    rhdhNamespace: "rhdh-operator" # Namespace of existing RHDH instance

4 - Requirements

Operators

The Orchestrator runtime/deployment is made of two main parts: the OpenShift Serverless Logic operator and the RHDH operator.

OpenShift Serverless Logic operator requirements

OpenShift Serverless Logic operator resource requirements are described in the OpenShift Serverless Logic Installation Requirements. This mainly applies to local environment settings.
The operator deploys a Data Index service and a Jobs service. These are the recommended minimum resource requirements for their pods:
Data Index pod:

resources:
  limits:
    cpu: 500m
    memory: 1Gi
  requests:
    cpu: 250m
    memory: 64Mi

Jobs pod:

resources:
  limits:
    cpu: 200m
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 1Gi

The resources for these pods are controlled by a CR of type SonataFlowPlatform. There is one such CR in the sonataflow-infra namespace.
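
For example, to inspect (and, if needed, adjust) the resources the operator applied, you can view that CR directly; the resource name below matches the one used by the wait commands later in this guide:

oc -n sonataflow-infra get sonataflowplatform sonataflow-platform -o yaml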

RHDH operator requirements

The requirements for the RHDH operator and its components are described here

Workflows

Each workflow has its own logic and therefore its own resource requirements.
Here are some metrics for the workflows we provide. For each workflow, the following values are listed: CPU at idle, CPU at peak (during execution), and memory.

  • greeting workflow
    • cpu idle: 4m
    • cpu peak: 12m
    • memory: 300 MB
  • mtv-plan workflow
    • cpu idle: 4m
    • cpu peak: 130m
    • memory: 300 MB

How to evaluate resource requirements for your workflow

Locate the workflow pod in the OCP Console and open its Metrics tab, which shows CPU and memory usage. Execute the workflow a few times; it does not matter whether it succeeds, as long as all of its states are executed. You can then see the peak usage (during execution) and the idle usage (after the executions).
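
If you prefer the CLI over the console, a rough way to sample the same numbers is shown below; it assumes cluster metrics are available and <workflow-namespace> is the namespace where the workflow pod runs:

# Sample CPU/memory of the workflow pods; repeat while executing the workflow to catch the peak
oc adm top pod -n <workflow-namespace>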

5 - Orchestrator on OpenShift

Installing the Orchestrator is facilitated through an operator available in the Red Hat Catalog as an OLM package. This operator is responsible for installing all of the Orchestrator components. The Orchestrator is based on the SonataFlow and Serverless Workflow technologies to design and manage the workflows. The Orchestrator plugins are deployed on a Red Hat Developer Hub instance, which serves as the frontend.

When installing a Red Hat Developer Hub (RHDH) instance using the Orchestrator operator, the RHDH configuration is managed through the Orchestrator resource.

To utilize Backstage capabilities, the Orchestrator imports software templates designed to ease the development of new workflows and offers an opinionated method for managing their lifecycle by including CI/CD resources as part of the template.

Orchestrator Documentation

For comprehensive documentation on the Orchestrator, please visit https://www.rhdhorchestrator.io.

Installing the Orchestrator Go Operator

Deploy the Orchestrator solution suite in an OCP cluster using the Orchestrator operator.
The operator installs the following components onto the target OpenShift cluster:

  • RHDH (Red Hat Developer Hub) Backstage
  • OpenShift Serverless Logic Operator (with Data-Index and Job Service)
  • OpenShift Serverless Operator
    • Knative Eventing
    • Knative Serving
  • (Optional) An ArgoCD project named orchestrator. Requires a pre-installed ArgoCD/OpenShift GitOps instance in the cluster. Disabled by default.
  • (Optional) Tekton tasks and a build pipeline. Requires a pre-installed Tekton/OpenShift Pipelines instance in the cluster. Disabled by default.

Important Note for ARM64 Architecture Users

Note that as of November 6, 2023, OpenShift Serverless Operator is based on RHEL 8 images which are not supported on the ARM64 architecture. Consequently, deployment of this operator on an OpenShift Local cluster on MacBook laptops with M1/M2 chips is not supported.

Prerequisites

  • Logged in to a Red Hat OpenShift Container Platform (version 4.14+) cluster as a cluster administrator.
  • OpenShift CLI (oc) is installed.
  • Operator Lifecycle Manager (OLM) has been installed in your cluster.
  • Your cluster has a default storage class provisioned.
  • A GitHub API Token - to import items into the catalog, ensure you have a GITHUB_TOKEN with the necessary permissions as detailed here.
    • For classic token, include the following permissions:
      • repo (all)
      • admin:org (read:org)
      • user (read:user, user:email)
      • workflow (all) - required for using the software templates for creating workflows in GitHub
    • For Fine grained token:
      • Repository permissions: Read access to metadata, Read and Write access to actions, actions variables, administration, code, codespaces, commit statuses, environments, issues, pull requests, repository hooks, secrets, security events, and workflows.
      • Organization permissions: Read access to members, Read and Write access to organization administration, organization hooks, organization projects, and organization secrets.

⚠️Warning: Skipping these steps will prevent the Orchestrator from functioning properly.

Deployment with GitOps

If you plan to deploy in a GitOps environment, make sure you have installed the ArgoCD/Red Hat OpenShift GitOps and Tekton/Red Hat OpenShift Pipelines operators following these instructions. The Orchestrator installs RHDH and imports software templates designed for bootstrapping workflow development. These templates are crafted to ease the development lifecycle, including a Tekton pipeline to build workflow images and generate the workflow Kubernetes custom resources. Furthermore, ArgoCD is used to monitor changes made to the workflow repository and to automatically trigger the Tekton pipelines as needed.

  • ArgoCD/OpenShift GitOps operator

    • Ensure at least one instance of ArgoCD exists in the designated namespace (referenced by ARGOCD_NAMESPACE environment variable). Example here
    • Validated API is argoproj.io/v1alpha1/AppProject
  • Tekton/OpenShift Pipelines operator

    • Validated APIs are tekton.dev/v1beta1/Task and tekton.dev/v1/Pipeline
    • Requires ArgoCD installed since the manifests are deployed in the same namespace as the ArgoCD instance.

    Remember to enable argocd in your CR instance.
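
    For example, a partial Orchestrator CR enabling the GitOps integration might look like the sketch below; the field names follow the v1alpha2 sample shown earlier in this document, and the namespace value is only illustrative:

      apiVersion: rhdh.redhat.com/v1alpha3
      kind: Orchestrator
      metadata:
        name: orchestrator-sample
      spec:
        ...
        argocd:
          enabled: true # install the ArgoCD plugin and create the orchestrator AppProject
          namespace: orchestrator-gitops # namespace where the Orchestrator's ArgoCD instance is deployed
        tekton:
          enabled: true # create the Tekton pipeline resources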

Detailed Installation Guide

From OperatorHub

  1. Deploying PostgreSQL reference implementation
    • If you do not have a PostgreSQL instance in your cluster
      you can deploy the PostgreSQL reference implementation by following the steps here.
    • If you already have PostgreSQL running in your cluster
      ensure that the default settings in the PostgreSQL values file match the postgres field provided in the Orchestrator CR file.
  2. Install Orchestrator operator
    1. Go to OperatorHub in your OpenShift Console.
    2. Search for and install the Orchestrator Operator.
  3. Run the Setup Script
    1. Follow the steps in the Running the Setup Script section to download and execute the setup.sh script, which initializes the RHDH environment.
  4. Create an Orchestrator instance
    1. Once the Orchestrator Operator is installed, navigate to Installed Operators.
    2. Select Orchestrator Operator.
    3. Click on Create Instance to deploy an Orchestrator instance.
  5. Verify resources and wait until they are running
    1. From the console, run the following command to get the necessary wait commands:
      oc describe orchestrator orchestrator-sample -n openshift-operators | grep -A 10 "Run the following commands to wait until the services are ready:"

      The command returns output similar to the example below, listing several oc wait commands; the exact list depends on your specific cluster.

        oc wait -n openshift-serverless deploy/knative-openshift --for=condition=Available --timeout=5m
        oc wait -n knative-eventing knativeeventing/knative-eventing --for=condition=Ready --timeout=5m
        oc wait -n knative-serving knativeserving/knative-serving --for=condition=Ready --timeout=5m
        oc wait -n openshift-serverless-logic deploy/logic-operator-rhel8-controller-manager --for=condition=Available --timeout=5m
        oc wait -n sonataflow-infra sonataflowplatform/sonataflow-platform --for=condition=Succeed --timeout=5m
        oc wait -n sonataflow-infra deploy/sonataflow-platform-data-index-service --for=condition=Available --timeout=5m
        oc wait -n sonataflow-infra deploy/sonataflow-platform-jobs-service --for=condition=Available --timeout=5m
        oc get networkpolicy -n sonataflow-infra
      
    2. Copy and execute each command from the output in your terminal. These commands ensure that all necessary services and resources in your OpenShift environment are available and running correctly.

    3. If any service does not become available, verify the logs for that service or consult troubleshooting steps.

Manual Installation

  1. Deploy the PostgreSQL reference implementation for persistence support in SonataFlow following these instructions

  2. Create a namespace for the Orchestrator solution:

    oc new-project orchestrator
    
  3. Run the Setup Script

    1. Follow the steps in the Running the Setup Script section to download and execute the setup.sh script, which initializes the RHDH environment.
  4. Use the following manifest to install the operator in an OCP cluster:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: orchestrator-operator
      namespace: openshift-operators
    spec:
      channel: stable
      installPlanApproval: Automatic
      name: orchestrator-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    
  5. Run the following commands to determine when the installation is completed:

    wget https://raw.githubusercontent.com/rhdhorchestrator/orchestrator-go-operator/release-1.5/hack/wait_for_operator_installed.sh -O /tmp/wait_for_operator_installed.sh && chmod u+x /tmp/wait_for_operator_installed.sh && /tmp/wait_for_operator_installed.sh
    

    During the installation process, the Orchestrator Operator creates and monitors the lifecycle of the sub-component operators: the RHDH operator, the OpenShift Serverless operator and the OpenShift Serverless Logic operator. Furthermore, it creates the CRs and resources needed for the Orchestrator to function properly. Please refer to the troubleshooting section for known issues with the operator resources.

  6. Apply the Orchestrator custom resource (CR) on the cluster to create an instance of RHDH and resources of OpenShift Serverless Operator and OpenShift Serverless Logic Operator. Make any changes to the CR before applying it, or test the default Orchestrator CR:

    oc apply -n orchestrator -f https://raw.githubusercontent.com/rhdhorchestrator/orchestrator-go-operator/refs/heads/release-1.5/config/samples/_v1alpha3_orchestrator.yaml
    

    Note: After the first reconciliation of the Orchestrator CR, changes to some of the fields in the CR may not be propagated/reconciled to the intended resource. For example, changing the platform.resources.requests field in the Orchestrator CR will not have any effect on the running instance of the SonataFlowPlatform (SFP) resource. For simplicity's sake, that is the current design, and it may be revisited in the near future. Please refer to the CRD Parameter List to know which fields can be reconciled.

Running The Setup Script

The setup.sh script simplifies the initialization of the RHDH environment by creating the required authentication secret and labeling GitOps namespaces based on the cluster configuration.

  1. Create a namespace for the RHDH instance. This namespace is predefined as the default in both the setup.sh script and the Orchestrator CR but can be overridden if needed.

    oc new-project rhdh
    
  2. Download the setup script from the github repository and run it to create the RHDH secret and label the GitOps namespaces:

    wget https://raw.githubusercontent.com/rhdhorchestrator/orchestrator-go-operator/release-1.5/hack/setup.sh -O /tmp/setup.sh && chmod u+x /tmp/setup.sh
    
  3. Run the script:

    /tmp/setup.sh --use-default
    

NOTE: If you don’t want to use the default values, omit the --use-default and the script will prompt you for input.

The contents of the secret will vary depending on the configuration of the cluster. The following list details all the keys that can appear in the secret:

  • BACKEND_SECRET: Value is randomly generated at script execution. This is the only mandatory key required to be in the secret for the RHDH Operator to start.
  • K8S_CLUSTER_URL: The URL of the Kubernetes cluster is obtained dynamically using oc whoami --show-server.
  • K8S_CLUSTER_TOKEN: The value is obtained dynamically based on the provided namespace and service account.
  • GITHUB_TOKEN: This value is prompted from the user during script execution and is not predefined.
  • GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET: The values of both fields are used to authenticate against GitHub. For more information open this link.
  • GITLAB_HOST and GITLAB_TOKEN: The values of both fields are used to authenticate against GitLab.
  • ARGOCD_URL: This value is dynamically obtained based on the first ArgoCD instance available.
  • ARGOCD_USERNAME: Default value is set to admin.
  • ARGOCD_PASSWORD: This value is dynamically obtained based on the ArgoCD instance available.

Keys will not be added to the secret if they have no associated values. For instance, when deploying in a cluster without the GitOps operators, the ARGOCD_URL, ARGOCD_USERNAME and ARGOCD_PASSWORD keys will be omitted from the secret.

Sample of a secret created in a GitOps environment:

$> oc get secret -n rhdh -o yaml backstage-backend-auth-secret
apiVersion: v1
data:
  ARGOCD_PASSWORD: ...
  ARGOCD_URL: ...
  ARGOCD_USERNAME: ...
  BACKEND_SECRET: ...
  GITHUB_TOKEN: ...
  K8S_CLUSTER_TOKEN: ...
  K8S_CLUSTER_URL: ...
kind: Secret
metadata:
  creationTimestamp: "2024-05-07T22:22:59Z"
  name: backstage-backend-auth-secret
  namespace: rhdh-operator
  resourceVersion: "4402773"
  uid: 2042e741-346e-4f0e-9d15-1b5492bb9916
type: Opaque

Enabling Monitoring for Workflows

To enable monitoring for workflows, enable it in the Orchestrator CR as follows:

apiVersion: rhdh.redhat.com/v1alpha3
kind: Orchestrator
metadata:
  name: ...
spec:
  ...
  platform:
    ...
    monitoring:
      enabled: true
      ...

After the CR is deployed, follow the instructions to deploy Prometheus, Grafana and the sample Grafana dashboard.

Using Knative eventing communication

To enable eventing communication between the different components (Data Index, Job Service and Workflows), a broker should be used. Kafka is a good candidate as it fulfills the reliability need. The list of available brokers for Knative can be found here: https://knative.dev/docs/eventing/brokers/broker-types/

Alternatively, an in-memory broker could be used; however, it is not recommended for production purposes.

Follow these instructions to set up the Knative broker communication.
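
As an illustration only, a minimal in-memory Broker matching the name and namespace referenced in the sample CR earlier in this document could be created as follows (not suitable for production; for a Kafka-backed broker follow the linked instructions):

oc create -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: my-knative
  namespace: knative # the namespace must already exist
EOF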

Additional information

Proxy configuration

Your Backstage instance might be configured to work with a proxy. In that case, you need to tell Backstage to bypass the proxy for requests to the workflow namespaces and the sonataflow namespace (sonataflow-infra). You need to add these namespaces to the environment variable NO_PROXY. E.g. NO_PROXY=current-value-of-no-proxy, .sonataflow-infra, .my-workflow-namespace. Note the . before the namespace name.
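
As a hypothetical example only, the variable could be appended on the Backstage Deployment as shown below; note that an operator-managed Deployment may be reconciled back to its original state, so prefer the configuration mechanism of your RHDH installation where available:

oc -n <rhdh-namespace> set env deployment/<backstage-deployment> \
  NO_PROXY="<current-value-of-no-proxy>,.sonataflow-infra,.my-workflow-namespace"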

Additional Workflow Namespaces

When deploying a workflow in a namespace different from where Sonataflow services are running (e.g., sonataflow-infra), several essential steps must be followed:

  1. Allow Traffic from the Workflow Namespace: To allow Sonataflow services to accept traffic from workflows, either create an additional network policy or update the existing policy with the new workflow namespace.

    Create Additional Network Policy
    oc create -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-external-workflows-to-sonataflow-infra
      # Namespace where network policies are deployed
      namespace: sonataflow-infra
    spec:
      podSelector: {}
      ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                # Allow Sonataflow services to communicate with new/additional workflow namespace.
                kubernetes.io/metadata.name: <new-workflow-namespace>
    EOF
    
    Alternatively - Update Existing Network Policy
    oc -n sonataflow-infra patch networkpolicy allow-rhdh-to-sonataflow-and-workflows --type='json' \
    -p='[
    {
      "op": "add",
      "path": "/spec/ingress/0/from/-",
      "value": {
        "namespaceSelector": {
          "matchLabels": {
            "kubernetes.io/metadata.name": <new-workflow-namespace>
          }
        }          
      }
    }]'
    
  2. Identify the RHDH Namespace: Retrieve the namespace where RHDH is running by executing:

    oc get backstage -A
    

    Store the namespace value in $RHDH_NAMESPACE in the Network Policy manifest below.

  3. Identify the Sonataflow Services Namespace: Check the namespace where Sonataflow services are deployed:

    oc get sonataflowclusterplatform -A
    

    If there is no cluster platform, check for a namespace-specific platform:

    oc get sonataflowplatform -A
    

    Store the namespace value in $WORKFLOW_NAMESPACE.
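
    As a convenience, assuming a single Backstage instance and a single SonataFlowPlatform exist in the cluster, both values can be captured into environment variables:

    RHDH_NAMESPACE=$(oc get backstage -A -o jsonpath='{.items[0].metadata.namespace}')
    WORKFLOW_NAMESPACE=$(oc get sonataflowplatform -A -o jsonpath='{.items[0].metadata.namespace}')
    echo "RHDH: $RHDH_NAMESPACE, Workflows: $WORKFLOW_NAMESPACE"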

  4. Set Up a Network Policy: Configure a network policy to allow traffic only between RHDH, Knative, Sonataflow services, and workflows.

    oc create -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-rhdh-to-sonataflow-and-workflows
      namespace: $ADDITIONAL_NAMESPACE
    spec:
      podSelector: {}
      ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                # Allows traffic from pods in the RHDH namespace.
                kubernetes.io/metadata.name: $RHDH_NAMESPACE
          - namespaceSelector:
              matchLabels:
                # Allows traffic from pods in the Workflow namespace.
                kubernetes.io/metadata.name: $WORKFLOW_NAMESPACE
          - namespaceSelector:
              matchLabels:
                # Allows traffic from pods in the K-Native Eventing namespace.
                kubernetes.io/metadata.name: knative-eventing
          - namespaceSelector:
              matchLabels:
                # Allows traffic from pods in the K-Native Serving namespace.
                kubernetes.io/metadata.name: knative-serving
    EOF
    

    To allow unrestricted communication between all pods within the workflow’s namespace, create the allow-intra-namespace network policy.

    oc create -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
     name: allow-intra-namespace
     namespace:  $ADDITIONAL_NAMESPACE
    spec:
     # Apply this policy to all pods in the namespace
     podSelector: {}
     # Specify policy type as 'Ingress' to control incoming traffic rules
     policyTypes:
       - Ingress
     ingress:
       - from:
           # Allow ingress from any pod within the same namespace
           - podSelector: {}
    EOF
    
  5. Ensure Persistence for the Workflow: If persistence is required, follow these steps:

  • Create a PostgreSQL Secret: The workflow needs its own schema in PostgreSQL. Create a secret containing the PostgreSQL credentials in the workflow’s namespace:
    oc get secret sonataflow-psql-postgresql -n sonataflow-infra -o yaml > secret.yaml
    sed -i '/namespace: sonataflow-infra/d' secret.yaml
    oc apply -f secret.yaml -n $ADDITIONAL_NAMESPACE
    
  • Configure the Namespace Attribute: Add the namespace attribute under the serviceRef property where the PostgreSQL server is deployed.
    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlow
      ...
    spec:
      ...
      persistence:
        postgresql:
          secretRef:
            name: sonataflow-psql-postgresql
            passwordKey: postgres-password
            userKey: postgres-username
          serviceRef:
            databaseName: sonataflow
            databaseSchema: greeting
            name: sonataflow-psql-postgresql
            namespace: $POSTGRESQL_NAMESPACE
            port: 5432
    
    Replace POSTGRESQL_NAMESPACE with the namespace where the PostgreSQL server is deployed.

By following these steps, the workflow will have the necessary credentials to access PostgreSQL and will correctly reference the service in a different namespace.

GitOps environment

See the dedicated document

Deploying PostgreSQL reference implementation

See here

ArgoCD and workflow namespace

If you manually created the workflow namespaces (e.g., $WORKFLOW_NAMESPACE), run this command to add the required label that allows ArgoCD to deploy instances there:

oc label ns $WORKFLOW_NAMESPACE argocd.argoproj.io/managed-by=$ARGOCD_NAMESPACE

Workflow installation

Follow Workflows Installation

Cleanup

/!\ Before removing the Orchestrator, make sure you have first removed any installed workflows. Otherwise the deletion may hang in a terminating state.

To remove the operator, first remove the operand resources.

Run:

oc delete namespace orchestrator

to delete the Orchestrator CR. This removes the OSL, Serverless and RHDH operators, and the SonataFlow CRs.

To clean up the rest of the resources, run:

oc delete namespace sonataflow-infra rhdh

If you want to remove Knative-related resources, you may also run:

oc get crd -o name | grep -e knative | xargs oc delete

To remove the operator from the cluster, delete the subscription:

oc delete subscriptions.operators.coreos.com orchestrator-operator -n openshift-operators

Note that the CRDs created during the installation process will remain in the cluster.

Compatibility Matrix between Orchestrator Operator and Dependencies

Orchestrator Operator | RHDH  | OSL    | Serverless
Orchestrator 1.5.0    | 1.5.1 | 1.35.0 | 1.35.0

Compatibility Matrix for Orchestrator Plugins

Orchestrator Plugins Version | Orchestrator Operator Version
Orchestrator Backend (backstage-plugin-orchestrator-backend-dynamic@1.5.1) | 1.5.0
Orchestrator (backstage-plugin-orchestrator@1.5.1) | 1.5.0
Orchestrator Scaffolder Backend (backstage-plugin-scaffolder-backend-module-orchestrator-dynamic@1.5.1) | 1.5.0

Troubleshooting/Known Issue

Zip bomb detected with Orchestrator Plugin

Currently, there is a known issue with the RHDH pod starting up due to the size of the orchestrator plugin. The error Zip bomb detected in backstage-plugin-orchestrator-1.5.0 will be seen, and it can be resolved by increasing the MAX_ENTRY_SIZE of the initContainer that downloads the plugins. This will be resolved in the next operator release. More information can be found here.

To fix this issue, run the following patch command in the RHDH instance namespace:

oc -n <rhdh-namespace> patch backstage <rhdh-name> --type='json' -p='[
    {
      "op": "add",
      "path": "/spec/deployment/patch/spec/template/spec/initContainers",
      "value": [
        {
          "name": "install-dynamic-plugins",
          "env": [
            {
              "name": "MAX_ENTRY_SIZE",
              "value": "30000000"
            }
          ]
        }
      ]
    }
  ]'

6 - Orchestrator on Kubernetes

The following guide is for installing on a Kubernetes cluster. It is well tested and working in CI with a kind installation.

Here’s a kind configuration that is easy to work with (the apiserver port is static, so the kubeconfig is always the same)

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "127.0.0.1"
  apiServerPort: 16443
nodes:
  - role: control-plane
    kubeadmConfigPatches:
    - |
      kind: InitConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"      
    - |
      kind: KubeletConfiguration
      localStorageCapacityIsolation: true      
    extraPortMappings:
      - containerPort: 80
        hostPort: 9090
        protocol: TCP
      - containerPort: 443
        hostPort: 9443
        protocol: TCP
  - role: worker

Save this file as kind-config.yaml, and now run:

kind create cluster --config kind-config.yaml
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
kubectl patch daemonsets -n projectcontour envoy -p '{"spec":{"template":{"spec":{"nodeSelector":{"ingress-ready":"true"},"tolerations":[{"key":"node-role.kubernetes.io/control-plane","operator":"Equal","effect":"NoSchedule"},{"key":"node-role.kubernetes.io/master","operator":"Equal","effect":"NoSchedule"}]}}}}'

The cluster should be up and running with Contour ingress-controller installed, so localhost:9090 will direct the traffic to Backstage, because of the ingress created by the helm chart on port 80.
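
Before installing the chart, you can optionally confirm that the ingress controller is ready; this quick check assumes the default names from the Contour quickstart manifest applied above:

kubectl wait --for=condition=Available -n projectcontour deployment/contour --timeout=5m
kubectl get pods -n projectcontour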

Orchestrator-k8s helm chart

This chart will install the Orchestrator and all its dependencies on Kubernetes.

THIS CHART IS NOT SUITED FOR PRODUCTION PURPOSES; you should only use it for development or testing purposes.

The chart deploys:

Usage

helm repo add orchestrator https://rhdhorchestrator.github.io/orchestrator-helm-chart

helm install orchestrator orchestrator/orchestrator-k8s

Configuration

All of the backstage app-config is derived from the values.yaml.

Secrets as env vars:

To use secrets as env vars, like the one used for the notifications, see charts/Orchestrator-k8s/templates/secret.yaml. Every key in that secret will be available in the app-config for resolution.

Development

git clone https://github.com/rhdhorchestrator/orchestrator-helm-chart
cd orchestrator-helm-chart/charts/orchestrator-k8s


helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add backstage https://backstage.github.io/charts
helm repo add postgresql https://charts.bitnami.com/bitnami
helm repo add redhat-developer https://redhat-developer.github.io/rhdh-chart
helm repo add workflows https://rhdhorchestrator.io/serverless-workflows-config

helm dependencies build
helm install orchestrator .

The output should look like this:

$ helm install orchestrator .
Release "orchestrator" has been upgraded. Happy Helming!
NAME: orchestrator
LAST DEPLOYED: Tue Sep 19 18:19:07 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
This chart will install RHDH-backstage(RHDH upstream) + Serverless Workflows.

To get RHDH's route location:
    $ oc get route orchestrator-white-backstage -o jsonpath='https://{ .spec.host }{"\n"}'

To get the serverless workflow operator status:
    $ oc get deploy -n sonataflow-operator-system 

To get the serverless workflows status:
    $ oc get sf

The chart notes will provide more information on:

  • the route location of Backstage
  • the SonataFlow operator status
  • the status of the deployed SonataFlow workflows

7 - Orchestrator on existing RHDH instance

When RHDH is already installed and in use, reinstalling it is unnecessary. Instead, integrating the Orchestrator into such an environment involves a few key steps:

  1. Utilize the Orchestrator operator to install the requisite components, such as the OpenShift Serverless Logic Operator and the OpenShift Serverless Operator, while ensuring the RHDH installation is disabled.
  2. Manually update the existing RHDH ConfigMap resources with the necessary configuration for the Orchestrator plugin.
  3. Import the Orchestrator software templates into the Backstage catalog.

Prerequisites

  • RHDH is already deployed with a running Backstage instance.
    • Software templates for workflows require a GitHub provider to be configured.
  • Ensure that a PostgreSQL database is available and that you have credentials to manage the tablespace (optional).
    • For your convenience, a reference implementation is provided.
    • If you already have a PostgreSQL database installed, please refer to this note regarding default settings.

In this approach, since the RHDH instance is not managed by the Orchestrator operator, its configuration is handled through the Backstage CR along with the associated resources, such as ConfigMaps and Secrets.

The installation steps are detailed here.

8 - Workflows

In addition to deploying the Orchestrator, we provide several preconfigured workflows that serve either as ready-to-use solutions or as starting points for customizing workflows according to the user’s requirements. These workflows can be installed through a Helm chart.

8.1 - Deploy From Helm Repository

Orchestrator Workflows Helm Repository

This repository serves as a Helm chart repository for deploying serverless workflows with the Sonataflow Operator. It encompasses a collection of pre-defined workflows, each tailored to specific use cases. These workflows have undergone thorough testing and validation through Continuous Integration (CI) processes and are organized according to their chart versions.

The repository includes a variety of serverless workflows, such as:

  • Greeting: A basic example workflow to demonstrate functionality.
  • Migration Toolkit for Application Analysis (MTA): This workflow evaluates applications to determine potential risks and the associated costs of containerizing the applications.
  • Move2Kube: Designed to facilitate the transition of an application to Kubernetes (K8s) environments.

Usage

Prerequisites

To utilize the workflows contained in this repository, the Orchestrator Deployment must be installed on your OpenShift Container Platform (OCP) cluster. For detailed instructions on installing the Orchestrator, please visit the Orchestrator Helm Based Operator Repository

Installation

helm repo add orchestrator-workflows https://rhdhorchestrator.io/serverless-workflows-config

View available workflows on the Helm repository:

helm search repo orchestrator-workflows

The expected result should look like this (the versions may differ):

NAME                            	CHART VERSION	APP VERSION	DESCRIPTION                                      
orchestrator-workflows/greeting 	0.4.2        	1.16.0     	A Helm chart for the greeting serverless workflow
orchestrator-workflows/move2kube	0.2.16       	1.16.0     	A Helm chart to deploy the move2kube workflow.   
orchestrator-workflows/mta      	0.2.16       	1.16.0     	A Helm chart for MTA serverless workflow         
orchestrator-workflows/workflows	0.2.24       	1.16.0     	A Helm chart for serverless workflows
...

You can install the workflows following their respective README files.
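
For example, installing the greeting workflow chart might look like the following; the chart name comes from the helm search output above, while the target namespace is a placeholder, so consult the workflow's README for the expected namespace and values:

helm install greeting orchestrator-workflows/greeting -n <target-namespace>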

Installing workflows in additional namespaces

When deploying a workflow in a namespace different from where Sonataflow services are running (e.g. sonataflow-infra), there are essential steps to follow. For detailed instructions, see the Additional Workflow Namespaces section.

Version Compatibility

The workflows rely on components included in the Orchestrator Operator. Therefore, it is crucial to match the workflow version with the corresponding Orchestrator version that supports it. The list below outlines the compatibility between the workflows and Orchestrator versions:

Workflows           | Chart Version | Orchestrator Operator Version
move2kube           | 1.5.x         | 1.5.x
create-ocp-project  | 1.5.x         | 1.5.x
request-vm-cnv      | 1.5.x         | 1.5.x
modify-vm-resources | 1.5.x         | 1.5.x
mta-v7              | 1.5.x         | 1.5.x
mtv-migration       | 1.5.x         | 1.5.x
mtv-plan            | 1.5.x         | 1.5.x
--------------------|---------------|------------------------------
move2kube           | 1.4.x         | 1.4.x
create-ocp-project  | 1.4.x         | 1.4.x
request-vm-cnv      | 1.4.x         | 1.4.x
modify-vm-resources | 1.4.x         | 1.4.x
mta-v7              | 1.4.x         | 1.4.x
mtv-migration       | 1.4.x         | 1.4.x
mtv-plan            | 1.4.x         | 1.4.x
--------------------|---------------|------------------------------
move2kube           | 1.3.x         | 1.3.x
create-ocp-project  | 1.3.x         | 1.3.x
request-vm-cnv      | 1.3.x         | 1.3.x
modify-vm-resources | 1.3.x         | 1.3.x
mta-v7              | 1.3.x         | 1.3.x
mtv-migration       | 1.3.x         | 1.3.x
mtv-plan            | 1.3.x         | 1.3.x
--------------------|---------------|------------------------------
mta-analysis        | 0.3.x         | 1.2.x
move2kube           | 0.3.x         | 1.2.x
create-ocp-project  | 0.1.x         | 1.2.x
request-vm-cnv      | 0.1.x         | 1.2.x
modify-vm-resources | 0.1.x         | 1.2.x
mta-v6              | 0.2.x         | 1.2.x
mta-v7              | 0.2.37        | 1.2.x
mtv-migration       | 0.0.x         | 1.2.x
mtv-plan            | 0.0.13        | 1.2.x

Helm index

https://www.rhdhorchestrator.io/serverless-workflows-config/index.yaml