Deployment Guide¶
Dependencies¶
- An existing Kubernetes cluster.
- An IVOA Service Registry deployment.
Quick Start¶
helm repo add science-platform https://images.opencadc.org/chartrepo/platform
helm repo add science-platform-client https://images.opencadc.org/chartrepo/client
helm repo update
helm upgrade --install -n traefik --create-namespace --values my-base-local-values-file.yaml base science-platform/base
helm upgrade --install -n skaha-system --values my-posix-mapper-local-values-file.yaml posix-mapper science-platform/posixmapper
helm upgrade --install -n skaha-system --values my-cavern-local-values-file.yaml cavern science-platform/cavern
helm upgrade --install -n skaha-system --values my-skaha-local-values-file.yaml skaha science-platform/skaha
helm upgrade --install -n skaha-system --values my-scienceportal-local-values-file.yaml science-portal science-platform/scienceportal
helm upgrade --install -n skaha-system --values my-storage-ui-local-values-file.yaml storage-ui science-platform-client/storageui
Base install¶
The Base install creates the ServiceAccount, Role, Namespace, and RBAC objects needed to deploy the Skaha service; as such, it requires cluster-admin privileges.
Important
The base chart itself is optional, but the objects it creates are required for the other services to operate correctly. If you have already created the necessary Namespaces and RBAC objects, or if
they have been created for you by your Cluster Administrator, you can skip this step.
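If you are unsure whether these objects already exist, a quick check of the two namespaces used elsewhere in this guide (names are the chart defaults) is:
# Both namespaces should be listed if the base objects are already in place.
kubectl get namespace skaha-system skaha-workload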
Create a my-base-local-values-file.yaml file to override Values from the main template values.yaml file, chiefly the
Traefik Default Server certificate (optional; supply it only if needed):
my-base-local-values-file.yaml
secrets:
default-certificate:
tls.crt: <base64 encoded server certificate>
tls.key: <base64 encoded server key>
# Settings passed to Traefik. The install flag tells Helm whether to install Traefik. If false, ensure Traefik v2.11.0 or later is already installed.
traefik:
install: true
tlsStore:
default:
defaultCertificate:
# See default-certificate secret(s) above
secretName: default-certificate
helm upgrade --install -n traefik --create-namespace --values my-base-local-values-file.yaml base science-platform/base
NAME: base
LAST DEPLOYED: Thu Nov 11 07:28:45 2025
NAMESPACE: traefik
STATUS: deployed
REVISION: 4
Persistent Volumes and Persistent Volume Claims¶
Important
The base Helm Chart must be installed first as it creates the necessary Namespaces for the Persistent Volume Claims!
There are two (2) Persistent Volume Claims used in the system because there are two (2) Namespaces (skaha-system and skaha-workload). These PVCs, while
potentially configured differently, must point to the same storage. For example, if two hostPath PVCs are created, the hostPath.path must point to the same
folder so that content is shared between the Cavern Service (cavern) and the User Sessions (Notebooks, CARTA, etc.).
It is expected that the deployer, or an Administrator, will create the necessary Persistent Volumes (if needed), and the required Persistent Volume Claims at
this point. There are sample Local Storage Persistent Volume examples in the base/volumes folder.
Two (2) Persistent Volume Claims are required. While both point to the same underlying storage, they are in different Namespaces. This leads to somewhat duplicated effort, but it is necessary to ensure that both the skaha-system and skaha-workload namespaces have access to the required storage resources.
See this short explanation for more information.
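As a minimal sketch, assuming a simple hostPath setup (the PV name, storage class, capacity, and path below are illustrative), the skaha-system pair could look like the following; the skaha-workload pair repeats the same hostPath.path with the claim name skaha-workload-cavern-pvc referenced later in this guide:
# Illustrative hostPath example only; adapt to your storage provider.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: skaha-system-cavern-pv      # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  storageClassName: manual
  hostPath:
    path: /data/cavern              # must be identical for both namespaces' volumes
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: skaha-pvc                   # referenced by the Cavern values (storage.service.spec)
  namespace: skaha-system
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: manual
  resources:
    requests:
      storage: 100Gi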
POSIX Mapper install¶
The POSIX Mapper Service is required to provide UID-to-username and GID-to-group-name mappings so that Terminal access in User Sessions properly shows system users. It will generate a UID when a user is requested, or a GID when a Group is requested, and then keep track of them.
This service is required to be installed before the Skaha service.
Create a my-posix-mapper-local-values-file.yaml file to override Values from the main template values.yaml file.
my-posix-mapper-local-values-file.yaml
# POSIX Mapper web service deployment
deployment:
hostname: example.org
posixMapper:
# Optionally set the DEBUG port.
# extraEnv:
# - name: CATALINA_OPTS
# value: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=0.0.0.0:5555"
# - name: JAVA_OPTS
# value: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=0.0.0.0:5555"
# Uncomment to debug. Requires options above as well as service port exposure below.
# extraPorts:
# - containerPort: 5555
# protocol: TCP
# Resources provided to the Skaha service.
resources:
requests:
memory: "1Gi"
cpu: "500m"
limits:
memory: "1Gi"
cpu: "500m"
# Used to set the minimum UID. Useful to avoid conflicts.
minUID: 10000
# Used to set the minimum GID. Keep this much higher than the minUID so that default Groups can be set for new users.
minGID: 900000
# The URL of the IVOA Registry
registryURL: https://example.org/reg
# Optionally mount a custom CA certificate
# extraVolumeMounts:
# - mountPath: "/config/cacerts"
# name: cacert-volume
# Create the CA certificate volume to be mounted in extraVolumeMounts
# extraVolumes:
# - name: cacert-volume
# secret:
# defaultMode: 420
# secretName: posix-manager-cacert-secret
# Specify extra hostnames that will be added to the Pod's /etc/hosts file. Note that this is in the
# deployment object, not the posixMapper one.
#
# These entries get added as hostAliases entries to the Deployment.
#
# Example:
# extraHosts:
# - ip: 127.3.34.5
# hostname: myhost.example.org
#
# extraHosts: []
secrets:
# Uncomment to enable local or self-signed CA certificates for your domain to be trusted.
# posix-manager-cacert-secret:
# ca.crt: <base64 encoded ca crt>
# These values are preset in the catalina.properties, and the default database exists only alongside this service.
postgresql:
# maxActive: 4
# schema: mapping
# url: jdbc:postgresql://db.host:5432/mapping
# auth:
# username: posixmapperuser
# password: posixmapperpwd
helm -n skaha-system upgrade --install --values my-posix-mapper-local-values-file.yaml posix-mapper science-platform/posixmapper
NAME: posix-mapper
LAST DEPLOYED: Thu Oct 28 07:28:45 2025
NAMESPACE: skaha-system
STATUS: deployed
REVISION: 1
Test it.
# See below for tokens
export SKA_TOKEN=...
curl -SsL --header "Authorization: Bearer ${SKA_TOKEN}" https://example.host.com/posix-mapper/uid
[]%
curl -SsL --header "Authorization: Bearer ${SKA_TOKEN}" "https://example.host.com/posix-mapper/uid?user=mynewuser"
mynewuser:x:10000:10000:::
Cavern (User Storage API) install¶
The Cavern API provides access to the User Storage, which is shared between Skaha and all of the User Sessions. A Bearer token is required to read private data or to perform any write operation.
Create a my-cavern-local-values-file.yaml file to override Values from the main template values.yaml file.
my-cavern-local-values-file.yaml
# Cavern web service deployment
deployment:
hostname: example.org
cavern:
# How cavern identifies itself. Required.
resourceID: "ivo://example.org/cavern"
# Set the Registry URL pointing to the desired registry. Required
registryURL: "https://example.org/reg"
# How to find the POSIX Mapper API. URI (ivo://) or URL (https://). Required.
posixMapperResourceID: "ivo://example.org/posix-mapper"
# User Allocation settings. This is used to create new folders under the main root allocation folders, namely /home and /projects.
allocations:
# Required. The default size, in GB, of an allocation. This is used in the absence of the Quota VOSpace property. Can be a floating point number.
# Provided value is 10 (10 GiB) by default.
# Example:
# defaultSizeGB: 25.5
defaultSizeGB: 25
filesystem:
# persistent data directory in container
dataDir: # e.g. "/data"
# RELATIVE path to the node/file content that could be mounted in other containers
subPath: # e.g. "cavern"
# See https://github.com/opencadc/vos/tree/master/cavern for documentation. For deployments using OpenID Connect,
# the rootOwner MUST be an object with the following properties set.
rootOwner:
# The adminUsername is required and must be set to whoever has admin access over the filesystem.dataDir above.
adminUsername:
# The username of the root owner.
username:
# The UID of the root owner.
uid:
# The GID of the root owner.
gid:
# API keys for client applications that are permitted to perform administrative tasks. These are passed as authorization headers to the Cavern API.
# The token values will be used by client applications, and each client matching a clientApplicationName should be configured with the matching token.
# Format is <clientApplicationName>: <apiKeyToken>
# Example:
# adminAPIKeys:
# skaha: "token-value"
# prepareData: "another-token-value"
adminAPIKeys:
prepareData: "32fjd93jfn93n3nFjsl293jfn93jf="
skaha: "88shdj3en1rBuMVSllWVuuz190HJpF="
# Further UWS settings for the Tomcat Pool setup. To use an external database, set uws.db.install to false and set the uws.db.url property, with authentication.
uws:
db:
install: true
schema: uws
maxActive: 2
# Optional rename of the application from the default "cavern"
# applicationName: "cavern"
# The endpoint to serve this from. Defaults to /cavern. If the applicationName is changed, then this should match.
# Don't forget to update your registry entries!
#
# endpoint: "/cavern"
# Simple Class name of the QuotaPlugin to use. This is used to request quota and folder size information
# from the underlying storage system. Optional, defaults to NoQuotaPlugin.
#
# - For CephFS deployments: CephFSQuotaPlugin
# - Default: NoQuotaPlugin
#
# quotaPlugin: {NoQuotaPlugin | CephFSQuotaPlugin}
# Optionally set the DEBUG port.
#
# Example:
# extraEnv:
# - name: CATALINA_OPTS
# value: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=0.0.0.0:5555"
# - name: JAVA_OPTS
# value: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=0.0.0.0:5555"
#
# extraEnv:
# Optionally mount a custom CA certificate
# Example:
# extraVolumeMounts:
# - mountPath: "/config/cacerts"
# name: cacert-volume
#
# extraVolumeMounts:
# Create the CA certificate volume to be mounted in extraVolumeMounts
# Example:
# extraVolumes:
# - name: cacert-volume
# secret:
# defaultMode: 420
# secretName: cavern-cacert-secret
#
# extraVolumes:
# Other data to be included in the main ConfigMap of this deployment.
# Of note, files that end in .key are special and base64 decoded.
#
# extraConfigData:
# Resources provided to the Cavern service.
resources:
requests:
memory: "1Gi"
cpu: "500m"
limits:
memory: "1Gi"
cpu: "500m"
# Optionally describe how this Pod will be scheduled using the nodeAffinity clause. This applies to Cavern.
# See https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
# Example:
nodeAffinity:
# Only allow Cavern to run on specific Nodes.
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- my-special-api-host
# Specify extra hostnames that will be added to the Pod's /etc/hosts file. Note that this is in the
# deployment object, not the cavern one.
#
# These entries get added as hostAliases entries to the Deployment.
#
# Example:
# extraHosts:
# - ip: 127.3.34.5
# hostname: myhost.example.org
#
# extraHosts: []
# secrets:
# Uncomment to enable local or self-signed CA certificates for your domain to be trusted.
# cavern-cacert-secret:
# ca.crt: <base64 encoded CA crt>
# Set these appropriately to match your Persistent Volume Claim labels.
storage:
service:
spec:
# YAML for service mounted storage.
# Example is the persistentVolumeClaim below. This should match whatever Skaha used.
# persistentVolumeClaim:
# claimName: skaha-pvc
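Then deploy Cavern with the same command used in the Quick Start, and optionally confirm the service responds (the capabilities endpoint shown below is an assumption based on the standard IVOA pattern used by the other services in this guide):
helm -n skaha-system upgrade --install --values my-cavern-local-values-file.yaml cavern science-platform/cavern
# Optional sanity check once the Pod is running:
curl -SsL https://example.host.com/cavern/capabilities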
Kueue install¶
Kueue is a Kubernetes-native job queuing and scheduling system that enhances the management of workloads in a Kubernetes cluster. It provides advanced features for job scheduling, resource management, and workload prioritization, making it particularly useful for environments where efficient resource utilization and job handling are critical. Kueue is optional, but highly recommended for production deployments.
See https://kueue.sigs.k8s.io/docs/installation/#install-a-released-version for details.
Helm¶
Install CRDs and the various components:
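For reference, the released Kueue Helm chart is typically installed along these lines (the version below is a placeholder; use the release recommended by the Kueue documentation):
helm install kueue oci://registry.k8s.io/kueue/charts/kueue \
  --version="0.8.1" \
  --namespace kueue-system \
  --create-namespace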
ClusterQueue¶
A ClusterQueue is a global resource in Kueue that defines a set of resources and policies for managing workloads across the entire Kubernetes cluster. It serves as a template for creating and managing LocalQueues, which are specific to namespaces.
An example ClusterQueue has been added to the Skaha Example Kueue ClusterQueue file. Adjust as needed.
Create the ClusterQueue, with the associated ResourceFlavor and WorkloadPriorityClass objects:
cp ../skaha/kueue/examples/clusterQueue.config.yaml ./
# Edit as needed.
kubectl apply -f ./clusterQueue.config.yaml
LocalQueue¶
A LocalQueue is a namespace-specific resource in Kueue that allows for the management of workloads within a particular Kubernetes namespace. It provides a way to define how jobs are queued, prioritized, and scheduled based on the policies set in the associated ClusterQueue.
An example LocalQueue has been added to the Skaha Example Kueue LocalQueue file. Adjust as needed, but it needs to exist in the workload namespace (skaha-workload by default).
Create the LocalQueue in the skaha-workload namespace:
cp ../skaha/kueue/examples/localQueue.config.yaml ./
# Edit as needed.
kubectl apply -f ./localQueue.config.yaml
RBAC¶
The Science Platform (skaha) requires certain RBAC permissions to operate correctly within the Kubernetes cluster. These permissions allow Kueue to manage resources, schedule jobs, and interact with other components in the cluster.
An example RBAC configuration has been added to the Skaha Example Kueue RBAC file. Adjust as needed.
The example file contains rules that allow the skaha system to query for the existence of any configured LocalQueues to ensure integrity, as well
as permitting Jobs to be scheduled into them from the Workload Namespace.
Create the RBAC objects in the skaha-system namespace:
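Following the same pattern as the ClusterQueue and LocalQueue steps (the file name below is an assumption; substitute the actual example RBAC file from the chart):
cp ../skaha/kueue/examples/rbac.config.yaml ./
# Edit as needed.
kubectl apply -f ./rbac.config.yaml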
Skaha install¶
The Skaha service will manage User Sessions. It relies on the POSIX Mapper being deployed and discoverable via the IVOA Registry:
/reg/resource-caps
...
# Ensure the hostname matches the deployment hostname.
ivo://example.org/posix-mapper = https://example.host.com/posix-mapper/capabilities
...
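A quick way to confirm the entry is visible (adjust the registry host to your deployment):
curl -SsL https://example.org/reg/resource-caps | grep posix-mapper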
Create a my-skaha-local-values-file.yaml file to override Values from the main template values.yaml file.
my-skaha-local-values-file.yaml
# Skaha web service deployment
deployment:
hostname: example.org
skaha:
# Optionally set the DEBUG port.
# extraEnv:
# - name: CATALINA_OPTS
# value: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=0.0.0.0:5555"
# - name: JAVA_OPTS
# value: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=0.0.0.0:5555"
# Uncomment to debug. Requires options above as well as service port exposure below.
# extraPorts:
# - containerPort: 5555
# protocol: TCP
defaultQuotaGB: "10"
# Space delimited list of allowed Image Registry hosts. These hosts should match the hosts in the User Session images.
registryHosts: "images.canfar.net"
# The IVOA GMS Group URI to verify users against for permission to use the Science Platform.
# See https://www.ivoa.net/documents/GMS/20220222/REC-GMS-1.0.html#tth_sEc3.2
usersGroup: "ivo://example.org/gms?skaha-platform-users"
# The IVOA GMS Group URI to verify images without contacting Harbor.
# See https://www.ivoa.net/documents/GMS/20220222/REC-GMS-1.0.html#tth_sEc3.2
adminsGroup: "ivo://example.org/gms?skaha-admin-users"
# The IVOA GMS Group URI to verify users against for permission to run headless jobs.
# See https://www.ivoa.net/documents/GMS/20220222/REC-GMS-1.0.html#tth_sEc3.2
headlessGroup: "ivo://example.org/gms?skaha-headless-users"
# The IVOA GMS Group URI to verify users against that have priority for their headless jobs.
# See https://www.ivoa.net/documents/GMS/20220222/REC-GMS-1.0.html#tth_sEc3.2
headlessPriorityGroup: "ivo://example.org/gms?skaha-priority-headless-users"
# Class name to set for priority headless jobs.
headlessPriorityClass: "uber-user-vip"
# Array of GMS Group URIs allowed to set the logging level. If none set, then nobody can change the log level.
# See https://www.ivoa.net/documents/GMS/20220222/REC-GMS-1.0.html#tth_sEc3.2 for GMS Group URIs
# See https://github.com/opencadc/core/tree/main/cadc-log for Logging control
loggingGroups:
- "ivo://example.org/gms?skaha-logging-admin-users"
# The Resource ID (URI) of the Service that contains the Posix Mapping information
posixMapperResourceID: "ivo://example.org/posix-mapper"
# URI or URL of the OIDC (IAM) server. Used to validate incoming tokens.
oidcURI: https://iam.example.org/
# The Resource ID (URI) of the GMS Service.
gmsID: ivo://example.org/gms
# The absolute URL of the IVOA Registry where services are registered
registryURL: https://example.org/reg
# Optionally describe how this Pod will be scheduled using the nodeAffinity clause. This applies to Skaha itself.
# Note the different indentation level compared to the sessions.nodeAffinity.
# See https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
# See the [Sample Skaha Values file](skaha/sample-local-values.yaml).
# Example:
nodeAffinity:
# Only allow Skaha to run on specific Nodes.
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- my-special-node-host
# Settings for User Sessions. Sensible defaults supplied, but can be overridden.
# For units of storage, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory.
sessions:
expirySeconds: "345600" # Duration, in seconds, until they expire and are shut down.
maxCount: "3" # Max number of sessions per user.
minEphemeralStorage: "20Gi" # The initial requested amount of ephemeral (local) storage. Does NOT apply to Desktop sessions.
maxEphemeralStorage: "200Gi" # The maximum amount of ephemeral (local) storage to allow a Session to extend to. Does NOT apply to Desktop sessions.
# Optionally set up a separate host for User Sessions for Skaha to redirect to. The HTTPS scheme is assumed. Defaults to the Skaha hostname (.Values.deployment.hostname).
# Example:
# hostname: myhost.example.org
hostname: sessions.example.org
# When set to 'true' this flag will enable GPU node scheduling. Don't forget to declare any related GPU configurations, if appropriate, in the nodeAffinity below!
# gpuEnabled: false
# Optionally set the node label selector to identify Kubernetes Worker Nodes. This is used to accurately query for available
# resources from schedulable Nodes by eliminating, for example, Nodes that are only cordoned for Web APIs.
# Example:
# nodeLabelSelector: "node-role.kubernetes.io/node-type=worker"
#
# Example (multiple labels ANDed):
# nodeLabelSelector: "node-role.kubernetes.io/node-type=worker,environment=production"
#
#
# Example (multiple labels ORed):
# nodeLabelSelector: "node-role.kubernetes.io/node-type in (worker,worker-gpu)"
nodeLabelSelector:
userStorage:
persistentVolumeClaimName: skaha-workload-cavern-pvc
nodeURIPrefix: "vos://canfar.net~src~cavern"
serviceURI: "ivo://canfar.net/src/cavern"
admin:
auth:
apiKey: "88shdj3en1rBuMVSllWVuuz190HJpF="
# Set the YAML that will go into the "affinity.nodeAffinity" stanza for Pod Spec in User Sessions. This can be used to enable GPU scheduling, for example,
# or to control how and where User Session Pods are scheduled. This can be potentially dangerous unless you know what you are doing.
# See https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity
# nodeAffinity: {}
# Mount CVMFS from the Node's mounted path into all User Sessions.
extraVolumes:
- name: cvmfs-mount
volume:
type: HOST_PATH # HOST_PATH is for host path
hostPath: "/cvmfs" # Path on the Node to look for a source folder
hostPathType: Directory
volumeMount:
mountPath: "/cvmfs" # Path to mount on the User Sesssion Pod.
readOnly: false
mountPropagation: HostToContainer
# Kueue configurations for User Sessions
kueue:
default:
# Ensure this name matches whatever was created as the LocalQueue in the workload namespace.
queueName: canfar-science-platform-local-queue
priorityClass: low
# Optionally mount a custom CA certificate as an extra mount in Skaha (*not* user sessions)
# extraVolumeMounts:
# - mountPath: "/config/cacerts"
# name: cacert-volume
# Create the CA certificate volume to be mounted in extraVolumeMounts
# extraVolumes:
# - name: cacert-volume
# secret:
# defaultMode: 420
# secretName: skaha-cacert-secret
# Other data to be included in the main ConfigMap of this deployment.
# Of note, files that end in .key are special and base64 decoded.
#
# extraConfigData:
# Resources provided to the Skaha service.
# For units of storage, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory.
resources:
requests:
memory: "1Gi"
cpu: "500m"
limits:
memory: "1500Mi"
cpu: "750m"
# Specify extra hostnames that will be added to the Pod's /etc/hosts file. Note that this is in the
# deployment object, not the skaha one.
#
# These entries get added as hostAliases entries to the Deployment.
#
# Example:
# extraHosts:
# - ip: 127.3.34.5
# hostname: myhost.example.org
#
# extraHosts: []
# Enable experimental feature flags
experimentalFeatures:
enabled: true
sessionLimitRange:
enabled: true
rbac:
create: false
limitSpec:
max: # maximum allowed requested
memory: "192Gi"
cpu: "16"
nvidia.com/gpu: "1"
default: # actually refers to default limits
memory: "24Gi"
cpu: "4"
nvidia.com/gpu: "0"
defaultRequest: # default requests
memory: "4Gi"
cpu: "1"
nvidia.com/gpu: "0"
secrets:
# Uncomment to enable local or self-signed CA certificates for your domain to be trusted.
# skaha-cacert-secret:
# ca.crt: <base64 encoded ca crt>
helm -n skaha-system upgrade --install --values my-skaha-local-values-file.yaml skaha science-platform/skaha
NAME: skaha
LAST DEPLOYED: Thu Oct 31 02:01:10 2025
NAMESPACE: skaha-system
STATUS: deployed
REVISION: 8
Test it.
# See below for tokens
export SKA_TOKEN=...
curl -SsL --header "Authorization: Bearer ${SKA_TOKEN}" https://example.host.com/skaha/v1/session
[]%
# xxxxxx is the returned session ID.
curl -SsL --header "Authorization: Bearer ${SKA_TOKEN}" -d "ram=1" -d "cores=1" -d "image=images.canfar.net/canucs/canucs:1.2.5" -d "name=myjupyternotebook" "https://example.host.com/skaha/v1/session"
Science Portal User Interface install¶
The Science Portal provides the web user interface for creating and managing User Sessions. It relies on the Skaha service being deployed and discoverable via the IVOA Registry:
/reg/resource-caps
...
# Ensure the hostname matches the deployment hostname.
ivo://example.org/skaha = https://example.host.com/skaha/capabilities
...
Create a my-science-portal-local-values-file.yaml file to override Values from the main template values.yaml file.
my-science-portal-local-values-file.yaml
deployment:
hostname: example.org
sciencePortal:
# The Resource ID of the Service that contains the URL of the Skaha service in the IVOA Registry
skahaResourceID: ivo://example.org/skaha
# OIDC (IAM) server configuration. These are required
# oidc:
#
# Location of the OpenID Provider (OIdP), and where users will login
# uri: https://iam.example.org/
# The Client ID as listed on the OIdP. Create one at the uri above.
# clientID: my-client-id
# The Client Secret, which should be generated by the OIdP.
# clientSecret: my-client-secret
# Where the OIdP should send the User after successful authentication. This is also known as the redirect_uri in OpenID.
# redirectURI: https://example.com/science-portal/oidc-callback
# Where to redirect to after the redirectURI callback has completed. This will almost always be the URL to the /science-portal main page (https://example.com/science-portal).
# callbackURI: https://example.com/science-portal/
# The standard OpenID scopes for token requests. This is required, and if using the SKAO IAM, can be left as-is.
# scope: "openid profile offline_access"
# Optionally mount a custom CA certificate
# extraVolumeMounts:
# - mountPath: "/config/cacerts"
# name: cacert-volume
# Create the CA certificate volume to be mounted in extraVolumeMounts
# extraVolumes:
# - name: cacert-volume
# secret:
# defaultMode: 420
# secretName: science-portal-cacert-secret
# The theme name for styling.
# src: The SRCNet theme
# canfar: The CANFAR theme for internal CADC deployment
# themeName: {src | canfar}
# Labels on the tabs
# Default:
# tabLabels:
# - Public
# - Advanced
# tabLabels: []
# Other data to be included in the main ConfigMap of this deployment.
# Of note, files that end in .key are special and base64 decoded.
#
# extraConfigData:
# Resources provided to the Science Portal service.
resources:
requests:
memory: "1Gi"
cpu: "500m"
limits:
memory: "1500Mi"
cpu: "750m"
# Optionally describe how this Pod will be scheduled using the nodeAffinity clause. This applies to Science Portal itself.
# See https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
# Example:
nodeAffinity:
# Only allow Science Portal to run on specific Nodes.
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- my-special-ui-host
experimentalFeatures:
enabled: true
slider:
enabled: true
# Specify extra hostnames that will be added to the Pod's /etc/hosts file. Note that this is in the
# deployment object, not the sciencePortal one.
#
# These entries get added as hostAliases entries to the Deployment.
#
# Example:
# extraHosts:
# - ip: 127.3.34.5
# hostname: myhost.example.org
#
# extraHosts: []
# secrets:
# Uncomment to enable local or self-signed CA certificates for your domain to be trusted.
# science-portal-cacert-secret:
# ca.crt: <base64 encoded ca.crt blob>
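Install the Science Portal chart (as in the Quick Start, using the values file created above):
helm -n skaha-system upgrade --install --values my-science-portal-local-values-file.yaml science-portal science-platform/scienceportal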
User Storage UI install¶
Create a my-storage-ui-local-values-file.yaml file to override Values from the main template values.yaml file.
my-storage-ui-local-values-file.yaml
deployment:
hostname: example.org
storageUI:
# OIDC (IAM) server configuration. These are required
oidc:
# Location of the OpenID Provider (OIdP), and where users will login
uri: https://iam.example.org/
# The Client ID as listed on the OIdP. Create one at the uri above.
clientID: my-client-id
# The Client Secret, which should be generated by the OIdP.
clientSecret: my-client-secret
# Where the OIdP should send the User after successful authentication. This is also known as the redirect_uri in OpenID.
redirectURI: https://example.com/science-portal/oidc-callback
# Where to redirect to after the redirectURI callback has completed. This will almost always be the URL to the Storage UI main page.
callbackURI: https://example.com/science-portal/
# The standard OpenID scopes for token requests. This is required, and if using the SKAO IAM, can be left as-is.
scope: "openid profile offline_access"
# ID (URI) of the GMS Service.
gmsID: ivo://example.org/gms
# Dictionary of all VOSpace APIs (Services) available that will be visible on the UI.
# Format is:
backend:
defaultService: cavern
services:
cavern:
resourceID: "ivo://example.org/cavern"
nodeURIPrefix: "vos://example.org~cavern"
userHomeDir: "/home"
# Some VOSpace services support these features. Cavern does not, but this must be declared explicitly here.
features:
batchDownload: false
batchUpload: false
externalLinks: false
paging: false
# Optionally mount a custom CA certificate
# extraVolumeMounts:
# - mountPath: "/config/cacerts"
# name: cacert-volume
# Create the CA certificate volume to be mounted in extraVolumeMounts
# extraVolumes:
# - name: cacert-volume
# secret:
# defaultMode: 420
# secretName: storage-ui-cacert-secret
# Other data to be included in the main ConfigMap of this deployment.
# Of note, files that end in .key are special and base64 decoded.
#
# extraConfigData:
# Resources provided to the StorageUI service.
resources:
requests:
memory: "1Gi"
cpu: "500m"
limits:
memory: "1500Mi"
cpu: "750m"
# Optionally describe how this Pod will be scheduled using the nodeAffinity clause. This applies to the Storage UI Pod(s).
# See https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
# Example:
nodeAffinity:
# Only allow Storage UI to run on specific Nodes.
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- my-special-ui-host
# Specify extra hostnames that will be added to the Pod's /etc/hosts file. Note that this is in the
# deployment object, not the storageUI one.
#
# These entries get added as hostAliases entries to the Deployment.
#
# Example:
# extraHosts:
# - ip: 127.3.34.5
# hostname: myhost.example.org
#
# extraHosts: []
# secrets:
# Uncomment to enable local or self-signed CA certificates for your domain to be trusted.
# storage-ui-cacert-secret:
# ca.crt: <base64 encoded ca.crt blob>
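Install the Storage UI chart; note that it comes from the science-platform-client repository added in the Quick Start:
helm -n skaha-system upgrade --install --values my-storage-ui-local-values-file.yaml storage-ui science-platform-client/storageui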
Browser Authentication¶
Encrypted cookies are used to gain access to a Bearer Token for API access. These cookies are managed by the browser-based applications (Storage UI and Science Portal). See the Browser Authentication documentation for details.
Obtaining a Bearer Token¶
See the JIRA Confluence page on obtaining a Bearer Token.
Flow¶

The Skaha service depends on several installations being in place.
Structure¶
