This section describes how you can configure in-VPC execution of jobs.

By default, the product runs in the default network, and no additional configuration is required. Optionally, you can configure jobs to be executed within your VPC. When enabled, data remains in your VPC during the full execution of the job.

NOTE: Previewing and sampling use the default network settings.
To enable in-VPC execution, the VPC network mode must be set to custom, and additional VPC properties must be provided. In-VPC job execution can be configured per-user or per-output:

- Per-user: In-VPC execution can be enabled through your user preferences.
- Per-output: For more information, see Dataflow Execution Settings.

NOTE: Per-output settings override any settings specified in your preferences.
By default, Trifacta Photon and connectivity jobs execute in the default VPC. As needed, you can configure these jobs to run in your VPC.

Job Type | Description
---|---
Trifacta Photon | These jobs are transformation and quick scan sampling jobs that execute in memory. This type of job execution is suitable for small- to medium-sized jobs.
Connectivity | If your data source or publishing target is a relational or API-based source, some or all of the job occurs through the connectivity framework.
For these two job types, there are two types of configuration:
Configuration Type | Description |
---|---|
Basic | Uses the GKE default namespace and default node pool. |
Advanced | User-configured GKE namespace and user-specified node pool. |
Details on these configuration methods are provided below.
The following limitations apply to this release. These limitations may change in the future:
Before you begin, please verify that your VPC environment has the following:

- A GKE cluster is available for Trifacta Photon and connectivity jobs to use.
- `gcloud` command line interface (CLI)
- `kubectl`
- `openssl`
- `base64`
- The IP address for authorized control plane access, acquired from the vendor (referenced below as `w.x.y.z`).
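As a quick check that the listed CLI tools are available in your shell (a minimal sketch; this guide does not pin specific versions):

```
gcloud version
kubectl version --client
openssl version
```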
In-VPC execution must be enabled by an administrator. See Dataprep Project Settings Page.
Please complete the following steps for the Basic configuration.
This Service Account is assigned to the nodes in the GKE node pool and is configured to have minimal privileges.
The following variables are used in the configuration steps. You can modify them based on your requirements and supported values:
Variable | Description
---|---
trifacta-service-account | Default service account name
myproject | Name of your Google Cloud project
myregion | Your Google Cloud region
Please execute the following commands from the `gcloud` CLI:
```
gcloud iam service-accounts create trifacta-service-account \
  --display-name="Service Account for running Trifacta Remote jobs"

gcloud projects add-iam-policy-binding myproject \
  --member "serviceAccount:trifacta-service-account@myproject.iam.gserviceaccount.com" \
  --role roles/artifactregistry.reader

gcloud projects add-iam-policy-binding myproject \
  --member "serviceAccount:trifacta-service-account@myproject.iam.gserviceaccount.com" \
  --role roles/logging.logWriter

gcloud projects add-iam-policy-binding myproject \
  --member "serviceAccount:trifacta-service-account@myproject.iam.gserviceaccount.com" \
  --role roles/monitoring.metricWriter

gcloud projects add-iam-policy-binding myproject \
  --member "serviceAccount:trifacta-service-account@myproject.iam.gserviceaccount.com" \
  --role roles/monitoring.viewer

gcloud projects add-iam-policy-binding myproject \
  --member "serviceAccount:trifacta-service-account@myproject.iam.gserviceaccount.com" \
  --role roles/stackdriver.resourceMetadata.writer
```
Verification steps:
Command:
```
gcloud projects get-iam-policy myproject \
  --flatten="bindings[].members" \
  --format="table(bindings.role)" \
  --filter="bindings.members:serviceAccount:trifacta-service-account@myproject.iam.gserviceaccount.com"
```
The output should look like the following:
```
ROLE
roles/artifactregistry.reader
roles/logging.logWriter
roles/monitoring.metricWriter
roles/monitoring.viewer
roles/stackdriver.resourceMetadata.writer
```
The following configuration is required for Internet access to acquire job assets, if the GKE cluster has private nodes.
```
gcloud compute routers create myproject-myregion \
  --network myproject-network \
  --region=myregion

gcloud compute routers nats create myproject-myregion \
  --router=myproject-myregion \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges \
  --enable-logging
```
Verification Steps:
You can verify that the router NAT was created in the Console: https://console.cloud.google.com/net-services/nat/list.
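Alternatively, from the CLI (using the router name created above):

```
gcloud compute routers nats list --router=myproject-myregion --region=myregion
```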
This configuration creates the GKE cluster for use in executing jobs. This cluster must be created in the VPC/sub-network that has access to your data sources, such as your databases and Cloud Storage buckets.
In the following, please replace `w.x.y.z` with the IP address provided to you by the vendor for authorized control plane access. For the list of zones available for `node-locations`, please see https://console.cloud.google.com/compute/zones.

```
gcloud container clusters create "trifacta-cluster" \
  --project "myproject" \
  --region "myregion" \
  --no-enable-basic-auth \
  --cluster-version "1.20.8-gke.900" \
  --release-channel "None" \
  --machine-type "n1-standard-16" \
  --image-type "COS_CONTAINERD" \
  --disk-type "pd-standard" \
  --disk-size "100" \
  --metadata disable-legacy-endpoints=true \
  --service-account "trifacta-service-account@myproject.iam.gserviceaccount.com" \
  --max-pods-per-node "110" \
  --num-nodes "1" \
  --logging=SYSTEM,WORKLOAD \
  --monitoring=SYSTEM \
  --enable-ip-alias \
  --network "projects/myproject/global/networks/myproject-network" \
  --subnetwork "projects/myproject/regions/myregion/subnetworks/myproject-subnet-myregion" \
  --no-enable-intra-node-visibility \
  --default-max-pods-per-node "110" \
  --enable-autoscaling \
  --min-nodes "0" \
  --max-nodes "3" \
  --enable-master-authorized-networks \
  --master-authorized-networks w.x.y.z/32 \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
  --no-enable-autoupgrade \
  --enable-autorepair \
  --max-surge-upgrade 1 \
  --max-unavailable-upgrade 0 \
  --workload-pool "myproject.svc.id.goog" \
  --enable-private-nodes \
  --enable-shielded-nodes \
  --shielded-secure-boot \
  --node-locations "myregion-a","myregion-b","myregion-c" \
  --master-ipv4-cidr=10.1.0.0/28 \
  --enable-binauthz
```
Verification Steps:
You can verify that the cluster was created through the Console: https://console.cloud.google.com/kubernetes/list/overview.
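Alternatively, from the CLI (a healthy cluster reports RUNNING):

```
gcloud container clusters list --region myregion
gcloud container clusters describe trifacta-cluster --region myregion --format="value(status)"
```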
Use the following command to set up configuration to connect to the new cluster:
```
gcloud container clusters get-credentials trifacta-cluster --region myregion --project myproject
```
The following commands whitelist the Cloud shell for use on the cluster:
Get the IP for the shell instance:
```
dig +short myip.opendns.com @resolver1.opendns.com
```
Modify the authorized networks to include the IP. You must re-add the shell IP each time, since the Cloud Shell IP addresses are not static and this update replaces the entire authorized list.

```
gcloud container clusters update trifacta-cluster \
  --region myregion \
  --enable-master-authorized-networks \
  --master-authorized-networks 34.68.114.64/28,192.77.238.35/32,34.75.7.151/32
```
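Because the shell IP changes between sessions, a small wrapper like the following can help (a sketch; replace the fixed `w.x.y.z/32` entry with your own authorized networks):

```
# Capture the current shell egress IP and re-apply the full authorized list.
SHELL_IP=$(dig +short myip.opendns.com @resolver1.opendns.com)
gcloud container clusters update trifacta-cluster \
  --region myregion \
  --enable-master-authorized-networks \
  --master-authorized-networks w.x.y.z/32,${SHELL_IP}/32
```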
After you have acquired access, create the service accounts and roles described in the following sections.
For the Basic configuration, the `default` node pool is used. No additional configuration is required.

For the Basic configuration, the `default` namespace is used. No additional configuration is required.
Variable | Description
---|---
trifacta-job-runner | Service Account used to submit and manage jobs on the GKE cluster.
trifacta-pod-sa | Service Account assigned to the job pod running in the GKE cluster.
Please execute the following commands:
```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
  namespace: default
  name: trifacta-job-runner
EOF
```

```
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: trifacta-job-runner-role
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "create", "delete", "watch"]
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["list", "get"]
EOF
```

```
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: trifacta-job-runner-rb
subjects:
- kind: ServiceAccount
  name: trifacta-job-runner
  namespace: default
roleRef:
  kind: Role
  name: trifacta-job-runner-role
  apiGroup: rbac.authorization.k8s.io
EOF
```

```
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-list-role
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
EOF
```

```
cat <<EOF | kubectl apply -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-list-rb
subjects:
- kind: ServiceAccount
  name: trifacta-job-runner
  namespace: default
roleRef:
  kind: ClusterRole
  name: node-list-role
  apiGroup: rbac.authorization.k8s.io
EOF
```

```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
  name: trifacta-pod-sa
EOF
```
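To confirm the accounts and RBAC bindings behave as intended, `kubectl auth can-i` can impersonate the job-runner account (a verification sketch using the names created above):

```
# Both service accounts should exist in the default namespace.
kubectl get serviceaccounts trifacta-job-runner trifacta-pod-sa -n default

# The job runner should be able to create batch jobs...
kubectl auth can-i create jobs \
  --as=system:serviceaccount:default:trifacta-job-runner -n default

# ...and list nodes via the cluster role.
kubectl auth can-i list nodes \
  --as=system:serviceaccount:default:trifacta-job-runner
```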
The following commands create the encryption keys for credentials:
```
openssl genrsa -out private_key.pem 2048
openssl pkcs8 -topk8 -inform PEM -outform DER -in private_key.pem -out private_key.der -nocrypt
openssl rsa -in private_key.pem -pubout -outform DER -out public_key.der
base64 -i public_key.der > public_key.der.base64
base64 -i private_key.der > private_key.der.base64

kubectl create secret generic trifacta-credential-encryption -n default \
  --from-file=privateKey=private_key.der.base64
```
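As a sanity check, you can confirm that the generated key parses and that the secret exists (a minimal sketch):

```
# Verify the DER-encoded public key parses.
openssl rsa -pubin -inform DER -in public_key.der -noout && echo "public key OK"

# Confirm the secret exists in the default namespace.
kubectl get secret trifacta-credential-encryption -n default
```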
After you have completed the above configuration, you must configure the Dataprep project settings based on the commands that you have executed.
Steps:
Please complete the following configuration for each tab.
Kubernetes cluster tab:
Setting | Command or Value
---|---
Master URL | Command: returns a URL for the cluster master endpoint. See the sketch after this table.
OAuth token | Command: returns the access token for the `trifacta-job-runner` service account. See the sketch after this table.
Cluster CA certificate | Command: returns the cluster CA certificate. See the sketch after this table.
Service account name | Value: trifacta-job-runner
Public key (optional) | Insert the contents of `public_key.der.base64`. To acquire this value: `cat public_key.der.base64`
Private key secret name (optional) | Value: trifacta-credential-encryption
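The exact commands for these cells are not reproduced in this guide. The following sketch shows one plausible way to retrieve each value, assuming the names used in the preceding steps (cluster `trifacta-cluster`, region `myregion`, service account `trifacta-job-runner` in the `default` namespace, and a Kubernetes version such as the 1.20.x used above, which still auto-creates service account token secrets):

```
# Master URL: the cluster endpoint (prefix with https:// when pasting into the setting).
gcloud container clusters describe trifacta-cluster --region myregion \
  --format="value(endpoint)"

# OAuth token: decode the token from the service account's token secret.
kubectl get secret \
  $(kubectl get serviceaccount trifacta-job-runner -n default -o jsonpath='{.secrets[0].name}') \
  -n default -o jsonpath='{.data.token}' | base64 --decode

# Cluster CA certificate: returned base64-encoded.
gcloud container clusters describe trifacta-cluster --region myregion \
  --format="value(masterAuth.clusterCaCertificate)"
```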
Photon tab:
Setting | Command or Value
---|---
Namespace | Value: default (the Basic configuration uses the `default` namespace). To list the namespaces on the cluster, see the sketch after this table.
CPU, memory - request, limits | Adjust as needed.
Node selector, tolerations | Values: none required for the Basic configuration, which uses the untainted `default` node pool.
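A quick way to confirm the namespace value (`default` is expected for the Basic configuration):

```
kubectl get namespaces
```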
Connectivity/DataSystem tab:
Setting | Command or Value
---|---
Namespace | data-system-job-namespace
CPU, memory - request, limits | Adjust defaults, if necessary.
Node selector, tolerations | Adjust as needed.
After you have tested and saved your configuration, you should be able to run a job in your VPC. See "Testing" below.
Google access tokens are valid for 1 hour. Jobs that read from or write to relational systems can be long-running. To protect against timeouts during these jobs and to support recommended security practices, the product supports the use of Workload Identity, which is Google's recommended approach for accessing Google APIs.
NOTE: Workload Identity requires the use of Companion Service Accounts. Each user in your project must have a Companion Service Account assigned.
For each Companion Service Account assigned to a user in the project:
A new Kubernetes ServiceAccount must be created on the GKE cluster.
NOTE: This step must be completed by your administrator.
Suppose a Companion Service Account named `allAccess@myproject.iam.gserviceaccount.com` already exists:

```
# Create a new Kubernetes ServiceAccount on the GKE cluster with an annotation to bind it
# to the allAccess@myproject.iam.gserviceaccount.com Companion ServiceAccount.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
  annotations:
    iam.gke.io/gcp-service-account: allAccess@myproject.iam.gserviceaccount.com
  name: trifacta-pod-sa-allaccess
EOF

# Allow the Kubernetes ServiceAccount to impersonate the Google IAM ServiceAccount by adding
# an IAM policy binding between the two service accounts. This binding allows the Kubernetes
# ServiceAccount to act as the IAM ServiceAccount.
gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:myproject.svc.id.goog[default/trifacta-pod-sa-allaccess]" \
  allAccess@myproject.iam.gserviceaccount.com
```
Wait a couple of minutes for the binding to take effect.
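To confirm that the binding took effect, you can inspect the IAM policy on the Companion Service Account (a sketch using the example account above; the output should include roles/iam.workloadIdentityUser for the Kubernetes ServiceAccount member):

```
gcloud iam service-accounts get-iam-policy allAccess@myproject.iam.gserviceaccount.com \
  --flatten="bindings[]" \
  --format="table(bindings.role, bindings.members)"
```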
NOTE: For relational connectivity, additional configuration is required. See the product documentation for details.
Please complete the following steps for the Advanced setup. These steps allow you to specify:

- a user-configured GKE namespace
- a user-specified node pool
This Service Account is assigned to the nodes in the GKE node pool and is configured to have minimal privileges.
The following variables are used in the configuration steps. You can modify them based on your requirements and supported values:
Variable | Description
---|---
trifacta-service-account | Default service account name
myproject | Name of your Google Cloud project
myregion | Your Google Cloud region
Please execute the following commands from the `gcloud` CLI:
```
gcloud iam service-accounts create trifacta-service-account \
  --display-name="Service Account for running Trifacta Remote jobs"

gcloud projects add-iam-policy-binding myproject \
  --member "serviceAccount:trifacta-service-account@myproject.iam.gserviceaccount.com" \
  --role roles/artifactregistry.reader

gcloud projects add-iam-policy-binding myproject \
  --member "serviceAccount:trifacta-service-account@myproject.iam.gserviceaccount.com" \
  --role roles/logging.logWriter

gcloud projects add-iam-policy-binding myproject \
  --member "serviceAccount:trifacta-service-account@myproject.iam.gserviceaccount.com" \
  --role roles/monitoring.metricWriter

gcloud projects add-iam-policy-binding myproject \
  --member "serviceAccount:trifacta-service-account@myproject.iam.gserviceaccount.com" \
  --role roles/monitoring.viewer

gcloud projects add-iam-policy-binding myproject \
  --member "serviceAccount:trifacta-service-account@myproject.iam.gserviceaccount.com" \
  --role roles/stackdriver.resourceMetadata.writer
```
Verification steps:
Command:
```
gcloud projects get-iam-policy myproject \
  --flatten="bindings[].members" \
  --format="table(bindings.role)" \
  --filter="bindings.members:serviceAccount:trifacta-service-account@myproject.iam.gserviceaccount.com"
```
The output should look like the following:
```
ROLE
roles/artifactregistry.reader
roles/logging.logWriter
roles/monitoring.metricWriter
roles/monitoring.viewer
roles/stackdriver.resourceMetadata.writer
```
The following configuration is required for Internet access to acquire job assets, if the GKE cluster has private nodes.
```
gcloud compute routers create myproject-myregion \
  --network myproject-network \
  --region=myregion

gcloud compute routers nats create myproject-myregion \
  --router=myproject-myregion \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges \
  --enable-logging
```
Verification Steps:
You can verify that the router NAT was created in the Console: https://console.cloud.google.com/net-services/nat/list.
This configuration creates the GKE cluster for use in executing jobs. This cluster must be created in the VPC/sub-network that has access to your data sources, such as your databases and Cloud Storage buckets.
In the following, please replace `w.x.y.z` with the IP address provided to you by the vendor for authorized control plane access. For the list of zones available for `node-locations`, please see https://console.cloud.google.com/compute/zones.

```
gcloud container clusters create "trifacta-cluster" \
  --project "myproject" \
  --region "myregion" \
  --no-enable-basic-auth \
  --cluster-version "1.20.8-gke.900" \
  --release-channel "None" \
  --machine-type "n1-standard-16" \
  --image-type "COS_CONTAINERD" \
  --disk-type "pd-standard" \
  --disk-size "100" \
  --metadata disable-legacy-endpoints=true \
  --service-account "trifacta-service-account@myproject.iam.gserviceaccount.com" \
  --max-pods-per-node "110" \
  --num-nodes "1" \
  --logging=SYSTEM,WORKLOAD \
  --monitoring=SYSTEM \
  --enable-ip-alias \
  --network "projects/myproject/global/networks/myproject-network" \
  --subnetwork "projects/myproject/regions/myregion/subnetworks/myproject-subnet-myregion" \
  --no-enable-intra-node-visibility \
  --default-max-pods-per-node "110" \
  --enable-autoscaling \
  --min-nodes "0" \
  --max-nodes "3" \
  --enable-master-authorized-networks \
  --master-authorized-networks w.x.y.z/32 \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
  --no-enable-autoupgrade \
  --enable-autorepair \
  --max-surge-upgrade 1 \
  --max-unavailable-upgrade 0 \
  --workload-pool "myproject.svc.id.goog" \
  --enable-private-nodes \
  --enable-shielded-nodes \
  --shielded-secure-boot \
  --node-locations "myregion-a","myregion-b","myregion-c" \
  --master-ipv4-cidr=10.1.0.0/28 \
  --enable-binauthz
```
Verification Steps:
You can verify that the cluster was created through the Console: https://console.cloud.google.com/kubernetes/list/overview.
Use the following command to switch to the new GKE cluster that you just created:
```
gcloud container clusters get-credentials trifacta-cluster --region myregion --project myproject
```
Please complete the following configuration to specify a non-default node pool. In this example, the value is `photon-job-pool`:

```
gcloud container node-pools create photon-job-pool \
  --cluster trifacta-cluster \
  --enable-autorepair \
  --no-enable-autoupgrade \
  --image-type=COS_CONTAINERD \
  --machine-type=n1-standard-16 \
  --max-surge-upgrade 1 \
  --max-unavailable-upgrade=0 \
  --node-locations=myregion-a,myregion-b,myregion-c \
  --node-taints=jobType=photon:NoSchedule \
  --node-version=1.20.8-gke.900 \
  --num-nodes=1 \
  --shielded-integrity-monitoring \
  --shielded-secure-boot \
  --workload-metadata=GKE_METADATA \
  --enable-autoscaling \
  --max-nodes=10 \
  --min-nodes=1 \
  --region=myregion \
  --service-account=trifacta-service-account@myproject.iam.gserviceaccount.com
```
You can use the following command to get the list of available node pools for your cluster:
```
gcloud container node-pools list --cluster trifacta-cluster --region=myregion
```
Please complete the following configuration to specify a non-default namespace. In this example, the value is `photon-job-namespace`:

```
kubectl create namespace photon-job-namespace
```
Variable | Description
---|---
trifacta-job-runner | Service Account used to submit and manage jobs on the GKE cluster.
trifacta-pod-sa | Service Account assigned to the job pod running in the GKE cluster.
Please execute the following commands:
```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
  namespace: default
  name: trifacta-job-runner
EOF
```

```
cat <<EOF | kubectl apply -n photon-job-namespace -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: trifacta-job-runner-role
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "create", "delete", "watch"]
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["list", "get"]
EOF
```

```
cat <<EOF | kubectl apply -n photon-job-namespace -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: trifacta-job-runner-rb
subjects:
- kind: ServiceAccount
  name: trifacta-job-runner
  namespace: default
roleRef:
  kind: Role
  name: trifacta-job-runner-role
  apiGroup: rbac.authorization.k8s.io
EOF
```

```
cat <<EOF | kubectl apply -n photon-job-namespace -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-list-role
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
EOF
```

```
cat <<EOF | kubectl apply -n photon-job-namespace -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-list-rb
subjects:
- kind: ServiceAccount
  name: trifacta-job-runner
  namespace: default
roleRef:
  kind: ClusterRole
  name: node-list-role
  apiGroup: rbac.authorization.k8s.io
EOF
```

```
cat <<EOF | kubectl apply -n photon-job-namespace -f -
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
  name: trifacta-pod-sa
EOF
```
The following commands create the encryption keys for credentials:
```
openssl genrsa -out private_key.pem 2048
openssl pkcs8 -topk8 -inform PEM -outform DER -in private_key.pem -out private_key.der -nocrypt
openssl rsa -in private_key.pem -pubout -outform DER -out public_key.der
base64 -i public_key.der > public_key.der.base64
base64 -i private_key.der > private_key.der.base64

kubectl create secret generic trifacta-credential-encryption -n photon-job-namespace \
  --from-file=privateKey=private_key.der.base64
```
```
gcloud container node-pools create data-system-job-pool \
  --cluster=trifacta-cluster \
  --enable-autorepair \
  --no-enable-autoupgrade \
  --image-type=COS_CONTAINERD \
  --machine-type=n1-standard-16 \
  --max-surge-upgrade=1 \
  --max-unavailable-upgrade=0 \
  --node-locations=us-central1-a,us-central1-b,us-central1-c \
  --node-taints=jobType=dataSystem:NoSchedule \
  --node-version=1.22.7-gke.1300 \
  --num-nodes=1 \
  --shielded-integrity-monitoring \
  --shielded-secure-boot \
  --workload-metadata=GKE_METADATA \
  --enable-autoscaling \
  --max-nodes=10 \
  --min-nodes=1 \
  --region=us-central1 \
  --service-account=trifacta-service-account@myproject.iam.gserviceaccount.com
```
```
kubectl create namespace data-system-job-namespace
```
```
cat <<EOF | kubectl apply -n data-system-job-namespace -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: trifacta-job-runner-role
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "create", "delete", "watch"]
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["list", "get"]
EOF
```

```
cat <<EOF | kubectl apply -n data-system-job-namespace -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: trifacta-job-runner-rb
subjects:
- kind: ServiceAccount
  name: trifacta-job-runner
  namespace: default
roleRef:
  kind: Role
  name: trifacta-job-runner-role
  apiGroup: rbac.authorization.k8s.io
EOF
```

```
cat <<EOF | kubectl apply -n data-system-job-namespace -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-list-role
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
EOF
```

```
cat <<EOF | kubectl apply -n data-system-job-namespace -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-list-rb
subjects:
- kind: ServiceAccount
  name: trifacta-job-runner
  namespace: default
roleRef:
  kind: ClusterRole
  name: node-list-role
  apiGroup: rbac.authorization.k8s.io
EOF
```

```
cat <<EOF | kubectl apply -n data-system-job-namespace -f -
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
  name: trifacta-pod-sa
EOF
```
Create a secret to store the private key in the Connectivity/DataSystem job namespace.
```
kubectl create secret generic trifacta-credential-encryption -n data-system-job-namespace \
  --from-file=privateKey=private_key.der.base64
```
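As a quick check that both credential-encryption secrets exist in their respective namespaces:

```
kubectl get secret trifacta-credential-encryption -n photon-job-namespace
kubectl get secret trifacta-credential-encryption -n data-system-job-namespace
```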
After you have completed the above configuration, you must populate the following values in the Dataprep project settings based on the commands that you executed above.
Steps:
Kubernetes cluster tab:
Setting | Command or Value
---|---
Master URL | Command: returns a URL for the cluster master endpoint. See the note after this table.
OAuth token | Command: returns the access token for the `trifacta-job-runner` service account. See the note after this table.
Cluster CA certificate | Command: returns the cluster CA certificate. See the note after this table.
Service account name | Value: trifacta-job-runner
Public key (optional) | Insert the contents of `public_key.der.base64`. To acquire this value: `cat public_key.der.base64`
Private key secret name (optional) | Value: trifacta-credential-encryption
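The retrieval commands are not reproduced here; the sketch after the Kubernetes cluster tab table in the Basic configuration applies unchanged, since the `trifacta-job-runner` service account still resides in the `default` namespace.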
Photon tab:
Setting | Command or Value
---|---
Namespace | Value: photon-job-namespace (created above). To list the namespaces on the cluster: `kubectl get namespaces`
CPU, memory - request, limits | Adjust as needed.
Node selector, tolerations | Values: use a node selector that targets `photon-job-pool` and a toleration matching the `jobType=photon:NoSchedule` taint applied when the pool was created. See the sketch after this table.
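The exact selector and toleration values are not reproduced above. Assuming the pool created earlier in this guide, you can inspect the taint and identify the pool's nodes like this (a sketch; `cloud.google.com/gke-nodepool` is the standard GKE node-pool label):

```
# Show the taints configured on the custom node pool.
gcloud container node-pools describe photon-job-pool \
  --cluster trifacta-cluster --region myregion \
  --format="value(config.taints)"

# List the nodes that belong to the pool via the standard GKE node-pool label.
kubectl get nodes -l cloud.google.com/gke-nodepool=photon-job-pool
```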
After you have tested and saved your configuration, you should be able to run a job in your VPC. See "Testing" below.
You can use the following command to watch the Kubernetes cluster for job execution:

```
kubectl get pods -n photon-job-namespace -w
```
To check active pods:
```
kubectl get pods -n default -w
```
To get details on a specific pod:
```
kubectl describe pod <podId> -n <namespace>
```
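To follow the log output of a job pod (substitute the pod name, and the namespace in which the job runs):

```
kubectl logs -f <podId> -n photon-job-namespace
```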
Then, run a job through the application. If the job runs successfully, then the configuration has been properly applied. See Run Job Page.