# pipeline_jobs

Creates, updates, deletes, gets or lists a `pipeline_jobs` resource.
## Overview

| | |
|---|---|
| Name | `pipeline_jobs` |
| Type | Resource |
| Id | `google.aiplatform.pipeline_jobs` |
## Fields

| Name | Datatype | Description |
|---|---|---|
| `name` | `string` | Output only. The resource name of the PipelineJob. |
| `createTime` | `string` | Output only. Pipeline creation time. |
| `displayName` | `string` | The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
| `encryptionSpec` | `object` | Represents a customer-managed encryption key spec that can be applied to a top-level resource. |
| `endTime` | `string` | Output only. Pipeline end time. |
| `error` | `object` | The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide. |
| `jobDetail` | `object` | The runtime detail of PipelineJob. |
| `labels` | `object` | The labels with user-defined metadata to organize PipelineJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), and can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. Note that some label keys are reserved for Vertex AI Pipelines: any user-set value for `vertex-ai-pipelines-run-billing-id` will be overridden. |
| `network` | `string` | The full name of the Compute Engine network to which the Pipeline Job's workload should be peered. For example, `projects/12345/global/networks/myVPC`. Format is of the form `projects/{project}/global/networks/{network}`, where `{project}` is a project number, as in `12345`, and `{network}` is a network name. Private services access must already be configured for the network. If set, the pipeline job applies the network configuration to the Google Cloud resources it launches, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network. |
| `pipelineSpec` | `object` | The spec of the pipeline. |
| `preflightValidations` | `boolean` | Optional. Whether to do component-level validations before job creation. |
| `reservedIpRanges` | `array` | A list of names for the reserved IP ranges under the VPC network that can be used for this Pipeline Job's workload. If set, the Pipeline Job's workload is deployed within the provided IP ranges. Otherwise, the job is deployed to any IP range under the provided VPC network. Example: `['vertex-ai-ip-range']`. |
| `runtimeConfig` | `object` | The runtime config of a PipelineJob. |
| `scheduleName` | `string` | Output only. The schedule resource name. Only returned if the Pipeline is created by the Schedule API. |
| `serviceAccount` | `string` | The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project is used. See https://cloud.google.com/compute/docs/access/service-accounts#default_service_account. Users starting the pipeline must have the `iam.serviceAccounts.actAs` permission on this service account. |
| `startTime` | `string` | Output only. Pipeline start time. |
| `state` | `string` | Output only. The detailed state of the job. |
| `templateMetadata` | `object` | Pipeline template metadata if PipelineJob.template_uri is from a supported template registry. Currently, the only supported registry is Artifact Registry. |
| `templateUri` | `string` | A template URI from which PipelineJob.pipeline_spec will be downloaded if pipeline_spec is empty. Currently, only URIs from the Vertex Template Registry & Gallery are supported. See https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template. |
| `updateTime` | `string` | Output only. Timestamp when this PipelineJob was most recently updated. |
## Methods

| Name | Accessible by | Required Params | Description |
|---|---|---|---|
| `get` | `SELECT` | `locationsId, pipelineJobsId, projectsId` | Gets a PipelineJob. |
| `list` | `SELECT` | `locationsId, projectsId` | Lists PipelineJobs in a Location. |
| `create` | `INSERT` | `locationsId, projectsId` | Creates a PipelineJob. A PipelineJob runs immediately when created. |
| `batch_delete` | `DELETE` | `locationsId, projectsId` | Batch deletes PipelineJobs. The operation is atomic: if it fails, none of the PipelineJobs are deleted; if it succeeds, all of them are deleted. |
| `delete` | `DELETE` | `locationsId, pipelineJobsId, projectsId` | Deletes a PipelineJob. |
| `batch_cancel` | `EXEC` | `locationsId, projectsId` | Batch cancels PipelineJobs. The server first checks that all the jobs are in non-terminal states, skipping jobs that are already terminated. If the operation fails, none of the pipeline jobs are cancelled. The server polls the states of all the pipeline jobs periodically to check the cancellation status. This operation returns a long-running operation (LRO). |
| `cancel` | `EXEC` | `locationsId, pipelineJobsId, projectsId` | Cancels a PipelineJob. Starts asynchronous cancellation on the PipelineJob; the server makes a best effort to cancel the pipeline, but success is not guaranteed. Clients can use PipelineService.GetPipelineJob or other methods to check whether the cancellation succeeded or whether the pipeline completed despite cancellation. On successful cancellation, the PipelineJob is not deleted; instead it becomes a pipeline with a PipelineJob.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`, and PipelineJob.state set to `CANCELLED`. |
## SELECT examples

Lists PipelineJobs in a Location.
```sql
SELECT
name,
createTime,
displayName,
encryptionSpec,
endTime,
error,
jobDetail,
labels,
network,
pipelineSpec,
preflightValidations,
reservedIpRanges,
runtimeConfig,
scheduleName,
serviceAccount,
startTime,
state,
templateMetadata,
templateUri,
updateTime
FROM google.aiplatform.pipeline_jobs
WHERE locationsId = '{{ locationsId }}'
AND projectsId = '{{ projectsId }}';
```
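A single PipelineJob can also be fetched with the `get` method by additionally constraining `pipelineJobsId`; a minimal sketch (the field list is trimmed here for brevity, but any of the fields above may be selected):

```sql
SELECT
name,
displayName,
state,
createTime
FROM google.aiplatform.pipeline_jobs
WHERE locationsId = '{{ locationsId }}'
AND projectsId = '{{ projectsId }}'
AND pipelineJobsId = '{{ pipelineJobsId }}';
```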
## INSERT example

Use the following StackQL query and manifest file to create a new `pipeline_jobs` resource.
**All Properties**
```sql
/*+ create */
INSERT INTO google.aiplatform.pipeline_jobs (
locationsId,
projectsId,
pipelineSpec,
displayName,
network,
preflightValidations,
labels,
templateUri,
serviceAccount,
reservedIpRanges,
encryptionSpec,
runtimeConfig
)
SELECT
'{{ locationsId }}',
'{{ projectsId }}',
'{{ pipelineSpec }}',
'{{ displayName }}',
'{{ network }}',
{{ preflightValidations }},
'{{ labels }}',
'{{ templateUri }}',
'{{ serviceAccount }}',
'{{ reservedIpRanges }}',
'{{ encryptionSpec }}',
'{{ runtimeConfig }}'
;
```
**Manifest**

```yaml
- name: your_resource_model_name
  props:
    - name: pipelineSpec
      value: object
    - name: displayName
      value: string
    - name: templateMetadata
      value:
        - name: version
          value: string
    - name: network
      value: string
    - name: preflightValidations
      value: boolean
    - name: startTime
      value: string
    - name: labels
      value: object
    - name: createTime
      value: string
    - name: updateTime
      value: string
    - name: templateUri
      value: string
    - name: scheduleName
      value: string
    - name: name
      value: string
    - name: error
      value:
        - name: code
          value: integer
        - name: message
          value: string
        - name: details
          value:
            - object
    - name: endTime
      value: string
    - name: state
      value: string
    - name: jobDetail
      value:
        - name: pipelineRunContext
          value:
            - name: parentContexts
              value:
                - string
            - name: schemaVersion
              value: string
            - name: etag
              value: string
            - name: schemaTitle
              value: string
            - name: description
              value: string
            - name: updateTime
              value: string
            - name: name
              value: string
            - name: labels
              value: object
            - name: displayName
              value: string
            - name: metadata
              value: object
            - name: createTime
              value: string
        - name: taskDetails
          value:
            - - name: executorDetail
                value:
                  - name: containerDetail
                    value:
                      - name: failedPreCachingCheckJobs
                        value:
                          - string
                      - name: mainJob
                        value: string
                      - name: preCachingCheckJob
                        value: string
                      - name: failedMainJobs
                        value:
                          - string
                  - name: customJobDetail
                    value:
                      - name: failedJobs
                        value:
                          - string
                      - name: job
                        value: string
              - name: inputs
                value: object
              - name: execution
                value:
                  - name: schemaVersion
                    value: string
                  - name: metadata
                    value: object
                  - name: createTime
                    value: string
                  - name: labels
                    value: object
                  - name: name
                    value: string
                  - name: updateTime
                    value: string
                  - name: displayName
                    value: string
                  - name: description
                    value: string
                  - name: state
                    value: string
                  - name: schemaTitle
                    value: string
                  - name: etag
                    value: string
              - name: pipelineTaskStatus
                value:
                  - - name: updateTime
                      value: string
                    - name: state
                      value: string
              - name: taskName
                value: string
              - name: createTime
                value: string
              - name: outputs
                value: object
              - name: endTime
                value: string
              - name: parentTaskId
                value: string
              - name: state
                value: string
              - name: startTime
                value: string
              - name: taskId
                value: string
    - name: serviceAccount
      value: string
    - name: reservedIpRanges
      value:
        - string
    - name: encryptionSpec
      value:
        - name: kmsKeyName
          value: string
    - name: runtimeConfig
      value:
        - name: failurePolicy
          value: string
        - name: inputArtifacts
          value: object
        - name: parameters
          value: object
        - name: parameterValues
          value: object
        - name: gcsOutputDirectory
          value: string
```
## DELETE example

Deletes the specified `pipeline_jobs` resource.
```sql
/*+ delete */
DELETE FROM google.aiplatform.pipeline_jobs
WHERE locationsId = '{{ locationsId }}'
AND projectsId = '{{ projectsId }}';
```
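The `cancel` and `batch_cancel` methods are exposed as `EXEC` operations rather than SQL verbs. A sketch of cancelling a single job, assuming StackQL's `EXEC ... @param = value` syntax:

```sql
EXEC google.aiplatform.pipeline_jobs.cancel
@locationsId = '{{ locationsId }}',
@pipelineJobsId = '{{ pipelineJobsId }}',
@projectsId = '{{ projectsId }}';
```

Because cancellation is asynchronous, a follow-up `SELECT` on the job (checking the `state` field for `CANCELLED`) is needed to confirm the outcome.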