
jobs

Overview

Name: jobs
Type: Resource
Id: google.dataproc.jobs
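
The Id above is the fully qualified name used to address this resource from a query interface. As a minimal sketch (assuming a StackQL-style SQL client, which this page does not name), the resource can be inspected by that Id:

```sql
-- Sketch only: inspect the resource using its fully qualified Id.
-- DESCRIBE / SHOW METHODS syntax is assumed from a StackQL-style client.
DESCRIBE google.dataproc.jobs;
SHOW METHODS IN google.dataproc.jobs;
```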

Fields

| Name | Datatype | Description |
|------|----------|-------------|
| done | boolean | Output only. Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and the status.state field indicates whether it was successful, failed, or cancelled. |
| hiveJob | object | A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. |
| pigJob | object | A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. |
| jobUuid | string | Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to the user-settable reference.job_id, which may be reused over time. |
| placement | object | Dataproc job config. |
| yarnApplications | array | Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release. |
| reference | object | Encapsulates the full scoping used to reference a job. |
| status | object | Dataproc job status. |
| sparkRJob | object | A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. |
| driverControlFilesUri | string | Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri. |
| driverOutputResourceUri | string | Output only. A URI pointing to the location of the stdout of the job's driver program. |
| statusHistory | array | Output only. The previous job status. |
| pysparkJob | object | A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. |
| hadoopJob | object | A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). |
| flinkJob | object | A Dataproc job for running Apache Flink (https://flink.apache.org/) applications on YARN. |
| prestoJob | object | A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. |
| driverSchedulingConfig | object | Driver scheduling configuration. |
| sparkJob | object | A Dataproc job for running Apache Spark (https://spark.apache.org/) applications on YARN. |
| sparkSqlJob | object | A Dataproc job for running Apache Spark SQL (https://spark.apache.org/sql/) queries. |
| labels | object | Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters and must conform to RFC 1035. No more than 32 labels can be associated with a job. |
| scheduling | object | Job scheduling options. |
| trinoJob | object | A Dataproc job for running Trino (https://trino.io/) queries. IMPORTANT: The Dataproc Trino Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/trino) must be enabled when the cluster is created to submit a Trino job to the cluster. |
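
As a hedged illustration of how these fields surface in query results (again assuming a StackQL-style SQL client; the project and region literals are placeholders), a projection over a few of the scalar fields might look like this:

```sql
-- Sketch: project a few scalar fields from the table above.
-- Object-typed fields (status, placement, reference, ...) are assumed to
-- come back as nested/JSON values rather than flat columns.
SELECT jobUuid,
       done,
       driverOutputResourceUri
FROM google.dataproc.jobs
WHERE projectId = 'my-project'    -- placeholder project
  AND region = 'us-central1';     -- placeholder region
```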

Methods

| Name | Accessible by | Required Params | Description |
|------|---------------|-----------------|-------------|
| projects_regions_jobs_get | SELECT | jobId, projectId, region | Gets the resource representation for a job in a project. |
| projects_regions_jobs_list | SELECT | projectId, region | Lists regions/{region}/jobs in a project. |
| projects_regions_jobs_delete | DELETE | jobId, projectId, region | Deletes the job from the project. If the job is active, the delete fails, and the response returns FAILED_PRECONDITION. |
| _projects_regions_jobs_list | EXEC | projectId, region | Lists regions/{region}/jobs in a project. |
| projects_regions_jobs_cancel | EXEC | jobId, projectId, region | Starts a job cancellation request. To access the job resource after cancellation, call regions/{region}/jobs.list (https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/list) or regions/{region}/jobs.get (https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/get). |
| projects_regions_jobs_patch | EXEC | jobId, projectId, region | Updates a job in a project. |
| projects_regions_jobs_submit | EXEC | projectId, region | Submits a job to a cluster. |
| projects_regions_jobs_submit_as_operation | EXEC | projectId, region | Submits a job to a cluster. |
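
The sketches below pair one example with each access type from the table above. They assume a StackQL-style SQL client and use placeholder identifiers throughout; the required params listed per method become WHERE-clause filters or @-style parameters.

```sql
-- SELECT: list jobs in a project/region (projects_regions_jobs_list).
SELECT jobUuid, done
FROM google.dataproc.jobs
WHERE projectId = 'my-project' AND region = 'us-central1';

-- DELETE: remove a job from the project (projects_regions_jobs_delete).
-- Per the table above, this fails with FAILED_PRECONDITION if the job is active.
DELETE FROM google.dataproc.jobs
WHERE jobId = 'my-job-id' AND projectId = 'my-project' AND region = 'us-central1';

-- EXEC: start a cancellation request (projects_regions_jobs_cancel).
-- The @param = 'value' form is assumed from a StackQL-style client.
EXEC google.dataproc.jobs.projects_regions_jobs_cancel
  @jobId = 'my-job-id', @projectId = 'my-project', @region = 'us-central1';
```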