
# evaluation_jobs

## Overview

| Property | Value |
|----------|-------|
| Name | `evaluation_jobs` |
| Type | Resource |
| Id | `google.datalabeling.evaluation_jobs` |
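
The `Id` above is the fully qualified name used to address this resource from StackQL. As a minimal sketch, assuming an authenticated StackQL shell with the `google` provider installed, you can inspect the resource's columns and methods before writing queries:

```sql
-- Inspect the resource's schema and available methods
DESCRIBE google.datalabeling.evaluation_jobs;
SHOW METHODS IN google.datalabeling.evaluation_jobs;
```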

## Fields

| Name | Datatype | Description |
|------|----------|-------------|
| `name` | `string` | Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: `projects/{project_id}/evaluationJobs/{evaluation_job_id}` |
| `description` | `string` | Required. Description of the job. The description can be up to 25,000 characters long. |
| `state` | `string` | Output only. Describes the current state of the job. |
| `modelVersion` | `string` | Required. The AI Platform Prediction model version to be evaluated. Prediction input and output are sampled from this model version. When creating an evaluation job, specify the model version in the following format: `projects/{project_id}/models/{model_name}/versions/{version_name}`. There can only be one evaluation job per model version. |
| `labelMissingGroundTruth` | `boolean` | Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to `true`. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to `false`. |
| `schedule` | `string` | Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC; only the interval from this schedule is used, not the specific time of day. |
| `attempts` | `array` | Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array. |
| `createTime` | `string` | Output only. Timestamp of when this evaluation job was created. |
| `evaluationJobConfig` | `object` | Configures specific details of how a continuous evaluation job works. Provide this configuration when you create an EvaluationJob. |
| `annotationSpecSet` | `string` | Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: `projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}` |
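
Each field above is exposed as a column when you query the resource. A minimal sketch of a list query, assuming a hypothetical project id `my-project` and an authenticated `google` provider (the `projectsId` parameter comes from the `projects_evaluation_jobs_list` method below):

```sql
-- List evaluation jobs in a project, projecting a few of the fields above
SELECT name,
       state,
       schedule,
       createTime
FROM google.datalabeling.evaluation_jobs
WHERE projectsId = 'my-project';
```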

## Methods

| Name | Accessible by | Required Params | Description |
|------|---------------|-----------------|-------------|
| `projects_evaluation_jobs_get` | `SELECT` | `evaluationJobsId`, `projectsId` | Gets an evaluation job by resource name. |
| `projects_evaluation_jobs_list` | `SELECT` | `projectsId` | Lists all evaluation jobs within a project with possible filters. Pagination is supported. |
| `projects_evaluation_jobs_create` | `INSERT` | `projectsId` | Creates an evaluation job. |
| `projects_evaluation_jobs_delete` | `DELETE` | `evaluationJobsId`, `projectsId` | Stops and deletes an evaluation job. |
| `_projects_evaluation_jobs_list` | `EXEC` | `projectsId` | Lists all evaluation jobs within a project with possible filters. Pagination is supported. |
| `projects_evaluation_jobs_patch` | `EXEC` | `evaluationJobsId`, `projectsId` | Updates an evaluation job. You can only update certain fields of the job's EvaluationJobConfig: `humanAnnotationConfig.instruction`, `exampleCount`, and `exampleSamplePercentage`. If you want to change any other aspect of the evaluation job, you must delete the job and create a new one. |
| `projects_evaluation_jobs_pause` | `EXEC` | `evaluationJobsId`, `projectsId` | Pauses an evaluation job. Pausing an evaluation job that is already in a `PAUSED` state is a no-op. |
| `projects_evaluation_jobs_resume` | `EXEC` | `evaluationJobsId`, `projectsId` | Resumes a paused evaluation job. A deleted evaluation job can't be resumed. Resuming a running or scheduled evaluation job is a no-op. |
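
Taken together, the methods map to a full job lifecycle: `INSERT` creates a job, the `EXEC` methods pause and resume it, and `DELETE` stops and removes it. The sketch below is illustrative only: the project, model version, job id, and config values are hypothetical placeholders, and the request-body columns (prefixed `data__`, per StackQL convention for `INSERT`) assume the field names from the table above.

```sql
-- Create an evaluation job (request-body fields use the data__ prefix)
INSERT INTO google.datalabeling.evaluation_jobs (
  projectsId,
  data__description,
  data__modelVersion,
  data__schedule,
  data__labelMissingGroundTruth,
  data__annotationSpecSet,
  data__evaluationJobConfig
)
SELECT
  'my-project',                                          -- hypothetical project
  'Nightly accuracy check for my_model v1',
  'projects/my-project/models/my_model/versions/v1',     -- hypothetical model version
  'every 1 day',                                         -- English-like schedule; interval must be >= 1 day
  false,                                                 -- ground truth supplied via the job's BigQuery table
  'projects/my-project/annotationSpecSets/my_spec_set',  -- hypothetical annotation spec set
  '{"exampleSamplePercentage": 0.1}'                     -- hypothetical config
;

-- Pause, then resume, the job
EXEC google.datalabeling.evaluation_jobs.projects_evaluation_jobs_pause
  @projectsId = 'my-project', @evaluationJobsId = 'my-job-id';

EXEC google.datalabeling.evaluation_jobs.projects_evaluation_jobs_resume
  @projectsId = 'my-project', @evaluationJobsId = 'my-job-id';

-- Stop and delete the job
DELETE FROM google.datalabeling.evaluation_jobs
WHERE projectsId = 'my-project' AND evaluationJobsId = 'my-job-id';
```

Note that `projects_evaluation_jobs_patch` only accepts the three EvaluationJobConfig fields listed in the table above; to change anything else, delete the job and recreate it.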