# model_deployment_monitoring_jobs

Creates, updates, deletes, gets or lists a `model_deployment_monitoring_jobs` resource.
## Overview

| | |
|---|---|
| **Name** | `model_deployment_monitoring_jobs` |
| **Type** | Resource |
| **Id** | `google.aiplatform.model_deployment_monitoring_jobs` |
## Fields

Name | Datatype | Description |
---|---|---|
name | string | Output only. Resource name of a ModelDeploymentMonitoringJob. |
analysisInstanceSchemaUri | string | URI of the YAML schema file describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set with all fields in the predict instance formatted as string. |
bigqueryTables | array | Output only. The BigQuery tables created for the job in the customer project, which customers can use for their own queries and analysis. There can be at most four log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response |
createTime | string | Output only. Timestamp when this ModelDeploymentMonitoringJob was created. |
displayName | string | Required. The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
enableMonitoringPipelineLogs | boolean | If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Note that these logs incur costs, which are subject to Cloud Logging pricing. |
encryptionSpec | object | Represents a customer-managed encryption key spec that can be applied to a top-level resource. |
endpoint | string | Required. Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint} |
error | object | The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide. |
labels | object | The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
latestMonitoringPipelineMetadata | object | All metadata of most recent monitoring pipelines. |
logTtl | string | The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL, and we take the ceiling of TTL/86400 (one day). E.g., { second: 3600 } indicates a TTL of 1 day. |
loggingSamplingStrategy | object | Sampling Strategy for logging, can be for both training and prediction dataset. |
modelDeploymentMonitoringObjectiveConfigs | array | Required. The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately. |
modelDeploymentMonitoringScheduleConfig | object | The config for scheduling monitoring job. |
modelMonitoringAlertConfig | object | The alert config for model monitoring. |
nextScheduleTime | string | Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round. |
predictInstanceSchemaUri | string | URI of the YAML schema file describing the format of a single instance given to this Endpoint's prediction (and explanation). If not set, the predict schema is generated from collected predict requests. |
samplePredictInstance | any | Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests. |
satisfiesPzi | boolean | Output only. Reserved for future use. |
satisfiesPzs | boolean | Output only. Reserved for future use. |
scheduleState | string | Output only. Schedule state when the monitoring job is in Running state. |
state | string | Output only. The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'; once it is successfully created, the state becomes 'RUNNING'. Pausing the job sets the state to 'PAUSED'; resuming it returns the state to 'RUNNING'. |
statsAnomaliesBaseDirectory | object | The Google Cloud Storage location where the output is to be written to. |
updateTime | string | Output only. Timestamp when this ModelDeploymentMonitoringJob was updated most recently. |
## Methods

Name | Accessible by | Required Params | Description |
---|---|---|---|
get | SELECT | locationsId, modelDeploymentMonitoringJobsId, projectsId | Gets a ModelDeploymentMonitoringJob. |
list | SELECT | locationsId, projectsId | Lists ModelDeploymentMonitoringJobs in a Location. |
create | INSERT | locationsId, projectsId | Creates a ModelDeploymentMonitoringJob. It will run periodically on a configured interval. |
delete | DELETE | locationsId, modelDeploymentMonitoringJobsId, projectsId | Deletes a ModelDeploymentMonitoringJob. |
patch | UPDATE | locationsId, modelDeploymentMonitoringJobsId, projectsId | Updates a ModelDeploymentMonitoringJob. |
pause | EXEC | locationsId, modelDeploymentMonitoringJobsId, projectsId | Pauses a ModelDeploymentMonitoringJob. If the job is running, the server makes a best effort to cancel it. ModelDeploymentMonitoringJob.state is set to 'PAUSED'.
resume | EXEC | locationsId, modelDeploymentMonitoringJobsId, projectsId | Resumes a paused ModelDeploymentMonitoringJob. It will start to run from the next scheduled time. A deleted ModelDeploymentMonitoringJob can't be resumed.
search_model_deployment_monitoring_stats_anomalies | EXEC | locationsId, modelDeploymentMonitoringJobsId, projectsId | Searches Model Monitoring Statistics generated within a given time window. |
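The `EXEC` methods take the same identifying parameters as `get`. As a sketch, pausing a job using StackQL's `EXEC` syntax (parameter names as listed above; the exact invocation may vary by StackQL version) would look like:

```sql
EXEC google.aiplatform.model_deployment_monitoring_jobs.pause
@locationsId = '{{ locationsId }}',
@modelDeploymentMonitoringJobsId = '{{ modelDeploymentMonitoringJobsId }}',
@projectsId = '{{ projectsId }}';
```

A paused job can then be restarted in the same way with the `resume` method.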
## SELECT examples

Lists ModelDeploymentMonitoringJobs in a Location.

```sql
SELECT
name,
analysisInstanceSchemaUri,
bigqueryTables,
createTime,
displayName,
enableMonitoringPipelineLogs,
encryptionSpec,
endpoint,
error,
labels,
latestMonitoringPipelineMetadata,
logTtl,
loggingSamplingStrategy,
modelDeploymentMonitoringObjectiveConfigs,
modelDeploymentMonitoringScheduleConfig,
modelMonitoringAlertConfig,
nextScheduleTime,
predictInstanceSchemaUri,
samplePredictInstance,
satisfiesPzi,
satisfiesPzs,
scheduleState,
state,
statsAnomaliesBaseDirectory,
updateTime
FROM google.aiplatform.model_deployment_monitoring_jobs
WHERE locationsId = '{{ locationsId }}'
AND projectsId = '{{ projectsId }}';
```
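To retrieve a single job instead, add the `modelDeploymentMonitoringJobsId` predicate required by the `get` method; for example, a trimmed-down status query:

```sql
SELECT
name,
displayName,
state,
scheduleState,
nextScheduleTime
FROM google.aiplatform.model_deployment_monitoring_jobs
WHERE locationsId = '{{ locationsId }}'
AND modelDeploymentMonitoringJobsId = '{{ modelDeploymentMonitoringJobsId }}'
AND projectsId = '{{ projectsId }}';
```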
## INSERT example

Use the following StackQL query and manifest file to create a new `model_deployment_monitoring_jobs` resource.

**All Properties**

```sql
/*+ create */
INSERT INTO google.aiplatform.model_deployment_monitoring_jobs (
locationsId,
projectsId,
logTtl,
statsAnomaliesBaseDirectory,
enableMonitoringPipelineLogs,
labels,
modelDeploymentMonitoringScheduleConfig,
encryptionSpec,
predictInstanceSchemaUri,
analysisInstanceSchemaUri,
modelDeploymentMonitoringObjectiveConfigs,
samplePredictInstance,
loggingSamplingStrategy,
displayName,
endpoint,
modelMonitoringAlertConfig
)
SELECT
'{{ locationsId }}',
'{{ projectsId }}',
'{{ logTtl }}',
'{{ statsAnomaliesBaseDirectory }}',
{{ enableMonitoringPipelineLogs }},
'{{ labels }}',
'{{ modelDeploymentMonitoringScheduleConfig }}',
'{{ encryptionSpec }}',
'{{ predictInstanceSchemaUri }}',
'{{ analysisInstanceSchemaUri }}',
'{{ modelDeploymentMonitoringObjectiveConfigs }}',
'{{ samplePredictInstance }}',
'{{ loggingSamplingStrategy }}',
'{{ displayName }}',
'{{ endpoint }}',
'{{ modelMonitoringAlertConfig }}'
;
```
**Manifest**

```yaml
- name: your_resource_model_name
  props:
    - name: logTtl
      value: string
    - name: createTime
      value: string
    - name: statsAnomaliesBaseDirectory
      value:
        - name: outputUriPrefix
          value: string
    - name: enableMonitoringPipelineLogs
      value: boolean
    - name: latestMonitoringPipelineMetadata
      value:
        - name: runTime
          value: string
        - name: status
          value:
            - name: code
              value: integer
            - name: message
              value: string
            - name: details
              value:
                - object
    - name: labels
      value: object
    - name: modelDeploymentMonitoringScheduleConfig
      value:
        - name: monitorInterval
          value: string
        - name: monitorWindow
          value: string
    - name: encryptionSpec
      value:
        - name: kmsKeyName
          value: string
    - name: predictInstanceSchemaUri
      value: string
    - name: scheduleState
      value: string
    - name: satisfiesPzi
      value: boolean
    - name: analysisInstanceSchemaUri
      value: string
    - name: modelDeploymentMonitoringObjectiveConfigs
      value:
        - - name: objectiveConfig
            value:
              - name: trainingDataset
                value:
                  - name: dataset
                    value: string
                  - name: bigquerySource
                    value:
                      - name: inputUri
                        value: string
                  - name: loggingSamplingStrategy
                    value:
                      - name: randomSampleConfig
                        value:
                          - name: sampleRate
                            value: number
                  - name: gcsSource
                    value:
                      - name: uris
                        value:
                          - string
                  - name: targetField
                    value: string
                  - name: dataFormat
                    value: string
              - name: explanationConfig
                value:
                  - name: explanationBaseline
                    value:
                      - name: bigquery
                        value:
                          - name: outputUri
                            value: string
                      - name: predictionFormat
                        value: string
                  - name: enableFeatureAttributes
                    value: boolean
              - name: trainingPredictionSkewDetectionConfig
                value:
                  - name: attributionScoreSkewThresholds
                    value: object
                  - name: skewThresholds
                    value: object
                  - name: defaultSkewThreshold
                    value:
                      - name: value
                        value: number
              - name: predictionDriftDetectionConfig
                value:
                  - name: attributionScoreDriftThresholds
                    value: object
                  - name: driftThresholds
                    value: object
          - name: deployedModelId
            value: string
    - name: samplePredictInstance
      value: any
    - name: state
      value: string
    - name: displayName
      value: string
    - name: name
      value: string
    - name: satisfiesPzs
      value: boolean
    - name: endpoint
      value: string
    - name: nextScheduleTime
      value: string
    - name: modelMonitoringAlertConfig
      value:
        - name: enableLogging
          value: boolean
        - name: notificationChannels
          value:
            - string
        - name: emailAlertConfig
          value:
            - name: userEmails
              value:
                - string
    - name: bigqueryTables
      value:
        - - name: logSource
            value: string
          - name: requestResponseLoggingSchemaVersion
            value: string
          - name: logType
            value: string
          - name: bigqueryTablePath
            value: string
    - name: updateTime
      value: string
```
## UPDATE example

Updates a `model_deployment_monitoring_jobs` resource.

```sql
/*+ update */
UPDATE google.aiplatform.model_deployment_monitoring_jobs
SET
logTtl = '{{ logTtl }}',
statsAnomaliesBaseDirectory = '{{ statsAnomaliesBaseDirectory }}',
enableMonitoringPipelineLogs = true|false,
labels = '{{ labels }}',
modelDeploymentMonitoringScheduleConfig = '{{ modelDeploymentMonitoringScheduleConfig }}',
encryptionSpec = '{{ encryptionSpec }}',
predictInstanceSchemaUri = '{{ predictInstanceSchemaUri }}',
analysisInstanceSchemaUri = '{{ analysisInstanceSchemaUri }}',
modelDeploymentMonitoringObjectiveConfigs = '{{ modelDeploymentMonitoringObjectiveConfigs }}',
samplePredictInstance = '{{ samplePredictInstance }}',
loggingSamplingStrategy = '{{ loggingSamplingStrategy }}',
displayName = '{{ displayName }}',
endpoint = '{{ endpoint }}',
modelMonitoringAlertConfig = '{{ modelMonitoringAlertConfig }}'
WHERE
locationsId = '{{ locationsId }}'
AND modelDeploymentMonitoringJobsId = '{{ modelDeploymentMonitoringJobsId }}'
AND projectsId = '{{ projectsId }}';
```
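Because the underlying `patch` method is typically a partial update, you don't have to repeat every mutable field; as a sketch, updating only the display name (field names as in the Fields table above):

```sql
/*+ update */
UPDATE google.aiplatform.model_deployment_monitoring_jobs
SET displayName = '{{ displayName }}'
WHERE locationsId = '{{ locationsId }}'
AND modelDeploymentMonitoringJobsId = '{{ modelDeploymentMonitoringJobsId }}'
AND projectsId = '{{ projectsId }}';
```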
## DELETE example

Deletes the specified `model_deployment_monitoring_jobs` resource.

```sql
/*+ delete */
DELETE FROM google.aiplatform.model_deployment_monitoring_jobs
WHERE locationsId = '{{ locationsId }}'
AND modelDeploymentMonitoringJobsId = '{{ modelDeploymentMonitoringJobsId }}'
AND projectsId = '{{ projectsId }}';
```