@datafire/google_vision

Client library for Cloud Vision API

Installation and Usage

npm install --save @datafire/google_vision
let google_vision = require('@datafire/google_vision').create({
  access_token: "",
  refresh_token: "",
  client_id: "",
  client_secret: "",
  redirect_uri: ""
});

// For example, run image detection on a batch of images
// (any action documented below can be chained the same way):
google_vision.vision.images.annotate({}, context).then(data => {
  console.log(data);
});

Description

Integrates Google Vision features, including image labeling, face, logo, and landmark detection, optical character recognition (OCR), and detection of explicit content, into applications.

Actions

oauthCallback

Exchange the code passed to your redirect URI for an access_token

google_vision.oauthCallback({
  "code": ""
}, context)

Input

  • input object
    • code required string

Output

  • output object
    • access_token string
    • refresh_token string
    • token_type string
    • scope string
    • expiration string

oauthRefresh

Exchange a refresh_token for an access_token

google_vision.oauthRefresh(null, context)

Input

This action has no parameters

Output

  • output object
    • access_token string
    • refresh_token string
    • token_type string
    • scope string
    • expiration string
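
Putting the two actions together: a minimal sketch of the token flow, assuming google_vision and context are set up as in Installation and Usage above, and that your redirect URI received a code query parameter.

// 1) Exchange the authorization code for tokens:
google_vision.oauthCallback({
  "code": "" // the code passed to your redirect_uri
}, context).then(tokens => {
  // tokens.access_token, tokens.refresh_token, etc. (see Output above)
  console.log(tokens.access_token);
});

// 2) Later, when the access_token expires, trade the refresh_token
//    (passed to .create()) for a fresh one:
google_vision.oauthRefresh(null, context).then(tokens => {
  console.log(tokens.expiration);
});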

vision.files.annotate

Service that performs image detection and annotation for a batch of files. Currently, only "application/pdf", "image/tiff" and "image/gif" are supported. The service extracts at most 5 frames (GIF) or pages (PDF or TIFF) from each file (customers can specify which 5 in AnnotateFileRequest.pages) and performs detection and annotation on each extracted image.

google_vision.vision.files.annotate({}, context)

Input

  • input object
    • body GoogleCloudVisionV1p2beta1BatchAnnotateFilesRequest
    • $.xgafv string (values: 1, 2): V1 error format.
    • access_token string: OAuth access token.
    • alt string (values: json, media, proto): Data format for response.
    • callback string: JSONP
    • fields string: Selector specifying which fields to include in a partial response.
    • key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
    • oauth_token string: OAuth 2.0 token for the current user.
    • prettyPrint boolean: Returns response with indentations and line breaks.
    • quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
    • upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
    • uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
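
The body follows GoogleCloudVisionV1p2beta1BatchAnnotateFilesRequest (see Definitions). A minimal sketch, assuming the standard Cloud Vision requests wrapper and feature types (DOCUMENT_TEXT_DETECTION here comes from the Vision API, not this README), annotating two pages of an inline PDF:

google_vision.vision.files.annotate({
  body: {
    requests: [{
      inputConfig: {
        content: "", // base64-encoded PDF bytes; inline content works for
                     // BatchAnnotateFiles but not AsyncBatchAnnotateFiles
        mimeType: "application/pdf"
      },
      features: [{ type: "DOCUMENT_TEXT_DETECTION" }],
      pages: [1, 2] // at most 5 pages; negative values count from the end
    }]
  }
}, context).then(data => console.log(data));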

Output

vision.files.asyncBatchAnnotate

Run asynchronous image detection and annotation for a list of generic files, such as PDF files, which may contain multiple pages and multiple images per page. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateFilesResponse (results).

google_vision.vision.files.asyncBatchAnnotate({}, context)

Input

  • input object
    • body GoogleCloudVisionV1p2beta1AsyncBatchAnnotateFilesRequest
    • $.xgafv string (values: 1, 2): V1 error format.
    • access_token string: OAuth access token.
    • alt string (values: json, media, proto): Data format for response.
    • callback string: JSONP
    • fields string: Selector specifying which fields to include in a partial response.
    • key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
    • oauth_token string: OAuth 2.0 token for the current user.
    • prettyPrint boolean: Returns response with indentations and line breaks.
    • quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
    • upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
    • uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
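
A minimal sketch of the GoogleCloudVisionV1p2beta1AsyncBatchAnnotateFilesRequest body, assuming the standard requests wrapper and hypothetical GCS paths, reading a PDF from GCS and writing sharded JSON results back (see InputConfig, OutputConfig and GcsDestination under Definitions):

google_vision.vision.files.asyncBatchAnnotate({
  body: {
    requests: [{
      inputConfig: {
        gcsSource: { uri: "gs://bucket-name/input.pdf" }, // hypothetical object
        mimeType: "application/pdf"
      },
      features: [{ type: "DOCUMENT_TEXT_DETECTION" }],
      outputConfig: {
        gcsDestination: { uri: "gs://bucket-name/results/" }, // hypothetical prefix
        batchSize: 20 // response protos per output JSON file, range [1, 100]
      }
    }]
  }
}, context).then(operation => {
  // Poll the returned google.longrunning operation for progress and results.
  console.log(operation);
});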

Output

vision.images.annotate

Run image detection and annotation for a batch of images.

google_vision.vision.images.annotate({}, context)

Input

  • input object
    • body GoogleCloudVisionV1p2beta1BatchAnnotateImagesRequest
    • $.xgafv string (values: 1, 2): V1 error format.
    • access_token string: OAuth access token.
    • alt string (values: json, media, proto): Data format for response.
    • callback string: JSONP
    • fields string: Selector specifying which fields to include in a partial response.
    • key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
    • oauth_token string: OAuth 2.0 token for the current user.
    • prettyPrint boolean: Returns response with indentations and line breaks.
    • quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
    • upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
    • uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
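
A minimal sketch of the GoogleCloudVisionV1p2beta1BatchAnnotateImagesRequest body, assuming the standard requests wrapper and Vision feature types (LABEL_DETECTION is from the Vision API, not this README), labeling a single image by URI:

google_vision.vision.images.annotate({
  body: {
    requests: [{
      image: { source: { imageUri: "https://example.com/photo.jpg" } }, // hypothetical URL
      features: [{ type: "LABEL_DETECTION", maxResults: 5 }]
    }]
  }
}, context).then(data => {
  // Each element of data.responses corresponds to one request above.
  console.log(data);
});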

Output

vision.images.asyncBatchAnnotate

Run asynchronous image detection and annotation for a list of images. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateImagesResponse (results). This service writes image annotation outputs to JSON files in the customer's GCS bucket, each JSON file containing a BatchAnnotateImagesResponse proto.

google_vision.vision.images.asyncBatchAnnotate({}, context)

Input

  • input object
    • body GoogleCloudVisionV1p2beta1AsyncBatchAnnotateImagesRequest
    • $.xgafv string (values: 1, 2): V1 error format.
    • access_token string: OAuth access token.
    • alt string (values: json, media, proto): Data format for response.
    • callback string: JSONP
    • fields string: Selector specifying which fields to include in a partial response.
    • key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
    • oauth_token string: OAuth 2.0 token for the current user.
    • prettyPrint boolean: Returns response with indentations and line breaks.
    • quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
    • upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
    • uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
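
A minimal sketch, assuming GoogleCloudVisionV1p2beta1AsyncBatchAnnotateImagesRequest pairs the image requests with a top-level outputConfig (the GCS path and image URL are hypothetical placeholders):

google_vision.vision.images.asyncBatchAnnotate({
  body: {
    requests: [{
      image: { source: { imageUri: "https://example.com/photo.jpg" } },
      features: [{ type: "LABEL_DETECTION" }]
    }],
    outputConfig: {
      gcsDestination: { uri: "gs://bucket-name/results/" },
      batchSize: 20
    }
  }
}, context).then(operation => console.log(operation));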

Output

vision.projects.locations.files.annotate

Service that performs image detection and annotation for a batch of files. Currently, only "application/pdf", "image/tiff" and "image/gif" are supported. The service extracts at most 5 frames (GIF) or pages (PDF or TIFF) from each file (customers can specify which 5 in AnnotateFileRequest.pages) and performs detection and annotation on each extracted image.

google_vision.vision.projects.locations.files.annotate({
  "parent": ""
}, context)

Input

  • input object
    • parent required string: Optional. Target project and location to make a call. Format: projects/{project-id}/locations/{location-id}. If no parent is specified, a region will be chosen automatically. Supported location-ids: us (USA only), asia (East Asia areas such as Japan and Taiwan), eu (the European Union). Example: projects/project-A/locations/eu.
    • body GoogleCloudVisionV1p2beta1BatchAnnotateFilesRequest
    • $.xgafv string (values: 1, 2): V1 error format.
    • access_token string: OAuth access token.
    • alt string (values: json, media, proto): Data format for response.
    • callback string: JSONP
    • fields string: Selector specifying which fields to include in a partial response.
    • key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
    • oauth_token string: OAuth 2.0 token for the current user.
    • prettyPrint boolean: Returns response with indentations and line breaks.
    • quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
    • upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
    • uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
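
The projects.locations.* variants take the same body as their global counterparts; the required parent just pins the processing region. A sketch using the example parent from above:

google_vision.vision.projects.locations.files.annotate({
  "parent": "projects/project-A/locations/eu", // projects/{project-id}/locations/{location-id}
  "body": { requests: [] } // fill in requests as in vision.files.annotate above
}, context).then(data => console.log(data));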

Output

vision.projects.locations.files.asyncBatchAnnotate

Run asynchronous image detection and annotation for a list of generic files, such as PDF files, which may contain multiple pages and multiple images per page. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateFilesResponse (results).

google_vision.vision.projects.locations.files.asyncBatchAnnotate({
  "parent": ""
}, context)

Input

  • input object
    • parent required string: Optional. Target project and location to make a call. Format: projects/{project-id}/locations/{location-id}. If no parent is specified, a region will be chosen automatically. Supported location-ids: us (USA only), asia (East Asia areas such as Japan and Taiwan), eu (the European Union). Example: projects/project-A/locations/eu.
    • body GoogleCloudVisionV1p2beta1AsyncBatchAnnotateFilesRequest
    • $.xgafv string (values: 1, 2): V1 error format.
    • access_token string: OAuth access token.
    • alt string (values: json, media, proto): Data format for response.
    • callback string: JSONP
    • fields string: Selector specifying which fields to include in a partial response.
    • key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
    • oauth_token string: OAuth 2.0 token for the current user.
    • prettyPrint boolean: Returns response with indentations and line breaks.
    • quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
    • upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
    • uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").

Output

vision.projects.locations.images.annotate

Run image detection and annotation for a batch of images.

google_vision.vision.projects.locations.images.annotate({
  "parent": ""
}, context)

Input

  • input object
    • parent required string: Optional. Target project and location to make a call. Format: projects/{project-id}/locations/{location-id}. If no parent is specified, a region will be chosen automatically. Supported location-ids: us (USA only), asia (East Asia areas such as Japan and Taiwan), eu (the European Union). Example: projects/project-A/locations/eu.
    • body GoogleCloudVisionV1p2beta1BatchAnnotateImagesRequest
    • $.xgafv string (values: 1, 2): V1 error format.
    • access_token string: OAuth access token.
    • alt string (values: json, media, proto): Data format for response.
    • callback string: JSONP
    • fields string: Selector specifying which fields to include in a partial response.
    • key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
    • oauth_token string: OAuth 2.0 token for the current user.
    • prettyPrint boolean: Returns response with indentations and line breaks.
    • quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
    • upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
    • uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").

Output

vision.projects.locations.images.asyncBatchAnnotate

Run asynchronous image detection and annotation for a list of images. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateImagesResponse (results). This service writes image annotation outputs to JSON files in the customer's GCS bucket, each JSON file containing a BatchAnnotateImagesResponse proto.

google_vision.vision.projects.locations.images.asyncBatchAnnotate({
  "parent": ""
}, context)

Input

  • input object
    • parent required string: Optional. Target project and location to make a call. Format: projects/{project-id}/locations/{location-id}. If no parent is specified, a region will be chosen automatically. Supported location-ids: us (USA only), asia (East Asia areas such as Japan and Taiwan), eu (the European Union). Example: projects/project-A/locations/eu.
    • body GoogleCloudVisionV1p2beta1AsyncBatchAnnotateImagesRequest
    • $.xgafv string (values: 1, 2): V1 error format.
    • access_token string: OAuth access token.
    • alt string (values: json, media, proto): Data format for response.
    • callback string: JSONP
    • fields string: Selector specifying which fields to include in a partial response.
    • key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
    • oauth_token string: OAuth 2.0 token for the current user.
    • prettyPrint boolean: Returns response with indentations and line breaks.
    • quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
    • upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
    • uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").

Output

Definitions

AnnotateFileResponse

  • AnnotateFileResponse object: Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
    • responses array: Individual responses to images found within the file. This field will be empty if the error field is set.
    • error Status
    • inputConfig InputConfig
    • totalPages integer: This field gives the total number of pages in the file.

AnnotateImageResponse

AsyncAnnotateFileResponse

  • AsyncAnnotateFileResponse object: The response for a single offline file annotation request.

AsyncBatchAnnotateFilesResponse

  • AsyncBatchAnnotateFilesResponse object: Response to an async batch file annotation request.
    • responses array: The list of file annotation responses, one for each request in AsyncBatchAnnotateFilesRequest.

AsyncBatchAnnotateImagesResponse

  • AsyncBatchAnnotateImagesResponse object: Response to an async batch image annotation request.

BatchAnnotateFilesResponse

  • BatchAnnotateFilesResponse object: A list of file annotation responses.
    • responses array: The list of file annotation responses, each response corresponding to each AnnotateFileRequest in BatchAnnotateFilesRequest.

BatchOperationMetadata

  • BatchOperationMetadata object: Metadata for the batch operations such as the current state. This is included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.
    • endTime string: The time when the batch request is finished and google.longrunning.Operation.done is set to true.
    • state string (values: STATE_UNSPECIFIED, PROCESSING, SUCCESSFUL, FAILED, CANCELLED): The current state of the batch operation.
    • submitTime string: The time when the batch request was submitted to the server.

Block

  • Block object: Logical element on the page.
    • blockType string (values: UNKNOWN, TEXT, TABLE, PICTURE, RULER, BARCODE): Detected block type (text, image etc) for this block.
    • boundingBox BoundingPoly
    • confidence number: Confidence of the OCR results on the block. Range [0, 1].
    • paragraphs array: List of paragraphs in this block (if this block is of type text).
    • property TextProperty

BoundingPoly

  • BoundingPoly object: A bounding polygon for the detected image annotation.
    • normalizedVertices array: The bounding polygon normalized vertices.
    • vertices array: The bounding polygon vertices.
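
For example, a BoundingPoly for a detected region might carry pixel-space Vertex coordinates like the sketch below (values are illustrative; responses may use normalizedVertices in the [0, 1] range instead):

{
  "vertices": [
    { "x": 10, "y": 10 },
    { "x": 200, "y": 10 },
    { "x": 200, "y": 120 },
    { "x": 10, "y": 120 }
  ]
}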

Color

  • Color object: Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness; for example, the fields of this representation can be trivially provided to the constructor of "java.awt.Color" in Java; it can also be trivially provided to UIColor's "+colorWithRed:green:blue:alpha" method in iOS; and, with just a little work, it can be easily formatted into a CSS "rgba()" string in JavaScript as well. Note: this proto does not carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications SHOULD assume the sRGB color space. Note: when color equality needs to be decided, implementations, unless documented otherwise, will treat two colors as equal if all their red, green, blue and alpha values each differ by at most 1e-5.

    Example (Java):

    import com.google.type.Color;

    // ...
    public static java.awt.Color fromProto(Color protocolor) {
      float alpha = protocolor.hasAlpha()
        ? protocolor.getAlpha().getValue()
        : 1.0;
      return new java.awt.Color(
        protocolor.getRed(),
        protocolor.getGreen(),
        protocolor.getBlue(),
        alpha);
    }

    public static Color toProto(java.awt.Color color) {
      float red = (float) color.getRed();
      float green = (float) color.getGreen();
      float blue = (float) color.getBlue();
      float denominator = 255.0;
      Color.Builder resultBuilder = Color
        .newBuilder()
        .setRed(red / denominator)
        .setGreen(green / denominator)
        .setBlue(blue / denominator);
      int alpha = color.getAlpha();
      if (alpha != 255) {
        resultBuilder.setAlpha(
          FloatValue
            .newBuilder()
            .setValue(((float) alpha) / denominator)
            .build());
      }
      return resultBuilder.build();
    }
    // ...

    Example (iOS / Obj-C):

    // ...
    static UIColor* fromProto(Color* protocolor) {
      float red = [protocolor red];
      float green = [protocolor green];
      float blue = [protocolor blue];
      FloatValue* alpha_wrapper = [protocolor alpha];
      float alpha = 1.0;
      if (alpha_wrapper != nil) {
        alpha = [alpha_wrapper value];
      }
      return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
    }

    static Color* toProto(UIColor* color) {
      CGFloat red, green, blue, alpha;
      if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) {
        return nil;
      }
      Color* result = [[Color alloc] init];
      [result setRed:red];
      [result setGreen:green];
      [result setBlue:blue];
      if (alpha <= 0.9999) {
        [result setAlpha:floatWrapperWithValue(alpha)];
      }
      [result autorelease];
      return result;
    }
    // ...

    Example (JavaScript):

    // ...
    var protoToCssColor = function(rgb_color) {
      var redFrac = rgb_color.red || 0.0;
      var greenFrac = rgb_color.green || 0.0;
      var blueFrac = rgb_color.blue || 0.0;
      var red = Math.floor(redFrac * 255);
      var green = Math.floor(greenFrac * 255);
      var blue = Math.floor(blueFrac * 255);
      if (!('alpha' in rgb_color)) {
        return rgbToCssColor_(red, green, blue);
      }
      var alphaFrac = rgb_color.alpha.value || 0.0;
      var rgbParams = [red, green, blue].join(',');
      return ['rgba(', rgbParams, ',', alphaFrac, ')'].join('');
    };

    var rgbToCssColor_ = function(red, green, blue) {
      var rgbNumber = new Number((red << 16) | (green << 8) | blue);
      var hexString = rgbNumber.toString(16);
      var missingZeros = 6 - hexString.length;
      var resultBuilder = ['#'];
      for (var i = 0; i < missingZeros; i++) {
        resultBuilder.push('0');
      }
      resultBuilder.push(hexString);
      return resultBuilder.join('');
    };
    // ...
    • alpha number: The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation: pixel color = alpha * (this color) + (1.0 - alpha) * (background color) This means that a value of 1.0 corresponds to a solid color, whereas a value of 0.0 corresponds to a completely transparent color. This uses a wrapper message rather than a simple float scalar so that it is possible to distinguish between a default value and the value being unset. If omitted, this color object is to be rendered as a solid color (as if the alpha value had been explicitly given with a value of 1.0).
    • blue number: The amount of blue in the color as a value in the interval [0, 1].
    • green number: The amount of green in the color as a value in the interval [0, 1].
    • red number: The amount of red in the color as a value in the interval [0, 1].

ColorInfo

  • ColorInfo object: Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image.
    • color Color
    • pixelFraction number: The fraction of pixels the color occupies in the image. Value in range [0, 1].
    • score number: Image-specific score for this color. Value in range [0, 1].

CropHint

  • CropHint object: Single crop hint that is used to generate a new crop when serving an image.
    • boundingPoly BoundingPoly
    • confidence number: Confidence of this being a salient region. Range [0, 1].
    • importanceFraction number: Fraction of importance of this salient region with respect to the original image.

CropHintsAnnotation

  • CropHintsAnnotation object: Set of crop hints that are used to generate new crops when serving images.
    • cropHints array: Crop hint results.

DetectedBreak

  • DetectedBreak object: Detected start or end of a structural component.
    • isPrefix boolean: True if break prepends the element.
    • type string (values: UNKNOWN, SPACE, SURE_SPACE, EOL_SURE_SPACE, HYPHEN, LINE_BREAK): Detected break type.

DetectedLanguage

  • DetectedLanguage object: Detected language for a structural component.
    • confidence number: Confidence of detected language. Range [0, 1].
    • languageCode string: The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.

DominantColorsAnnotation

  • DominantColorsAnnotation object: Set of dominant colors and their corresponding scores.
    • colors array: RGB color values with their score and pixel fraction.

EntityAnnotation

  • EntityAnnotation object: Set of detected entity features.
    • boundingPoly BoundingPoly
    • confidence number: Deprecated. Use score instead. The accuracy of the entity detection in an image. For example, for an image in which the "Eiffel Tower" entity is detected, this field represents the confidence that there is a tower in the query image. Range [0, 1].
    • description string: Entity textual description, expressed in its locale language.
    • locale string: The language code for the locale in which the entity textual description is expressed.
    • locations array: The location information for the detected entity. Multiple LocationInfo elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken. Location information is usually present for landmarks.
    • mid string: Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
    • properties array: Some entities may have optional user-supplied Property (name/value) fields, such as a score or string that qualifies the entity.
    • score number: Overall score of the result. Range [0, 1].
    • topicality number: The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].

FaceAnnotation

  • FaceAnnotation object: A face annotation object contains the results of face detection.
    • angerLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Anger likelihood.
    • blurredLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Blurred likelihood.
    • boundingPoly BoundingPoly
    • detectionConfidence number: Detection confidence. Range [0, 1].
    • fdBoundingPoly BoundingPoly
    • headwearLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Headwear likelihood.
    • joyLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Joy likelihood.
    • landmarkingConfidence number: Face landmarking confidence. Range [0, 1].
    • landmarks array: Detected face landmarks.
    • panAngle number: Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].
    • rollAngle number: Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].
    • sorrowLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Sorrow likelihood.
    • surpriseLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Surprise likelihood.
    • tiltAngle number: Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].
    • underExposedLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Under-exposed likelihood.

GcsDestination

  • GcsDestination object: The Google Cloud Storage location where the output will be written to.
    • uri string: Google Cloud Storage URI prefix where the results will be stored. Results will be in JSON format, each preceded by its corresponding input URI prefix. This field can represent either a GCS file prefix or a GCS directory; in either case, the uri should be unique, because getting all of the output files requires a wildcard GCS search on the uri prefix you provide. Examples: File prefix: gs://bucket-name/here/filenameprefix (the output files will be created in gs://bucket-name/here/ and their names will begin with "filenameprefix"). Directory prefix: gs://bucket-name/some/location/ (the output files will be created in gs://bucket-name/some/location/ and their names could be anything, because no filename prefix was specified). If multiple outputs are produced, each response is still an AnnotateFileResponse, each of which contains some subset of the full list of AnnotateImageResponse. Multiple outputs can happen if, for example, the output JSON is too large and overflows into multiple sharded files.

GcsSource

  • GcsSource object: The Google Cloud Storage location where the input will be read from.
    • uri string: Google Cloud Storage URI for the input file. This must only be a Google Cloud Storage object. Wildcards are not currently supported.

GoogleCloudVisionV1p1beta1AnnotateFileResponse

  • GoogleCloudVisionV1p1beta1AnnotateFileResponse object: Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.

GoogleCloudVisionV1p1beta1AnnotateImageResponse

GoogleCloudVisionV1p1beta1AsyncAnnotateFileResponse

GoogleCloudVisionV1p1beta1AsyncBatchAnnotateFilesResponse

  • GoogleCloudVisionV1p1beta1AsyncBatchAnnotateFilesResponse object: Response to an async batch file annotation request.

GoogleCloudVisionV1p1beta1Block

GoogleCloudVisionV1p1beta1BoundingPoly

GoogleCloudVisionV1p1beta1ColorInfo

  • GoogleCloudVisionV1p1beta1ColorInfo object: Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image.
    • color Color
    • pixelFraction number: The fraction of pixels the color occupies in the image. Value in range [0, 1].
    • score number: Image-specific score for this color. Value in range [0, 1].

GoogleCloudVisionV1p1beta1CropHint

  • GoogleCloudVisionV1p1beta1CropHint object: Single crop hint that is used to generate a new crop when serving an image.
    • boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly
    • confidence number: Confidence of this being a salient region. Range [0, 1].
    • importanceFraction number: Fraction of importance of this salient region with respect to the original image.

GoogleCloudVisionV1p1beta1CropHintsAnnotation

  • GoogleCloudVisionV1p1beta1CropHintsAnnotation object: Set of crop hints that are used to generate new crops when serving images.

GoogleCloudVisionV1p1beta1DominantColorsAnnotation

  • GoogleCloudVisionV1p1beta1DominantColorsAnnotation object: Set of dominant colors and their corresponding scores.

GoogleCloudVisionV1p1beta1EntityAnnotation

  • GoogleCloudVisionV1p1beta1EntityAnnotation object: Set of detected entity features.
    • boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly
    • confidence number: Deprecated. Use score instead. The accuracy of the entity detection in an image. For example, for an image in which the "Eiffel Tower" entity is detected, this field represents the confidence that there is a tower in the query image. Range [0, 1].
    • description string: Entity textual description, expressed in its locale language.
    • locale string: The language code for the locale in which the entity textual description is expressed.
    • locations array: The location information for the detected entity. Multiple LocationInfo elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken. Location information is usually present for landmarks.
    • mid string: Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
    • properties array: Some entities may have optional user-supplied Property (name/value) fields, such as a score or string that qualifies the entity.
    • score number: Overall score of the result. Range [0, 1].
    • topicality number: The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].

GoogleCloudVisionV1p1beta1FaceAnnotation

  • GoogleCloudVisionV1p1beta1FaceAnnotation object: A face annotation object contains the results of face detection.
    • angerLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Anger likelihood.
    • blurredLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Blurred likelihood.
    • boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly
    • detectionConfidence number: Detection confidence. Range [0, 1].
    • fdBoundingPoly GoogleCloudVisionV1p1beta1BoundingPoly
    • headwearLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Headwear likelihood.
    • joyLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Joy likelihood.
    • landmarkingConfidence number: Face landmarking confidence. Range [0, 1].
    • landmarks array: Detected face landmarks.
    • panAngle number: Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].
    • rollAngle number: Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].
    • sorrowLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Sorrow likelihood.
    • surpriseLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Surprise likelihood.
    • tiltAngle number: Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].
    • underExposedLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Under-exposed likelihood.

GoogleCloudVisionV1p1beta1FaceAnnotationLandmark

  • GoogleCloudVisionV1p1beta1FaceAnnotationLandmark object: A face-specific landmark (for example, a face feature).
    • position GoogleCloudVisionV1p1beta1Position
    • type string (values: UNKNOWN_LANDMARK, LEFT_EYE, RIGHT_EYE, LEFT_OF_LEFT_EYEBROW, RIGHT_OF_LEFT_EYEBROW, LEFT_OF_RIGHT_EYEBROW, RIGHT_OF_RIGHT_EYEBROW, MIDPOINT_BETWEEN_EYES, NOSE_TIP, UPPER_LIP, LOWER_LIP, MOUTH_LEFT, MOUTH_RIGHT, MOUTH_CENTER, NOSE_BOTTOM_RIGHT, NOSE_BOTTOM_LEFT, NOSE_BOTTOM_CENTER, LEFT_EYE_TOP_BOUNDARY, LEFT_EYE_RIGHT_CORNER, LEFT_EYE_BOTTOM_BOUNDARY, LEFT_EYE_LEFT_CORNER, RIGHT_EYE_TOP_BOUNDARY, RIGHT_EYE_RIGHT_CORNER, RIGHT_EYE_BOTTOM_BOUNDARY, RIGHT_EYE_LEFT_CORNER, LEFT_EYEBROW_UPPER_MIDPOINT, RIGHT_EYEBROW_UPPER_MIDPOINT, LEFT_EAR_TRAGION, RIGHT_EAR_TRAGION, LEFT_EYE_PUPIL, RIGHT_EYE_PUPIL, FOREHEAD_GLABELLA, CHIN_GNATHION, CHIN_LEFT_GONION, CHIN_RIGHT_GONION, LEFT_CHEEK_CENTER, RIGHT_CHEEK_CENTER): Face landmark type.

GoogleCloudVisionV1p1beta1GcsDestination

  • GoogleCloudVisionV1p1beta1GcsDestination object: The Google Cloud Storage location where the output will be written to.
    • uri string: Google Cloud Storage URI prefix where the results will be stored. Results will be in JSON format, each preceded by its corresponding input URI prefix. This field can represent either a GCS file prefix or a GCS directory; in either case, the uri should be unique, because getting all of the output files requires a wildcard GCS search on the uri prefix you provide. Examples: File prefix: gs://bucket-name/here/filenameprefix (the output files will be created in gs://bucket-name/here/ and their names will begin with "filenameprefix"). Directory prefix: gs://bucket-name/some/location/ (the output files will be created in gs://bucket-name/some/location/ and their names could be anything, because no filename prefix was specified). If multiple outputs are produced, each response is still an AnnotateFileResponse, each of which contains some subset of the full list of AnnotateImageResponse. Multiple outputs can happen if, for example, the output JSON is too large and overflows into multiple sharded files.

GoogleCloudVisionV1p1beta1GcsSource

  • GoogleCloudVisionV1p1beta1GcsSource object: The Google Cloud Storage location where the input will be read from.
    • uri string: Google Cloud Storage URI for the input file. This must only be a Google Cloud Storage object. Wildcards are not currently supported.

GoogleCloudVisionV1p1beta1ImageAnnotationContext

  • GoogleCloudVisionV1p1beta1ImageAnnotationContext object: If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
    • pageNumber integer: If the file was a PDF or TIFF, this field gives the page number within the file used to produce the image.
    • uri string: The URI of the file used to produce the image.

GoogleCloudVisionV1p1beta1ImageProperties

GoogleCloudVisionV1p1beta1InputConfig

  • GoogleCloudVisionV1p1beta1InputConfig object: The desired input location and metadata.
    • content string: File content, represented as a stream of bytes. Note: As with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64. Currently, this field only works for BatchAnnotateFiles requests. It does not work for AsyncBatchAnnotateFiles requests.
    • gcsSource GoogleCloudVisionV1p1beta1GcsSource
    • mimeType string: The type of the file. Currently only "application/pdf", "image/tiff" and "image/gif" are supported. Wildcards are not supported.

GoogleCloudVisionV1p1beta1LocalizedObjectAnnotation

  • GoogleCloudVisionV1p1beta1LocalizedObjectAnnotation object: Set of detected objects with bounding boxes.
    • boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly
    • languageCode string: The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
    • mid string: Object ID that should align with EntityAnnotation mid.
    • name string: Object name, expressed in its language_code language.
    • score number: Score of the result. Range [0, 1].

GoogleCloudVisionV1p1beta1LocationInfo

  • GoogleCloudVisionV1p1beta1LocationInfo object: Detected entity location information.

GoogleCloudVisionV1p1beta1NormalizedVertex

  • GoogleCloudVisionV1p1beta1NormalizedVertex object: A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
    • x number: X coordinate.
    • y number: Y coordinate.

GoogleCloudVisionV1p1beta1OperationMetadata

  • GoogleCloudVisionV1p1beta1OperationMetadata object: Contains metadata for the BatchAnnotateImages operation.
    • createTime string: The time when the batch request was received.
    • state string (values: STATE_UNSPECIFIED, CREATED, RUNNING, DONE, CANCELLED): Current state of the batch operation.
    • updateTime string: The time when the operation result was last updated.

GoogleCloudVisionV1p1beta1OutputConfig

  • GoogleCloudVisionV1p1beta1OutputConfig object: The desired output location and metadata.
    • batchSize integer: The max number of response protos to put into each output JSON file on Google Cloud Storage. The valid range is [1, 100]. If not specified, the default value is 20. For example, for one pdf file with 100 pages, 100 response protos will be generated. If batch_size = 20, then 5 json files each containing 20 response protos will be written under the prefix gcs_destination.uri. Currently, batch_size only applies to GcsDestination, with potential future support for other output configurations.
    • gcsDestination GoogleCloudVisionV1p1beta1GcsDestination
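
A worked sketch of the batching arithmetic above (the destination URI is a hypothetical placeholder): a 100-page PDF yields 100 response protos; with a batchSize of 20 they are sharded into 5 output JSON files under the prefix:

{
  "gcsDestination": { "uri": "gs://bucket-name/results/" },
  "batchSize": 20 // 100 responses / 20 per file = 5 output files
}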

GoogleCloudVisionV1p1beta1Page

  • GoogleCloudVisionV1p1beta1Page object: Detected page from OCR.
    • blocks array: List of blocks of text, images etc on this page.
    • confidence number: Confidence of the OCR results on the page. Range [0, 1].
    • height integer: Page height. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
    • property GoogleCloudVisionV1p1beta1TextAnnotationTextProperty
    • width integer: Page width. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.

GoogleCloudVisionV1p1beta1Paragraph

GoogleCloudVisionV1p1beta1Position

  • GoogleCloudVisionV1p1beta1Position object: A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
    • x number: X coordinate.
    • y number: Y coordinate.
    • z number: Z coordinate (or depth).

GoogleCloudVisionV1p1beta1Product

  • GoogleCloudVisionV1p1beta1Product object: A Product contains ReferenceImages.
    • description string: User-provided metadata to be stored with this product. Must be at most 4096 characters long.
    • displayName string: The user-provided name for this Product. Must not be empty. Must be at most 4096 characters long.
    • name string: The resource name of the product. Format is: projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID. This field is ignored when creating a product.
    • productCategory string: Immutable. The category for the product identified by the reference image. This should be one of "homegoods-v2", "apparel-v2", "toys-v2", "packagedgoods-v1" or "general-v1". The legacy categories "homegoods", "apparel", and "toys" are still supported, but these should not be used for new products.
    • productLabels array: Key-value pairs that can be attached to a product. At query time, constraints can be specified based on the product_labels. Note that integer values can be provided as strings, e.g. "1199". Only strings with integer values can match a range-based restriction which is to be supported soon. Multiple values can be assigned to the same key. One product may have up to 500 product_labels. Notice that the total number of distinct product_labels over all products in one ProductSet cannot exceed 1M, otherwise the product search pipeline will refuse to work for that ProductSet.

GoogleCloudVisionV1p1beta1ProductKeyValue

  • GoogleCloudVisionV1p1beta1ProductKeyValue object: A product label represented as a key-value pair.
    • key string: The key of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
    • value string: The value of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.

GoogleCloudVisionV1p1beta1ProductSearchResults

  • GoogleCloudVisionV1p1beta1ProductSearchResults object: Results for a product search request.
    • indexTime string: Timestamp of the index which provided these results. Products added to the product set and products removed from the product set after this time are not reflected in the current results.
    • productGroupedResults array: List of results grouped by products detected in the query image. Each entry corresponds to one bounding polygon in the query image, and contains the matching products specific to that region. There may be duplicate product matches in the union of all the per-product results.
    • results array: List of results, one for each product match.

GoogleCloudVisionV1p1beta1ProductSearchResultsGroupedResult

GoogleCloudVisionV1p1beta1ProductSearchResultsObjectAnnotation

  • GoogleCloudVisionV1p1beta1ProductSearchResultsObjectAnnotation object: Prediction for what the object in the bounding box is.
    • languageCode string: The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
    • mid string: Object ID that should align with EntityAnnotation mid.
    • name string: Object name, expressed in its language_code language.
    • score number: Score of the result. Range [0, 1].

GoogleCloudVisionV1p1beta1ProductSearchResultsResult

  • GoogleCloudVisionV1p1beta1ProductSearchResultsResult object: Information about a product.
    • image string: The resource name of the image from the product that is the closest match to the query.
    • product GoogleCloudVisionV1p1beta1Product
    • score number: A confidence level on the match, ranging from 0 (no confidence) to 1 (full confidence).

GoogleCloudVisionV1p1beta1Property

  • GoogleCloudVisionV1p1beta1Property object: A Property consists of a user-supplied name/value pair.
    • name string: Name of the property.
    • uint64Value string: Value of numeric properties.
    • value string: Value of the property.

GoogleCloudVisionV1p1beta1SafeSearchAnnotation

  • GoogleCloudVisionV1p1beta1SafeSearchAnnotation object: Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
    • adult string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Represents the adult content likelihood for the image. Adult content may contain elements such as nudity, pornographic images or cartoons, or sexual activities.
    • medical string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Likelihood that this is a medical image.
    • racy string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Likelihood that the request image contains racy content. Racy content may include (but is not limited to) skimpy or sheer clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas.
    • spoof string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Spoof likelihood. The likelihood that a modification was made to the image's canonical version to make it appear funny or offensive.
    • violence string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Likelihood that this image contains violent content.

GoogleCloudVisionV1p1beta1Symbol

GoogleCloudVisionV1p1beta1TextAnnotation

  • GoogleCloudVisionV1p1beta1TextAnnotation object: TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.

GoogleCloudVisionV1p1beta1TextAnnotationDetectedBreak

  • GoogleCloudVisionV1p1beta1TextAnnotationDetectedBreak object: Detected start or end of a structural component.
    • isPrefix boolean: True if break prepends the element.
    • type string (values: UNKNOWN, SPACE, SURE_SPACE, EOL_SURE_SPACE, HYPHEN, LINE_BREAK): Detected break type.

GoogleCloudVisionV1p1beta1TextAnnotationDetectedLanguage

  • GoogleCloudVisionV1p1beta1TextAnnotationDetectedLanguage object: Detected language for a structural component.
    • confidence number: Confidence of detected language. Range [0, 1].
    • languageCode string: The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.

GoogleCloudVisionV1p1beta1TextAnnotationTextProperty

GoogleCloudVisionV1p1beta1Vertex

  • GoogleCloudVisionV1p1beta1Vertex object: A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
    • x integer: X coordinate.
    • y integer: Y coordinate.

GoogleCloudVisionV1p1beta1WebDetection

GoogleCloudVisionV1p1beta1WebDetectionWebEntity

  • GoogleCloudVisionV1p1beta1WebDetectionWebEntity object: Entity deduced from similar images on the Internet.
    • description string: Canonical description of the entity, in English.
    • entityId string: Opaque entity ID.
    • score number: Overall relevancy score for the entity. Not normalized and not comparable across different image queries.

GoogleCloudVisionV1p1beta1WebDetectionWebImage

  • GoogleCloudVisionV1p1beta1WebDetectionWebImage object: Metadata for online images.
    • score number: (Deprecated) Overall relevancy score for the image.
    • url string: The result image URL.

GoogleCloudVisionV1p1beta1WebDetectionWebLabel

  • GoogleCloudVisionV1p1beta1WebDetectionWebLabel object: Label to provide extra metadata for the web detection.
    • label string: Label for extra metadata.
    • languageCode string: The BCP-47 language code for label, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.

GoogleCloudVisionV1p1beta1WebDetectionWebPage

  • GoogleCloudVisionV1p1beta1WebDetectionWebPage object: Metadata for web pages.
    • fullMatchingImages array: Fully matching images on the page. Can include resized copies of the query image.
    • pageTitle string: Title for the web page, may contain HTML markups.
    • partialMatchingImages array: Partial matching images on the page. Those images are similar enough to share some key-point features. For example an original image will likely have partial matching for its crops.
    • score number: (Deprecated) Overall relevancy score for the web page.
    • url string: The result web page URL.

GoogleCloudVisionV1p1beta1Word

GoogleCloudVisionV1p2beta1AnnotateFileRequest

  • GoogleCloudVisionV1p2beta1AnnotateFileRequest object: A request to annotate a single file, e.g. a PDF, TIFF or GIF file.
    • features array: Required. Requested features.
    • imageContext GoogleCloudVisionV1p2beta1ImageContext
    • inputConfig GoogleCloudVisionV1p2beta1InputConfig
    • pages array: Pages of the file on which to perform image annotation. Page numbering starts from 1; the first page of the file is page 1. At most 5 pages are supported per request. Pages can be negative: page 1 means the first page, page 2 the second page, page -1 the last page, and page -2 the second-to-last page. If the file is a GIF instead of a PDF or TIFF, "page" refers to GIF frames. If this field is empty, the service performs image annotation on the first 5 pages of the file by default.
      • items integer
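
For instance, an AnnotateFileRequest selecting the first, second, and last pages of a PDF could look like the following sketch (the gs:// URI is a hypothetical placeholder):

{
  "inputConfig": {
    "gcsSource": { "uri": "gs://bucket-name/input.pdf" },
    "mimeType": "application/pdf"
  },
  "features": [{ "type": "DOCUMENT_TEXT_DETECTION" }],
  "pages": [1, 2, -1] // -1 selects the last page
}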

GoogleCloudVisionV1p2beta1AnnotateFileResponse

  • GoogleCloudVisionV1p2beta1AnnotateFileResponse object: Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.

GoogleCloudVisionV1p2beta1AnnotateImageRequest

GoogleCloudVisionV1p2beta1AnnotateImageResponse