
LTU Compute API (beta)

The LTU Compute API allows you to send an image and get all the data related to it:

  • heatmap
  • colours
  • text
  • object detection
  • labels

Analyze

It also allows you to compare two images in order to get the distance and the fine differences between them.

No image database is needed; no images are stored on the platform.

A token has to be sent in the header of each request for authorization.

This documentation describes the available functions and how to call them. The LTU Compute API is available through this URL: https://api.ltutech.com/compute/v1/image. The requests have been designed to make your task easier: you don't have to worry about the type of the request as long as the parameter names are correct.

All the functions return a JSON result that contains:

  • an answer in the "content" part. The content depends on the called service.
  • a return code. 0 means all went well.
  • a message to explain issues in human readable language.
{
    "content": {
        ...
    },
    "code": "0",
    "message": "ok"
}
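
For illustration, here is a minimal Python sketch (the requests library and the call_compute helper name are our own choices, not part of the API; a token is assumed to have already been obtained as described below) that posts to an endpoint and checks this common envelope:

import requests

API_URL = "https://api.ltutech.com/compute/v1/image"

def call_compute(endpoint, token, user_name, **data):
    """Post to an LTU Compute endpoint and return its "content" part.

    Raises an exception when the returned code is not "0".
    """
    response = requests.post(
        f"{API_URL}/{endpoint}",
        headers={"token": token, "userName": user_name},
        data=data,
    )
    response.raise_for_status()
    result = response.json()
    if result["code"] != "0":  # "0" means all went well
        raise RuntimeError(f"{endpoint} failed: {result['message']}")
    return result["content"]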

Request Access

To get access to our services, you will have to request an API Key and a platform account from our development team. To do so, please fill in the request form.

Contact Us

Should you have any feedback or questions, please feel free to contact us.

How to get a token

To have access to this service you must be logged in, which means you need an account. With your API Key - which you can get on the platform in the admin tab - and your login, you can get a token. If you don't have an account yet, please contact us (support@ltutech.com).

Authentication

- Syntax:

POST  https://api.ltutech.com/compute/v1/image/getToken

- Parameters:

The function getToken takes as input:

  • headers:

    • Apikey: the API Key of your application
  • data:

    • login: your login account
    • password: your password

- Returns:

A JSON object containing the account data and the access token

- Curl:

curl -X POST https://api.ltutech.com/compute/v1/image/getToken -d login=$USER -d password=$PASSWORD --header "Apikey: $API_KEY"

- Example of response:

{
    "message": "ok",
    "code": "0",
    "content": {
        "token": {
            "access_token": "AQAAA...1F",
            "token_type": "bearer",
            "expires_in": 1209599,
            "userName": "xxx",
            "userId": "8a57...21a",
            "organizationId": "5211196...afe968ca9",
            "organizationName": "xxx",
            "roles": "Administrator,CustomerAccountManager,Designer",
            "apiversion": "3.2.0",
            ".issued": "Tue, 09 Jun 2020 09:40:28 GMT",
            ".expires": "Tue, 23 Jun 2020 09:40:28 GMT"
        }
    }
}
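
The same call in Python could look like this (a sketch using the requests library; the environment variable names are placeholders):

import os
import requests

# Sketch of the getToken call shown above.
response = requests.post(
    "https://api.ltutech.com/compute/v1/image/getToken",
    headers={"Apikey": os.environ["API_KEY"]},
    data={"login": os.environ["USER"], "password": os.environ["PASSWORD"]},
)
result = response.json()
if result["code"] == "0":
    token = result["content"]["token"]["access_token"]
    print("token:", token)
else:
    print("getToken failed:", result["message"])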

How to obtain information about an image

heatMap

Call this API endpoint to compute and obtain the heatmap of an image. The heatmap can be used to understand where the interesting points for image search are located.

Example of Heatmap

- Syntax:

POST  https://api.ltutech.com/compute/v1/image/heatMap

- Parameters:

The function heatMap takes as input:

  • headers

    • token
    • userName
  • data:

    • image: an image buffer or URL

- Returns:

A JSON object containing the heatmap image

- Curl:

curl -d 'image=http://data.onprint.com/ltu-core-api/Nobita.jpg'  -X POST https://api.ltutech.com/compute/v1/image/heatMap --header "userName: $USER" --header "token: $TOKEN"

- Example of response:

{
  "message": "ok",
  "code": "0",
  "content": {
    "heatmap": "/9j/4AAQSkZJRgABAQAAAQABAAD/[...]LgKKKBhRRRQAUUUUAf/Z”
      }
}
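
Since the heatmap is returned as a base64-encoded image buffer (the sample above is a JPEG), it can be decoded and written to a file. A minimal Python sketch, with placeholder environment variables for the credentials:

import base64
import os
import requests

response = requests.post(
    "https://api.ltutech.com/compute/v1/image/heatMap",
    headers={"userName": os.environ["USER"], "token": os.environ["TOKEN"]},
    data={"image": "http://data.onprint.com/ltu-core-api/Nobita.jpg"},
)
result = response.json()
if result["code"] == "0":
    # The heatmap is a base64-encoded image buffer.
    with open("heatmap.jpg", "wb") as output:
        output.write(base64.b64decode(result["content"]["heatmap"]))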

colorPalette

Call this API endpoint to get the most prevalent colors within an image.

color palette

- Syntax:

POST https://api.ltutech.com/compute/v1/image/colorPalette

- Parameters:

The function colorPalette takes as input:

  • headers
    • token
    • userName
  • data:
    • image: an image buffer or URL

- Returns:

A JSON object containing a list of colors.

- Curl:

curl -d 'image=http://data.onprint.com/ltu-core-api/Nobita.jpg'  -X POST https://api.ltutech.com/compute/v1/image/colorPalette --header "userName: $USER" --header "token: $TOKEN"

- Example of response:

{
    "code": "0",
    "message": "ok",
    "content": {
        "colors": [
            [163, 147, 126, 30],
            [105, 100, 91, 27],
            [56, 53, 49, 24],
            [200, 194, 184, 11],
            [38, 29, 22, 4],
            [103, 69, 46, 2],
            [173, 117, 88, 1],
            [32, 34, 35, 1]
        ]
    }
}
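
A Python sketch that prints the palette (note: reading each entry as [R, G, B, weight] is our assumption, suggested by the sample above where the fourth values sum to 100; it is not confirmed by the API documentation):

import os
import requests

response = requests.post(
    "https://api.ltutech.com/compute/v1/image/colorPalette",
    headers={"userName": os.environ["USER"], "token": os.environ["TOKEN"]},
    data={"image": "http://data.onprint.com/ltu-core-api/Nobita.jpg"},
)
result = response.json()
if result["code"] == "0":
    for entry in result["content"]["colors"]:
        # Assumption: [R, G, B, weight]; the weights in the sample sum to 100.
        r, g, b, weight = entry
        print(f"#{r:02x}{g:02x}{b:02x}  {weight}%")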

How to analyse image content

The LTU Compute API gives access to external function libraries to get more information from your images, such as OCR, object detection or labelling. The LTU Core API currently uses Google image algorithms as its external library.

textDetection

OCR

Call this API endpoint to perform optical character recognition (OCR), which detects printed and handwritten text in an image.

- Syntax:

POST  https://api.ltutech.com/compute/v1/image/textDetection

- Parameters:

The function textDetection takes as input:

  • headers
    • token
    • userName
  • data:
    • image: an image buffer or URL

- Returns:

A JSON object containing the detected text

- Curl:

curl -d 'image=http://data.onprint.com/ltu-core-api/chanel-crayon.jpg'  -X POST https://api.ltutech.com/compute/v1/image/textDetection --header "userName: $USER" --header "token: $TOKEN"

- Example of response:

{
    "content": {
        "textAnnotations": [{
            "locale": "fr",
            "description": "MADE IN ITALY\n92200 NEUILLY\nLE ROUGE CRAYON\nDE COULEUR MAT\nJUMBO L\u00c8VRES MAT\nLONGUE TENUE\n",
            "boundingPoly": {
                "vertices": [{
                    "y": 648,
                    "x": 465
                }, {
                    "y": 648,
                    "x": 907
                }, {
                    "y": 2286,
                    "x": 907
                }, {
                    "y": 2286,
                    "x": 465
                }]
            }
        }, {
            "description": "MADE",
            "boundingPoly": {
                "vertices": [{
                    "y": 2125,
                    "x": 513
                }, {
                    "y": 2113,
                    "x": 666
                }, {
                    "y": 2174,
                    "x": 670
                }, {
                    "y": 2186,
                    "x": 518
                }]
            }
        },
             [...]
    },
    "message": "ok",
    "code": "0"
}
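
A Python sketch that extracts the detected text (in the sample above, the first annotation holds the whole text and the following ones describe individual words):

import os
import requests

response = requests.post(
    "https://api.ltutech.com/compute/v1/image/textDetection",
    headers={"userName": os.environ["USER"], "token": os.environ["TOKEN"]},
    data={"image": "http://data.onprint.com/ltu-core-api/chanel-crayon.jpg"},
)
result = response.json()
if result["code"] == "0":
    annotations = result["content"]["textAnnotations"]
    if annotations:
        # First entry: the whole detected text.
        print(annotations[0]["description"])
        # Following entries: individual words with their bounding polygons.
        for word in annotations[1:]:
            print(word["description"], word["boundingPoly"]["vertices"])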

classification

Classification

Call this API endpoint to perform label prediction on an image.

- Syntax:

POST  https://api.ltutech.com/compute/v1/image/classification

- Parameters:

The function classification takes as input:

  • headers
    • token
    • userName
  • data:
    • image: an image buffer or URL

- Returns:

A JSON object containing a list of labels

- Curl:

curl -d 'image=http://data.onprint.com/ltu-core-api/Nobita.jpg'  -X POST https://api.ltutech.com/compute/v1/image/classification --header "userName: $USER" --header "token: $TOKEN"

- Example of result:

{
    "content": {
        "labelAnnotations": [{
            "score": 0.9805248379707336,
            "topicality": 0.9805248379707336,
            "mid": "/m/0215n",
            "description": "Cartoon"
        }, {
            "score": 0.978813886642456,
            "topicality": 0.978813886642456,
            "mid": "/m/095bb",
            "description": "Animated cartoon"
        }, {
            "score": 0.9443857669830322,
            "topicality": 0.9443857669830322,
            "mid": "/m/01k74n",
            "description": "Facial expression"
        }, {
            "score": 0.7678998708724976,
            "topicality": 0.7678998708724976,
            "mid": "/m/0hcr",
            "description": "Animation"
        }, {
            "score": 0.7291945219039917,
            "topicality": 0.7291945219039917,
            "mid": "/m/019nj4",
            "description": "Smile"
        }, {
            "score": 0.7039424180984497,
            "topicality": 0.7039424180984497,
            "mid": "/m/0ds99lh",
            "description": "Fun"
        }, {
            "score": 0.6912711262702942,
            "topicality": 0.6912711262702942,
            "mid": "/m/01kr8f",
            "description": "Illustration"
        }, {
            "score": 0.6590808033943176,
            "topicality": 0.6590808033943176,
            "mid": "/m/02h7lkt",
            "description": "Fictional character"
        }, {
            "score": 0.6522085666656494,
            "topicality": 0.6522085666656494,
            "mid": "/m/0jyfg",
            "description": "Glasses"
        }]
    },
    "message": "ok",
    "code": "0"
}
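
A Python sketch that keeps only the labels above a chosen score threshold (the 0.7 threshold is an arbitrary choice):

import os
import requests

response = requests.post(
    "https://api.ltutech.com/compute/v1/image/classification",
    headers={"userName": os.environ["USER"], "token": os.environ["TOKEN"]},
    data={"image": "http://data.onprint.com/ltu-core-api/Nobita.jpg"},
)
result = response.json()
if result["code"] == "0":
    for label in result["content"]["labelAnnotations"]:
        # Keep only confident labels.
        if label["score"] >= 0.7:
            print(f'{label["description"]}: {label["score"]:.2f}')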

objectsDetection

Objects Detection

Call this API endpoint to perform localized object detection.

- Syntax:

POST  https://api.ltutech.com/compute/v1/image/objectsDetection

- Parameters:

The function objectsDetection takes as input:

  • headers
    • token
    • userName
  • data:
    • image: an image buffer or URL

- Returns:

A JSON object containing a list of object descriptions

- Curl:

curl -d 'image=http://data.onprint.com/ltu-core-api/16-2.png'  -X POST https://api.ltutech.com/compute/v1/image/objectsDetection --header "userName: $USER" --header "token: $TOKEN"

- Example of response:

{
    "content": {
        "localizedObjectAnnotations": [{
            "score": 0.6799721717834473,
            "mid": "/m/01g317",
            "boundingPoly": {
                "normalizedVertices": [{
                    "y": 0.5424494743347168,
                    "x": 0.6926446557044983
                }, {
                    "y": 0.5424494743347168,
                    "x": 0.8713106513023376
                }, {
                    "y": 0.969238817691803,
                    "x": 0.8713106513023376
                }, {
                    "y": 0.969238817691803,
                    "x": 0.6926446557044983
                }]
            },
            "name": "Person"
        }, {
            "score": 0.6125506162643433,
            "mid": "/m/0jbk",
            "boundingPoly": {
                "normalizedVertices": [{
                    "y": 0.7334161400794983,
                    "x": 0.8467774391174316
                }, {
                    "y": 0.7334161400794983,
                    "x": 0.9417657256126404
                }, {
                    "y": 0.9152880311012268,
                    "x": 0.9417657256126404
                }, {
                    "y": 0.9152880311012268,
                    "x": 0.8467774391174316
                }]
            },
            "name": "Animal"
        },    "message": "ok",
    "code": "0"
}
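
A Python sketch that converts the normalized bounding boxes to pixel coordinates (assumption: normalizedVertices are relative to the original image size, so they are multiplied by its width and height, which you must provide yourself):

import os
import requests

IMAGE_URL = "http://data.onprint.com/ltu-core-api/16-2.png"
WIDTH, HEIGHT = 1024, 768  # replace with the real dimensions of IMAGE_URL

response = requests.post(
    "https://api.ltutech.com/compute/v1/image/objectsDetection",
    headers={"userName": os.environ["USER"], "token": os.environ["TOKEN"]},
    data={"image": IMAGE_URL},
)
result = response.json()
if result["code"] == "0":
    for obj in result["content"]["localizedObjectAnnotations"]:
        # Assumption: normalized coordinates scale with the image dimensions.
        box = [(v.get("x", 0) * WIDTH, v.get("y", 0) * HEIGHT)
               for v in obj["boundingPoly"]["normalizedVertices"]]
        print(obj["name"], round(obj["score"], 2), box)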

How to compare two images

The LTU Core API can also be used to compare two images to:

  • get the distance between them
  • get the differences between them

imagesDistance

Call imagesDistance to compute the distance between two images.

Distance

- Syntax:

POST  https://api.ltutech.com/compute/v1/image/imagesDistance

- Parameters:

The function imagesDistance takes as input:

  • headers:
    • token
    • userName
  • data:
    • refImage: an image buffer or URL
    • queryImage: an image buffer or URL

- Returns:

A JSON object containing the computed distance and matching details

- Curl:

curl  -d 'refImage=http://data.onprint.com/ltu-core-api/16-1.png' -d 'queryImage=http://data.onprint.com/ltu-core-api/16-2.png'  -X POST https://api.ltutech.com/compute/v1/image/imagesDistance --header "userName: $USER" --header "token: $TOKEN"

- Example of response:

{
    "code": "0",
    "message": "ok",
    "content": {
        "scores": {
            "matchStrengthWithoutWeighting": 0.0,
            "matchStrength": 0.0
        },
        "query": {
            "resizedDimensions": [512, 512],
            "originalDimensions": [225, 225]
        },
        "distance": 4.0,
        "category": "LOCALMATCHING",
        "decision": "Rejection due to geometric test",
        "reference": {
            "resizedDimensions": [512, 491],
            "originalDimensions": [229, 220]
        },
        "homography": {
            "destination": "query",
            "coefficients": [],
            "source": "reference"
        }
    }
}
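
The same comparison in Python (a sketch using the requests library, with placeholder environment variables for the credentials):

import os
import requests

response = requests.post(
    "https://api.ltutech.com/compute/v1/image/imagesDistance",
    headers={"userName": os.environ["USER"], "token": os.environ["TOKEN"]},
    data={
        "refImage": "http://data.onprint.com/ltu-core-api/16-1.png",
        "queryImage": "http://data.onprint.com/ltu-core-api/16-2.png",
    },
)
result = response.json()
if result["code"] == "0":
    content = result["content"]
    print("distance:", content["distance"])
    print("decision:", content["decision"])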

imagesDifferences

Call imagesDifferences to compute a fine comparison between two images and get the differences.

FIC

- Syntax:

POST  https://api.ltutech.com/compute/v1/image/imagesDifferences

- Parameters:

The function imagesDifferences takes as input:

  • headers:
    • token
    • userName
  • data:
    • image1: an image buffer or URL
    • image2: an image buffer or URL
    • new_format: if set to True, the result uses the new format, which gives details of the differences
    • version: an optional parameter to refine the comparison:
      • 1: fine detailed differences
      • 2: reduce noise on borders
      • 3: reduce noise on borders and smooth the differences
      • 4: reduce noise on borders, smooth the differences and mask small differences
      • 5: reduce noise on borders, smooth the differences (less than version 4) and mask small differences

- Returns:

A JSON object containing, in the "content" part, two image buffers and the score:

  • Image1 which specifies the result over the input image1
  • Image2 which specifies the result over the input image2
  • summary (new_format = True) which provides information about the overall result with:
    • name: the name of the component
    • score: the distance between the two images
    • status: the status of the execution
    • message: a human readable error explanation
    • resultInfos: information about the result (specific to the called function)
    • errors: a list of detailed errors

The content of the image fields differs according to the format used and is detailed below.

- With old format:

Curl:

curl -d 'image1=http://data.onprint.com/ltu-core-api/16-1.png' -d 'image2=http://data.onprint.com/ltu-core-api/16-2.png' -X POST https://api.ltutech.com/compute/v1/image/imagesDifferences --header "userName: $USER" --header "token: $TOKEN"

Example of result:

By default (new_format == false ) the returned result contains the fields ref_image, query_image and the score:

"content": {
    "ref_image": "/9j/4AAQSkZJRgAB[...]yxyO+Qhxj1FFFKU3JWYciP//Z",
    "query_image": "/9j/4AAQSkZJRg[...]/umopXDtkZHFFFTKrKSswP//Z",
    "score": 1.4
}

The two images highlight the differences.
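
A Python sketch that calls imagesDifferences with the old format and saves the two returned buffers to files (credentials via placeholder environment variables):

import base64
import os
import requests

response = requests.post(
    "https://api.ltutech.com/compute/v1/image/imagesDifferences",
    headers={"userName": os.environ["USER"], "token": os.environ["TOKEN"]},
    data={
        "image1": "http://data.onprint.com/ltu-core-api/16-1.png",
        "image2": "http://data.onprint.com/ltu-core-api/16-2.png",
    },
)
result = response.json()
if result["code"] == "0":
    content = result["content"]
    # Both images are returned as base64-encoded buffers (JPEG in the
    # sample above) highlighting the differences.
    with open("ref_image.jpg", "wb") as ref_file:
        ref_file.write(base64.b64decode(content["ref_image"]))
    with open("query_image.jpg", "wb") as query_file:
        query_file.write(base64.b64decode(content["query_image"]))
    print("score:", content["score"])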

- With new format

Curl:

curl -d 'image1=http://data.onprint.com/ltu-core-api/16-1.png' -d 'image2=http://data.onprint.com/ltu-core-api/16-2.png' -d 'new_format=True' -d 'version=1' -X POST https://api.ltutech.com/compute/v1/image/imagesDifferences --header "userName: $USER" --header "token: $TOKEN"

The new_format variable is a temporary parameter which will become the default.

Example of result: here is an example of the JSON where the Image1 and Image2 fields are collapsed for simplicity:

"content": {
    "Image1": {
        ...
    },
    "Image2": {
        ...
    },
    "summary": {
      "name": "imagesDifferences",
      "status": 0,
      "message": "ok",
      "resultInfos": {
          "imagesDifferences": {
              "nbAreas": 3
          }
      },
      "score": 1.4694126844406128,
      "errors": []
   }
}

- Image fields details

The image parts contain all the information relative to the image differences. The structure of this part is common to all images. The fields of an image are:

  • UUID: ID of the result
  • version: the version of the format
  • areas: a dictionary of areas representing the differences computed by the FIC. Each area contains:
    • a key which is the ID of the area, with the format UUID/counter
    • boundingBox: the location of the area
    • resultInfos: more detailed information about the result of a called feature (imagesDifferences here).
"Image1": {
    "areas": {
        "b11fa756/0": {
            "resultInfos": {
                "imagesDifferences": {
                    ...
                }
            }
        },
        "b11fa756/1": {
            "boundingBox": {"x": 96.0, "y": 150.0, "width": 14.0, "height": 19.0},
            "resultInfos": {
                "imagesDifferences": {
                    ...
                }
            }
        },
        "b11fa756/2": {
             '...'
        }
    },
    "UUID": "b11fa756",
    "version": "0.1"
},

The resultInfos field contains two kinds of values depending on the type (a short parsing sketch is given after the example below):

  • the key is the name of the applied feature (imagesDifferences here)
    • type = "input": the list of difference areas found:
      • subAreas: IDs of the created areas
    • type = "output": difference information for a created area:
      • enclosingCircle: the circle surrounding the differences
      • roiOccupancyRate: the rate of differences inside the area (defined by the bounding box)
      • binaryMask: an image indicating a difference wherever the pixel value is not zero
      • relativeVertices: a more precise outline of the differences as a set of vertices (the last point should be connected with the first one)
"Image1": {
    "areas": {
        "b11fa756/0": {
            "resultInfos": {
                "imagesDifferences": {
                    "type": "input",
                    "subAreas": [
                        "b11fa756/1",
                        "b11fa756/2"
                    ]
                }
            }
        },
        "b11fa756/1": {
            "boundingBox": {"x": 96.0, "y": 150.0, "width": 14.0, "height": 19.0},
            "resultInfos": {
                "imagesDifferences": {
                    "type": "output",
                    "enclosingCircle": {
                        "circleRadius": 11.747891426086426,
                        "relativeCircleCenter": {"x": 7.0, "y": 9.583328247070312}
                    },
                    "roiOccupancyRate": 0.3505692780017853,
                    "binaryMask": "...",
                    "relativeVertices": [
                        {"x": 5.0, "y": 0.0}, {"x": 5.0, "y": 1.0},
                        {"x": 4.0, "y": 2.0}, {"x": 3.0, "y": 2.0},
                        {"x": 2.0, "y": 2.0}
                    ]
                }
            }
        },
        "b11fa756/2": {
             '...'
        }
    },
    "UUID": "b11fa756",
    "version": "0.1"
},
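
A short parsing sketch for these fields (assuming content holds the parsed "content" part of a new_format response, for example result["content"] from a requests call as above):

# Walk the areas of Image1 and print the difference areas.
image1 = content["Image1"]
for area_id, area in image1["areas"].items():
    infos = area["resultInfos"]["imagesDifferences"]
    if infos["type"] == "input":
        # The input area lists the IDs of the created difference areas.
        print(f'{area_id}: differences in {infos["subAreas"]}')
    else:
        # A created area: location, size and occupancy of the difference.
        box = area["boundingBox"]
        print(f'{area_id}: difference at ({box["x"]}, {box["y"]}), '
              f'size {box["width"]}x{box["height"]}, '
              f'occupancy {infos["roiOccupancyRate"]:.0%}')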

For the binaryMask the fields provided are:

  • encoding: how it has been encoded:
    • base64 means an ASCII encoding of the binary buffer with:
    • base64.b64encode(buffer).decode('utf-8')
  • type: the type of information inside the buffer:
    • 'image' for an image buffer
  • value: the binary buffer with the mask
"binaryMask" : {
    "type": "image",
    "encoding": "base64",
    "value": "..."
}
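
To rebuild the mask image, the encoding can be reversed; for example (a sketch, where binary_mask is assumed to hold the binaryMask dictionary of one area):

import base64

if binary_mask["encoding"] == "base64" and binary_mask["type"] == "image":
    # Reverse base64.b64encode(buffer).decode('utf-8').
    buffer = base64.b64decode(binary_mask["value"])
    # The exact image format of the buffer is not specified; write it as-is
    # and inspect it (non-zero pixels mark a difference).
    with open("binary_mask_image", "wb") as mask_file:
        mask_file.write(buffer)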

Here is a full sample:

{
    "content": {
        "Image1": {
            "areas": {
                "b11fa756/0": {
                    "type": "area",
                    "resultInfos": {
                        "imagesDifferences": {
                            "type": "input",
                            "subAreas": [
                                "b11fa756/1",
                                "b11fa756/2"
                            ]
                        }
                    }
                },
                "b11fa756/1": {
                    "type": "area",
                    "boundingBox": {"x": 96.0, "y": 150.0, "width": 14.0, "height": 19.0},
                    "resultInfos": {
                        "imagesDifferences": {
                            "type": "diff",
                            "enclosingCircle": {
                                "circleRadius": 11.747891426086426,
                                "relativeCircleCenter": {"x": 7.0, "y": 9.583328247070312}
                            },
                            "roiOccupancyRate": 0.3505692780017853,
                            "binaryMask": {...},
                            "relativeVertices": [
                                {"x": 5.0, "y": 0.0}, {"x": 5.0, "y": 1.0},
                                {"x": 4.0, "y": 2.0}, {"x": 3.0, "y": 2.0},
                                {"x": 2.0, "y": 2.0}
                            ]
                        }
                    }
                },
                "b11fa756/2": {
                     '...'
                }
            },
            "UUID": "b11fa756",
            "version": "0.1"
        },
        "Image2": {
            "areas": {
                "b11fa756/0": {
                    "type": "area",
                    "resultInfos": {
                        "imagesDifferences": {
                            "type": "input",
                            "subAreas": [
                                "b11fa756/1",
                                "b11fa756/2"
                            ]
                        }
                    }
                },
                "b11fa756/1": {
                    "type": "area",
                    "boundingBox": {"x": 96.0, "y": 149.0, "width": 14.0, "height": 19.0},
                    "resultInfos": {
                        "imagesDifferences": {
                            "type": "diff",
                            "enclosingCircle": {
                                "circleRadius": 11.747891426086426,
                                "relativeCircleCenter": {"x": 7.0, "y": 9.583328247070312}
                            },
                            "roiOccupancyRate": 0.3505692780017853,
                            "binaryMask": {...},
                            "relativeVertices": [{"x": 5.0, "y": 0.0}, {"x": 5.0, "y": 1.0},
                                                 {"x": 4.0, "y": 2.0}, {"x": 3.0, "y": 2.0}, {"x": 2.0, "y": 3.0}
                            ]
                        }
                    }
                },
                "b11fa756/2": {
                    '...'
                }
            },
            "UUID": "b11fa756",
            "version": "0.1"
        },
        "summary": {
            "name": "imagesDifferences",
            "status": 0,
            "message": "ok",
            "resultInfos": {
                "imagesDifferences": {
                    "nbAreas": 3
                }
            },
            "score": 1.4694126844406128,
            "errors": []
        }
    },
    "code": "0",
    "message": "ok"
}