In addition to using a URL or uploading a file, you can specify other input types and thus fetch files stored in various ways and locations.
We support a variety of third-party cloud storage providers such as Amazon S3 or Google Cloud.
The following code specifies a remote input file provided via URL.
This is the simplest way of specifying an input file. Set the type field to remote and specify the URL of the file you want to convert in the source field.
{
"input": [{
"type": "remote",
"source": "https://example-files.online-convert.com/raster%20image/png/example_small.png"
}]
}
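The JSON above can be assembled programmatically before sending it to the job-creation endpoint. The sketch below builds such a body in Python; the `conversion` part and its `jpg` target are illustrative assumptions, not taken from this section, so adapt them to your actual conversion settings.

```python
import json

# Sketch: build the body of a job-creation request with a remote input.
# The "conversion" entry and its target are hypothetical placeholders.
job_payload = {
    "input": [{
        "type": "remote",
        "source": "https://example-files.online-convert.com/raster%20image/png/example_small.png"
    }],
    "conversion": [{"target": "jpg"}]  # hypothetical target format
}

# Serialize the payload for the HTTP request body.
body = json.dumps(job_payload)
```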
If the input is a password-protected file (e.g. pdf, zip, ...) you can specify this as follows:
{
"input": [{
"type": "remote",
"source": "https://example-files.online-convert.com/document/pdf/example_multipage_protected.pdf",
"credentials": {
"decrypt_password": "online-convert.com"
}
}]
}
It is also possible to send just the remote input, as in the first example, and add the password in a second step by sending a PATCH request to the following endpoint:
/jobs/<job_id>/input/<input_id>
{
"credentials": {
"decrypt_password": "my_password"
}
}
In order to patch the input, the job must not have been started yet. This means it should be created with
{"process": false}
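The two-step flow above can be sketched as follows. This builds the request bodies and the PATCH endpoint path without sending anything; the `<job_id>` and `<input_id>` placeholders stand for the values the API returns when you create the job.

```python
import json

# Step 1: create the job without starting it ("process": false),
# so the input can still be patched afterwards.
create_payload = {
    "process": False,
    "input": [{
        "type": "remote",
        "source": "https://example-files.online-convert.com/document/pdf/example_multipage_protected.pdf"
    }]
}

job_id = "<job_id>"      # placeholder: taken from the create-job response
input_id = "<input_id>"  # placeholder: taken from the job's "input" list

# Step 2: PATCH the input with the decryption password.
patch_endpoint = f"/jobs/{job_id}/input/{input_id}"
patch_body = json.dumps({"credentials": {"decrypt_password": "my_password"}})
```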
In some use cases it might be desirable to specify the name of the output file instead of using the original file name.
One such example is to rename image files with a generic name like DSC_0013.JPG to a more meaningful name like Holiday 2018_0013.png during the conversion.
A different example is dynamically generated URLs like ?productId=ABC123, in which case the output filename can be set to e.g. Product ABC123 Manual.pdf.
If you are unsure about the format of your file, just send the name without the extension. It will automatically be added by us.
In order to specify the new filename you have to send the file like in the following example:
{
"input": [{
"filename": "Product ABC123 Manual.pdf",
"type": "remote",
"source": "https://www.example.com/your_dynamic_url?productId=ABC123"
}]
}
Depending on the remote input URL and on the desired target, we do our best to guess what the expected result will be like.
Sometimes our best guess does not match your specific needs. In that case, you can specify a different remote download engine that fits your task better. Consider the example below:
{
"input": [{
"type": "remote",
"source": "https://www.example.com/your_dynamic_url?productId=ABC123",
"engine": "screenshot"
}]
}
To better understand the different results you may expect from the various engines, let's assume that you have a dynamic URL like the one in the example above.
If this source URL displays a full web page with the picture, description, and price of the requested productId, you may select screenshot as your engine.
You will then receive an image/screenshot of the web page. If you are interested in the full source code of the web page with all the assets, you should select website.
If the link points you directly to a PDF document for the product, file or simply auto will probably be the best choice.
The possible values for the engine are shown in the following table:
engine | description |
---|---|
auto | We do our best to choose the best engine to download the remote content for your conversion |
file | The remote URL is considered as a single file with its own filename and extension |
screenshot | Use this engine when what you need is a screenshot of a remote web page |
website | When you need all the files that are used to render the remote web page |
screenshot_pdf | This engine downloads a website as PDF document |
After generating the upload URL, you have to do a POST request to it.
The field file is where you put the contents of the file you want to send.
The optional field decrypt_password is where you put the password to open a password-protected file.
Once the file is uploaded, the job will continue its lifecycle, start processing the conversion and eventually finish.
Please note that, even if not mandatory, it's highly recommended to set a unique random string for each file that you upload. This can prevent conversion problems in some corner cases. You don't need to send a full UUID, a short random string should be enough.
POST /v2/dl/web2/upload-file/39ef70ea-efc8-42a2-84dc-2090e1055077 HTTP/1.1
Host: www13.api2convert.com
x-oc-api-key: <your API key goes here>
x-oc-upload-uuid: <your random string or a full UUID>
Cache-Control: no-cache
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="decrypt_password"
this_is_a_password
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="file"; filename="myfile.png"
Content-Type: image/png
... contents of the file go here ...
------WebKitFormBoundary7MA4YWxkTrZu0gW--
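The multipart body in the raw HTTP example can be assembled by hand as sketched below. In practice most HTTP libraries build this encoding for you; the file bytes here are a placeholder.

```python
# Sketch: assemble the multipart/form-data body from the example above.
# Each part starts with "--" plus the boundary; the body ends with
# "--" + boundary + "--".
boundary = "----WebKitFormBoundary7MA4YWxkTrZu0gW"
password = "this_is_a_password"
file_bytes = b"<contents of the file go here>"  # placeholder bytes

body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="decrypt_password"\r\n'
    "\r\n"
    f"{password}\r\n"
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="file"; filename="myfile.png"\r\n'
    "Content-Type: image/png\r\n"
    "\r\n"
).encode() + file_bytes + f"\r\n--{boundary}--\r\n".encode()

# This value goes into the Content-Type request header.
content_type_header = f"multipart/form-data; boundary={boundary}"
```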
To allow our servers to download files directly from the Amazon S3 storage, the following permissions are required:
s3:GetObject
s3:PutObject
s3:PutObjectAcl
Below you can find a list with the fields that are accepted in this input type:
Field | Description | Required | Default |
---|---|---|---|
input.type | Specifies the type of input. For cloud storage always use cloud. | Yes | N/A |
input.source | Tells our servers which cloud storage provider must be contacted, in this case amazons3. | Yes | N/A |
input.parameters.bucket | Determines from which bucket our servers will get the file. | Yes | N/A |
input.parameters.region | Indicates the region configured for your bucket. A list of Amazon S3 region names can be found here. If you don't specify this field and your bucket is configured to use another region than the default, any download will fail. | No | eu-central-1 |
input.parameters.file | Amazon S3 key of the file to download. Usually looks like a normal file path, e.g. pictures/mountains.jpg. | Yes | N/A |
input.credentials.accesskeyid | The Amazon S3 access key ID. | Yes | N/A |
input.credentials.secretaccesskey | The Amazon S3 secret access key. | Yes | N/A |
input.credentials.sessiontoken | Together with secretaccesskey and accesskeyid, this is used to authenticate using temporary credentials. For more information on how to generate temporary credentials please check how to install the AWS CLI tool and how to do a call to AWS STS get-session-token. | No | N/A |
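Putting the fields from the table together, an S3 cloud input looks like the sketch below. The bucket name, key, and credentials are placeholders; substitute your own values.

```python
import json

# Sketch: a cloud input entry for Amazon S3, built from the fields in the
# table above. All concrete values are placeholders.
s3_input = {
    "type": "cloud",
    "source": "amazons3",
    "parameters": {
        "bucket": "my-example-bucket",
        "region": "eu-central-1",          # default; change if your bucket differs
        "file": "pictures/mountains.jpg"
    },
    "credentials": {
        "accesskeyid": "YOUR_ACCESS_KEY_ID",
        "secretaccesskey": "YOUR_SECRET_ACCESS_KEY"
    }
}

payload = json.dumps({"input": [s3_input]})
```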
To allow our servers to download files directly from Google Cloud Storage, follow these instructions:
Below you can find a list with the fields accepted in this input type:
Field | Description | Required | Default |
---|---|---|---|
input.type | Specifies the type of input. For cloud storage always use cloud. | Yes | N/A |
input.source | Tells our servers which cloud storage provider must be contacted, in this case googlecloud. | Yes | N/A |
input.parameters.projectid | The ID of your Google Cloud project. | Yes | N/A |
input.parameters.bucket | Determines from which bucket our servers will get the file. | Yes | N/A |
input.parameters.file | Complete path to the file to download, e.g. folder-inside-bucket/image.jpeg. | Yes | N/A |
input.credentials.keyfile | Here, specify the contents of your json private key file. You can generate one following these instructions. | Yes | N/A |
To allow our servers to download files directly from Microsoft Azure Blob Storage, the following parameters are available:
Field | Description | Required | Default |
---|---|---|---|
input.type | Specifies the type of the input. For cloud storage always use cloud. | Yes | N/A |
input.source | Tells our servers which cloud storage provider must be contacted, in this case azure. | Yes | N/A |
input.parameters.container | The name of the container that holds your files. | Yes | N/A |
input.parameters.file | Complete path to the file to download, e.g. folder-inside-bucket/image.jpeg. | Yes | N/A |
input.credentials.accountname | Can be found in the storage account dashboard. It's the name before the blob.core.windows.net URL. | Yes | N/A |
input.credentials.accountkey | Can be found in the storage account dashboard under the Access Keys menu entry. | Yes | N/A |
To allow our servers to download files directly from an FTP server, the following parameters are available:
Field | Description | Required | Default |
---|---|---|---|
input.type | Specifies the type of the input. For cloud storage always use cloud. | Yes | N/A |
input.source | Tells our servers which cloud storage provider must be contacted, in this case ftp. | Yes | N/A |
input.parameters.host | The URL or IP of the FTP server. | Yes | N/A |
input.parameters.file | Complete path to the file to download, e.g. folder-on-ftp/image.jpeg. | Yes | N/A |
input.parameters.port | The port used to connect to the FTP server. | No | 21 |
input.credentials.username | The username of the FTP server account. | Yes | N/A |
input.credentials.password | The password of the FTP server account. | Yes | N/A |
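An FTP input built from the fields in the table looks like the sketch below. Host, path, and credentials are placeholders.

```python
import json

# Sketch: a cloud input entry for an FTP server, built from the fields in
# the table above. All concrete values are placeholders.
ftp_input = {
    "type": "cloud",
    "source": "ftp",
    "parameters": {
        "host": "ftp.example.com",
        "file": "folder-on-ftp/image.jpeg",
        "port": 21  # optional; 21 is the default
    },
    "credentials": {
        "username": "ftp_user",
        "password": "ftp_password"
    }
}

payload = json.dumps({"input": [ftp_input]})
```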
To send a base64 input, first create a job; the response contains data similar to the following:
{
"id": "abcdd66e-bfb6-4ef3-b443-787131bb84b3",
...
...
"server": "https://wwwXX.api2convert.com/v2/dl/webX"
...
...
}
Take the server and id keys from the response and concatenate the server value with /upload-base64/ and the id value.
The resulting string looks like this:
https://wwwXX.api2convert.com/v2/dl/webX/upload-base64/abcdd66e-bfb6-4ef3-b443-787131bb84b3
This is just an example. Make sure to use the values you get in the response of the create job call.
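The concatenation described above can be sketched as:

```python
# Sketch: derive the base64 upload URL from the create-job response by
# joining the "server" value, "/upload-base64/", and the job "id".
job_response = {
    "id": "abcdd66e-bfb6-4ef3-b443-787131bb84b3",
    "server": "https://wwwXX.api2convert.com/v2/dl/webX"
}

upload_url = f'{job_response["server"]}/upload-base64/{job_response["id"]}'
```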
After generating the URL, do a
POST
request to it.
The JSON body has to contain the following fields: content (the base64-encoded data, prefixed with a data URI header) and filename.
Once the base64 data is sent, the job will continue its lifecycle, start processing the conversion and eventually finish.
POST /v2/dl/web2/upload-base64/a6f691e2-839e-49e5-829d-dc2d97486fe1 HTTP/1.1
Host: www13.api2convert.com
x-oc-api-key: <your API key here>
Content-Type: application/json
Cache-Control: no-cache
{
"content": "data:image/gif;base64,R0lGODlhAQABAIAAAAUEBAAAACwAAAAAAQABAAACAkQBADs=",
"filename": "black-pixel"
}
If you want to send more than one base64-encoded file at once, you can send them inside an array as shown in the following example.
Please note that the JSON body must not exceed 1 GB in size.
POST /v2/dl/web2/upload-base64/a6f691e2-839e-49e5-829d-dc2d97486fe1 HTTP/1.1
Host: www13.api2convert.com
x-oc-api-key: <your API key here>
Content-Type: application/json
Cache-Control: no-cache
[{
"content": "data:image/gif;base64,R0lGODlhAQABAIAAAAUEBAAAACwAAAAAAQABAAACAkQBADs=",
"filename": "black_pixel.gif"
},{
"content": "data:text/plain;base64,dGVzdCBzdHJpbmc=",
"filename": "example_string.txt"
}]
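Encoding local data into the entries shown above can be sketched as follows. The helper name `to_base64_entry` is hypothetical; the tiny in-memory byte string stands in for file contents you would normally read from disk.

```python
import base64
import json

def to_base64_entry(filename, mime_type, raw_bytes):
    """Wrap raw bytes as a data-URI entry for the upload-base64 endpoint."""
    encoded = base64.b64encode(raw_bytes).decode("ascii")
    return {"content": f"data:{mime_type};base64,{encoded}", "filename": filename}

# Tiny in-memory example matching the text entry shown above.
entries = [to_base64_entry("example_string.txt", "text/plain", b"test string")]
body = json.dumps(entries)
```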
This type of input allows you to use an input from a previous conversion again.
This is only possible if you are the creator of the job the input_id is taken from.
Advantages of using this input type:
Once you create a job and it is finished, you receive data similar to this:
{
... extra information ...
"input": [
{
"id": "5e0aa023-2235-4e78-a0fc-3106b2689dd3",
"type": "remote",
"source": "https://example-files.online-convert.com/raster%20image/png/example_small.png",
"filename": "example_small.png",
"size": 333205,
"hash": "144979874887251a2150c5485e294001",
"checksum": "144979874887251a2150c5485e294001",
"content_type": "image/png",
"created_at": "2017-08-17T16:36:44",
"modified_at": "2017-08-17T16:36:45",
"parameters": []
}
],
... extra information ...
}
Now, take the value of the id field. When creating your next job, add an input of type input_id and, in the source field, specify the value you retrieved from the previous input's id field. Consider the following request:
The Google Drive picker allows you to access files stored in a Google Drive account. You can find more information about this on the official page.
Adding a new source file of type gdrive_picker is done by using the following parameters:
If a job has not been started yet, you may need to patch it to modify its inputs. You can do so by sending an input PATCH request to the endpoint:
/v2/jobs/<job_id>/input/<input_id>
with the new data inside the body like in the following example.
Let's assume that your job with id 60056999-6cb9-4301-8c91-0247036f2098 has an input with id d163a942-465d-4f0f-9d96-e05dce7bd686 that looks like the following example.
As you can see in the metadata section, the PDF is password protected, so we cannot process it. At this point, you can still add the password before starting the conversion by sending a PATCH request with the decrypt_password credential.
{
"input": [{
"id": "d163a942-465d-4f0f-9d96-e05dce7bd686",
"type": "remote",
"source": "https://example-files.online-convert.com/document/pdf/example_multipage_protected.pdf",
"size": 1096348,
"hash": "e84a82fb5f42391c57d9411b21671a87",
"checksum": "e84a82fb5f42391c57d9411b21671a87",
"content_type": "application/pdf",
"created_at": "2018-10-02T08:45:21",
"modified_at": "2018-10-02T08:45:22",
"parameters": [],
"metadata": {
"pdf_has_user_password": true,
"password_protected": true
}
}]
}
If the password is correct, you will immediately receive a response from the API, which should look like the following.
As you can see, the metadata now contains more useful information, such as the number of pages and the page size, which proves that we can now access and process the file.
{
"id": "d163a942-465d-4f0f-9d96-e05dce7bd686",
"type": "remote",
"source": "https://example-files.online-convert.com/document/pdf/example_multipage_protected.pdf",
"filename": "example_multipage_protected.pdf",
"size": 1096348,
"hash": "e84a82fb5f42391c57d9411b21671a87",
"checksum": "e84a82fb5f42391c57d9411b21671a87",
"content_type": "application/pdf",
"created_at": "2018-10-02T08:45:21",
"modified_at": "2018-10-02T08:47:12",
"parameters": [],
"metadata": {
"pages": "7",
"page_size": "595 x 842 pts (a4)",
"pdf_password_valid": true,
"pdf_password_conversion_permission": true,
"password_protected": true
}
}