Endpoint | https://sinkin.ai/api/inference |
---|---|
Request Type | POST request |
Note: you might need to add 'Content-Type': 'multipart/form-data' to the request headers.
Input parameters:

Parameter | Description |
---|---|
access_token | String, required |
model_id | String, required; see below for how to get the id of a model |
prompt | String, required |
lcm | String, optional. Use LCM or not (half the cost with some trade-off in quality). Pass in "true" or "false"; default = "false" |
version | String, optional. Model version; defaults to the latest version |
width | Int, optional. Default = 512. Must be a multiple of 8; valid range is 128 to 896. Commonly used values: 512, 640, 768 |
height | Int, optional. Default = 768. Must be a multiple of 8; valid range is 128 to 896. Commonly used values: 512, 640, 768 |
negative_prompt | String, optional |
use_default_neg | String, optional. Append the default negative prompt or not. Pass in "true" or "false"; default = "true" |
steps | Int, optional. Number of inference steps. Default = 30; valid range is 1 to 50 |
scale | Float, optional. Guidance scale. Default = 7.5, or the model's default scale if one is set; valid range is 1 to 20 |
num_images | Int, optional. Default = 4 |
seed | Int, optional. Default = -1 |
scheduler | String, optional. Default = "DPMSolverMultistep", or the model's default scheduler if one is set; see the options below |
lora | String, optional. Id of the LoRA model; you can query /models to get the full list of LoRAs |
lora_scale | Float, optional. Default = 0.75 |
Extra input for img2img:

Parameter | Description |
---|---|
init_image_file | a file object; the base image, required for img2img |
image_strength | Float, optional. How much to transform the base image; default = 0.75 |
controlnet | String, optional. ControlNet to use; valid values are canny, depth and openpose. Note: when controlnet is set, image_strength has no effect |

See the example request below for how to make an img2img request.
Output:

Success:
{ error_code: 0, images: [ 'image url', 'image url', ... ], credit_cost: 2.2, inf_id: 'xxxxxxxxxxxxxx' }

Failure:
{ error_code: 1, message: "This is an error message" }
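For reference, a minimal text-to-image call with the parameters above might look like the following sketch (the access token is a placeholder, the model id is taken from the table below, and it assumes a plain form-encoded POST is accepted when no image file is attached):

import requests

# Minimal text-to-image request; all parameter values here are just examples.
params = {
    'access_token': 'xxxxxxxxxxxxxxxxxxxxxxxxxx',  # placeholder
    'model_id': 'yBG2r9O',                         # majicMIX realistic (see table below)
    'prompt': 'a lighthouse on a cliff at sunset, highly detailed',
    'negative_prompt': 'blurry, low quality',
    'use_default_neg': 'true',
    'num_images': 2,
    'width': 640,
    'height': 768,
    'steps': 30,
    'scale': 7.5,
    'seed': -1,
    'scheduler': 'DPMSolverMultistep',
}

r = requests.post('https://sinkin.ai/api/inference', data=params)
resp = r.json()
# error_code 0 means success; otherwise 'message' describes the failure.
print(resp['images'] if resp['error_code'] == 0 else resp['message'])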
You can call /models to get the complete model list, including LoRAs. The id of each model can be found in the returned JSON.
You can also go to sinkin.ai and open a model page; the last part of the URL is the model id. E.g. for the model at https://sinkin.ai/m/vlDnKP6, the model id is vlDnKP6.
The ids of some commonly used models:
Model Name | Model ID |
---|---|
majicMIX realistic | yBG2r9O |
AbsoluteReality | mGYMaD5 |
DreamShaper | 4zdwGOB |
MeinaHentai | RR6lMmw |
Realistic Vision | r2La2w2 |
Babes | mG9Pvko |
RealCartoon3D | gLv9zeq |
NeverEnding Dream | qGdxrYG |
Hassaku | 76EmEaz |
Deliberate | K6KkkKl |
MeinaMix | vln8Nwr |
Scheduler options |
---|
DPMSolverMultistep |
K_EULER_ANCESTRAL |
DDIM |
K_EULER |
PNDM |
KLMS |
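The img2img example referenced above, using Python and the requests library (the access token, model id and image path are placeholders; uncomment image_strength or controlnet as needed):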
import requests

params = {
    'access_token': 'xxxxxxxxxxxxxxxxxxxxxxxxxx',
    'model_id': 'xxxxx',
    'prompt': 'an angry orc looking at camera smiling',
    'num_images': 1,
    'scale': 7,
    'steps': 30,
    'width': 512,
    'height': 768,
    # 'image_strength': 0.75,   # how much to transform the base image
    # 'controlnet': 'openpose'  # when set, image_strength has no effect
}
# Attaching the base image as a file upload is what makes this an img2img request.
files = {'init_image_file': open('path-to-image-file', 'rb')}

r = requests.post('https://sinkin.ai/api/inference', files=files, data=params)
print(r.text)
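Continuing from the request above, one way to save the generated images locally (the output filenames are arbitrary):

resp = r.json()
if resp['error_code'] == 0:
    print('credit cost:', resp['credit_cost'])
    for i, url in enumerate(resp['images']):
        img = requests.get(url)
        # The file extension here is arbitrary; adjust it to the actual image format.
        with open(f'output_{i}.png', 'wb') as f:
            f.write(img.content)
else:
    print('error:', resp['message'])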
Get all available models and LoRAs
Endpoint | https://sinkin.ai/api/models |
---|---|
Request Type | POST request |
Note: you might need to add 'Content-Type': 'multipart/form-data' to the request headers.
Input | access_token : String, required |
Output:

Success:
{
  error_code: 0,
  models: [ {'id': 'XXXX', 'title': 'XXXX', 'cover_img': 'xxxxxxxx', 'link': 'xxxxxxx'}, ... ],
  loras: [ {'id': 'XXXX', 'title': 'XXXX', 'cover_img': 'xxxxxxxx', 'link': 'xxxxxxx'}, ... ]
}

Failure:
{ error_code: 1, message: "This is an error message" }
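As a sketch of querying the model list and picking out ids (the access token is a placeholder, and it assumes a plain form-encoded POST is accepted):

import requests

params = {'access_token': 'xxxxxxxxxxxxxxxxxxxxxxxxxx'}  # placeholder
r = requests.post('https://sinkin.ai/api/models', data=params)
resp = r.json()

if resp['error_code'] == 0:
    # Each entry carries the id you pass as model_id (or lora) to /api/inference.
    for m in resp['models']:
        print(m['id'], m['title'])
    for l in resp['loras']:
        print('LoRA:', l['id'], l['title'])
else:
    print('error:', resp['message'])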