API Reference

Ramo S3 Endpoint

A fully S3-compatible endpoint for Ramo's high-performance storage. Use your existing AWS SDKs and tools — just point them at Ramo.

S3 Compatible · 100 Gbps · Free Egress · $3.99/TB/mo
Endpoint: http://s3.ramo.io
Overview

Quick Start

The Ramo S3 Endpoint is fully compatible with the Amazon S3 API. Any tool, SDK, or application that speaks S3 will work with Ramo out of the box. Simply configure your S3 client to use the Ramo endpoint and your Ramo credentials.

S3-Compatible: Standard S3 API, no proprietary extensions required
Drop-In Replacement: Works with AWS CLI, boto3, aws-sdk, and all S3-compatible clients
100 Gbps Throughput: High-speed retrieval across all operations
Quick Start — AWS CLI
# Configure the Ramo endpoint
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_ENDPOINT_URL="http://s3.ramo.io"

# Upload a file
aws s3 cp ./data.parquet s3://my-bucket/data.parquet

# List objects
aws s3 ls s3://my-bucket/

# Download a file
aws s3 cp s3://my-bucket/data.parquet ./local-copy.parquet
Authentication

Access Credentials

Access Key & Secret Key

Ramo uses standard AWS Signature Version 4 (SigV4) authentication. You'll receive an Access Key ID and Secret Access Key when you create your account. These credentials work identically to AWS IAM keys.

Endpoint Configuration

Point your S3 client to http://s3.ramo.io as the endpoint. Use us-east-1 as the default region. Path-style and virtual-hosted-style addressing are both supported.

Header               | Description                                               | Required
Authorization        | AWS Signature Version 4 authorization string              | Yes
x-amz-date           | Timestamp in ISO 8601 format (yyyyMMdd'T'HHmmss'Z')       | Yes
x-amz-content-sha256 | SHA-256 hash of the request payload (or UNSIGNED-PAYLOAD) | Yes
Host                 | s3.ramo.io or {bucket}.s3.ramo.io                         | Yes

All SDKs handle SigV4 signing automatically. You only need to provide your Access Key, Secret Key, and the Ramo endpoint URL.
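For reference, if you ever need to sign a request by hand, the SigV4 signing key is derived from the secret key through a chain of HMAC-SHA256 operations. A minimal sketch using only the Python standard library (the secret, date, and string-to-sign below are illustrative placeholders, not real credentials):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key via the HMAC chain: date -> region -> service."""
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Placeholder values: the final signature is an HMAC of the string-to-sign
key = sigv4_signing_key("EXAMPLE_SECRET", "20240101", "us-east-1", "s3")
signature = hmac.new(key, b"string-to-sign", hashlib.sha256).hexdigest()
```

In practice you will never write this yourself; it is shown only to demystify the `Authorization` header the SDKs build for you.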

Buckets

Bucket Operations

Buckets are top-level containers for your objects. Bucket names must be globally unique, between 3–63 characters, and consist of lowercase letters, numbers, and hyphens.
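The naming rules above can be checked client-side before calling CreateBucket. A minimal sketch (the helper name is ours; the requirement that names start and end with a letter or digit follows S3 convention and is an assumption beyond the rules stated above):

```python
import re

# 3-63 chars, lowercase letters / digits / hyphens; first and last char
# alphanumeric (S3 convention, assumed here)
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if `name` satisfies the bucket naming rules above."""
    return bool(_BUCKET_RE.match(name))
```

Validating locally turns a server-side InvalidBucketName (400) into an immediate client-side error.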

Method | Path               | Operation         | Description
GET    | /                  | ListBuckets       | List all buckets in your account
HEAD   | /{bucket}          | HeadBucket        | Check if a bucket exists and you have access
GET    | /{bucket}?location | GetBucketLocation | Get the region of a bucket
Example — List Buckets
# List all buckets
curl "http://s3.ramo.io/" \
  -H "Authorization: AWS4-HMAC-SHA256 ..."
Objects

Object Operations

Objects are the fundamental data entities stored in Ramo. Each object is identified by a key within a bucket and can be up to 5 GB in size. For objects larger than 100 MB, we recommend using multipart uploads.
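The 5 GB cap and the 100 MB multipart recommendation can be encoded in a small dispatch helper. A sketch (thresholds are the ones stated above; the function name is illustrative):

```python
MAX_OBJECT_SIZE = 5 * 1024**3        # 5 GB hard limit per object
MULTIPART_THRESHOLD = 100 * 1024**2  # multipart recommended above 100 MB

def upload_strategy(size_bytes: int) -> str:
    """Pick an upload path based on the object size limits above."""
    if size_bytes > MAX_OBJECT_SIZE:
        raise ValueError("object exceeds the 5 GB maximum size")
    return "multipart" if size_bytes > MULTIPART_THRESHOLD else "single-put"
```

High-level SDK transfer managers apply exactly this kind of threshold logic automatically.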

Method | Path                  | Operation     | Description
PUT    | /{bucket}/{key}       | PutObject     | Upload an object to a bucket
GET    | /{bucket}/{key}       | GetObject     | Download an object from a bucket
HEAD   | /{bucket}/{key}       | HeadObject    | Retrieve metadata for an object without the body
DELETE | /{bucket}/{key}       | DeleteObject  | Delete a single object from a bucket
PUT    | /{bucket}/{key}       | CopyObject    | Copy an object within or between buckets via the x-amz-copy-source header
GET    | /{bucket}?list-type=2 | ListObjectsV2 | List objects in a bucket with pagination, prefix, and delimiter support
POST   | /{bucket}?delete      | DeleteObjects | Delete multiple objects in a single request (batch delete)
Upload an Object
# Upload with AWS CLI
aws s3 cp ./model-weights.bin \
  s3://ml-data/v2/model-weights.bin \
  --endpoint-url http://s3.ramo.io

# Upload with curl
curl -X PUT \
  "http://s3.ramo.io/ml-data/v2/model-weights.bin" \
  -H "Authorization: AWS4-HMAC-SHA256 ..." \
  -T ./model-weights.bin
Download an Object
# Download with AWS CLI
aws s3 cp \
  s3://ml-data/v2/model-weights.bin \
  ./local-weights.bin \
  --endpoint-url http://s3.ramo.io

# Retrieve at 100 Gbps — free egress
Multipart

Multipart Uploads

For large objects, multipart uploads split the data into smaller parts that are uploaded independently and then assembled into the final object. This improves throughput, enables parallel uploads, and provides resilience against network failures.

1. Initiate: Call CreateMultipartUpload to get an upload ID
2. Upload Parts: Upload each part in parallel with its part number
3. Complete: Call CompleteMultipartUpload with the list of ETags

Method | Operation               | Description
POST   | CreateMultipartUpload   | Initiate a new multipart upload and receive an upload ID
PUT    | UploadPart              | Upload a single part of a multipart upload
POST   | CompleteMultipartUpload | Finalize the upload by assembling all parts into the final object
DELETE | AbortMultipartUpload    | Cancel a multipart upload and free associated storage
GET    | ListParts               | List the parts that have been uploaded for a specific upload

All AWS SDKs handle multipart uploads automatically when using high-level transfer managers.
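The three steps above hinge on splitting the object into numbered byte ranges. A sketch of the part-planning arithmetic (the default part size follows the 64–128 MB guidance in the Performance section; the function name is ours):

```python
def plan_parts(object_size: int, part_size: int = 64 * 1024**2):
    """Return (part_number, start, end) byte ranges covering the object."""
    parts = []
    start = 0
    number = 1  # S3 part numbers start at 1
    while start < object_size:
        end = min(start + part_size, object_size)
        parts.append((number, start, end))
        start = end
        number += 1
    return parts
```

Each range can then be uploaded concurrently with UploadPart, and the resulting ETags passed to CompleteMultipartUpload in ascending part-number order.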

SDK Compatibility

Works with Your Stack

Any S3-compatible client, SDK, or tool works with Ramo. Just change the endpoint URL. Below are configuration examples for the most popular tools.

AWS CLI (Shell)
aws configure set default.s3.endpoint_url http://s3.ramo.io
aws s3 ls --endpoint-url http://s3.ramo.io
Python (boto3)
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://s3.ramo.io",
    aws_access_key_id="RAMO_ACCESS_KEY",
    aws_secret_access_key="RAMO_SECRET_KEY",
)
JavaScript (AWS SDK v3)
import { S3Client } from "@aws-sdk/client-s3";

const client = new S3Client({
  endpoint: "http://s3.ramo.io",
  region: "us-east-1",
  credentials: {
    accessKeyId: "RAMO_ACCESS_KEY",
    secretAccessKey: "RAMO_SECRET_KEY",
  },
});
Go (AWS SDK v2)
cfg, err := config.LoadDefaultConfig(ctx,
  config.WithRegion("us-east-1"),
)
if err != nil {
  log.Fatal(err)
}
client := s3.NewFromConfig(cfg, func(o *s3.Options) {
  o.BaseEndpoint = aws.String("http://s3.ramo.io")
})

Also compatible with: rclone, s3cmd, MinIO Client (mc), Cyberduck, s5cmd, and any S3-compatible application.

Error Handling

Error Codes

The Ramo S3 Endpoint returns standard S3-compatible error responses in XML format. Each error includes a code, message, and the relevant resource identifier.

Error Code          | HTTP Status | Description
AccessDenied        | 403         | You do not have permission to perform this operation
NoSuchBucket        | 404         | The specified bucket does not exist
NoSuchKey           | 404         | The specified object key does not exist
BucketAlreadyExists | 409         | The requested bucket name is already in use
BucketNotEmpty      | 409         | The bucket must be empty before it can be deleted
InvalidBucketName   | 400         | The bucket name does not meet naming requirements
EntityTooLarge      | 400         | The object or part exceeds the maximum allowed size
InvalidPart         | 400         | One or more parts in the multipart upload are invalid
InvalidPartOrder    | 400         | Parts must be listed in ascending order by part number
ServiceUnavailable  | 503         | The service is temporarily unavailable; retry with backoff
Example Error Response
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
  <Key>missing-file.bin</Key>
  <RequestId>RAMO-REQ-a1b2c3d4</RequestId>
</Error>
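Since errors arrive as XML, the standard library's xml.etree is enough to pull out the code and message. A sketch parsing the example response above (the helper name is ours):

```python
import xml.etree.ElementTree as ET

def parse_s3_error(body: str) -> dict:
    """Extract child elements of an S3-style <Error> document into a dict."""
    # Encode first: ElementTree rejects str input that carries an encoding declaration
    root = ET.fromstring(body.encode("utf-8"))
    return {child.tag: child.text for child in root}

error = parse_s3_error("""<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
  <Key>missing-file.bin</Key>
  <RequestId>RAMO-REQ-a1b2c3d4</RequestId>
</Error>""")
```

SDKs surface the same fields as typed exceptions (for example, boto3 raises a ClientError whose response carries the Code and Message).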

Retry strategy: For 503 and 429 responses, implement exponential backoff starting at 1 second with a maximum of 5 retries. All AWS SDKs include built-in retry logic.
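Rolled by hand, the retry policy described above (1-second initial delay, doubling, at most 5 retries) looks like the following sketch; production SDK retry logic also adds jitter and honors Retry-After headers:

```python
import time

RETRYABLE = {429, 503}

def with_retries(request_fn, max_retries: int = 5, base_delay: float = 1.0):
    """Call request_fn(); on a retryable status, back off exponentially."""
    for attempt in range(max_retries + 1):
        status, body = request_fn()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body
```

Here `request_fn` is any zero-argument callable returning an (HTTP status, body) pair, an assumption made for the sketch.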

Performance

Limits & Specifications

Parameter       | Value         | Notes
Max throughput  | 100 Gbps      |
Max object size | 5 GB          |
Durability      | 99.999999999% | 11 nines
Egress cost     | $0            | Always free
Storage cost    | $3.99/TB/mo   |

Maximize Throughput

Use multipart uploads with parallel part transfers. Target 64–128 MB part sizes for optimal throughput on large objects.

Optimize Listing

Use prefix and delimiter parameters with ListObjectsV2 to efficiently navigate large buckets without scanning all keys.
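The effect of prefix and delimiter can be simulated locally: keys sharing the next delimiter-bounded segment collapse into a single common prefix, the way ListObjectsV2 reports them. A pure-Python sketch of that grouping (no network; the function name is ours):

```python
def group_keys(keys, prefix="", delimiter="/"):
    """Mimic ListObjectsV2: return (common_prefixes, contents) under `prefix`."""
    common, contents = set(), []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything below the next delimiter is rolled up into one prefix
            common.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            contents.append(key)
    return sorted(common), contents
```

Requesting only one "directory level" at a time this way avoids transferring every key in a large bucket.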

Batch Operations

Use DeleteObjects for bulk deletes (up to 1,000 keys per request) instead of individual DeleteObject calls.
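Because DeleteObjects caps each request at 1,000 keys, bulk deletions need to be chunked. A sketch of the batching logic (the request itself is elided; the helper name is ours):

```python
def chunk_keys(keys, batch_size=1000):
    """Yield lists of at most `batch_size` keys, one per DeleteObjects request."""
    for i in range(0, len(keys), batch_size):
        yield keys[i:i + batch_size]
```

For example, deleting 2,500 objects issues three requests of 1,000, 1,000, and 500 keys instead of 2,500 individual DeleteObject calls.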

Get API Access

Contact us at hi@ramo.io to get your access credentials