Ramo S3 Endpoint
A fully S3-compatible endpoint for Ramo's high-performance storage. Use your existing AWS SDKs and tools — just point them at Ramo.
Quick Start
The Ramo S3 Endpoint is fully compatible with the Amazon S3 API. Any tool, SDK, or application that speaks S3 will work with Ramo out of the box. Simply configure your S3 client to use the Ramo endpoint and your Ramo credentials.
# Configure the Ramo endpoint
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_ENDPOINT_URL="http://s3.ramo.io"
# Upload a file
aws s3 cp ./data.parquet s3://my-bucket/data.parquet
# List objects
aws s3 ls s3://my-bucket/
# Download a file
aws s3 cp s3://my-bucket/data.parquet ./local-copy.parquet
Access Credentials
Access Key & Secret Key
Ramo uses standard AWS Signature Version 4 (SigV4) authentication. You'll receive an Access Key ID and Secret Access Key when you create your account. These credentials work identically to AWS IAM keys.
Endpoint Configuration
Point your S3 client to http://s3.ramo.io as the endpoint. Use us-east-1 as the default region. Path-style and virtual-hosted-style addressing are both supported.
| Header | Description | Required |
|---|---|---|
| Authorization | AWS Signature Version 4 authorization string | Yes |
| x-amz-date | Timestamp in ISO 8601 format (yyyyMMdd'T'HHmmss'Z') | Yes |
| x-amz-content-sha256 | SHA-256 hash of the request payload (or UNSIGNED-PAYLOAD) | Yes |
| Host | s3.ramo.io or {bucket}.s3.ramo.io | Yes |
All SDKs handle SigV4 signing automatically. You only need to provide your Access Key, Secret Key, and the Ramo endpoint URL.
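If you do need to sign a request by hand (for example with raw curl), the SigV4 signing key is derived by chaining HMAC-SHA256 over the date, region, and service. A stdlib-only sketch of that derivation:

```python
# Sketch: deriving the AWS SigV4 signing key (stdlib only).
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signing_key(secret_key: str, date: str, region: str, service: str = "s3") -> bytes:
    """date is yyyyMMdd, matching the date portion of the x-amz-date header."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")
```

The full Authorization header additionally requires the canonical request and string to sign; in practice, let an SDK handle it.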
Bucket Operations
Buckets are top-level containers for your objects. Bucket names must be globally unique, between 3–63 characters, and consist of lowercase letters, numbers, and hyphens.
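The naming rules as stated reduce to a short regular expression; the sketch below checks only the rules listed here (AWS itself adds further restrictions, such as requiring the name to begin and end with a letter or digit):

```python
import re

# Matches the rules above: 3-63 chars, lowercase letters, digits, hyphens.
_BUCKET_RE = re.compile(r"^[a-z0-9-]{3,63}$")

def is_valid_bucket_name(name: str) -> bool:
    return _BUCKET_RE.fullmatch(name) is not None
```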
| Method | Path | Operation | Description |
|---|---|---|---|
| GET | / | ListBuckets | List all buckets in your account |
| HEAD | /{bucket} | HeadBucket | Check if a bucket exists and you have access |
| GET | /{bucket}?location | GetBucketLocation | Get the region of a bucket |
# List all buckets
curl "http://s3.ramo.io/" \
-H "Authorization: AWS4-HMAC-SHA256 ..."
Object Operations
Objects are the fundamental data entities stored in Ramo. Each object is identified by a key within a bucket and can be up to 5 GB in size. For objects larger than 100 MB, we recommend using multipart uploads.
| Method | Path | Operation | Description |
|---|---|---|---|
| PUT | /{bucket}/{key} | PutObject | Upload an object to a bucket |
| GET | /{bucket}/{key} | GetObject | Download an object from a bucket |
| HEAD | /{bucket}/{key} | HeadObject | Retrieve metadata for an object without the body |
| DELETE | /{bucket}/{key} | DeleteObject | Delete a single object from a bucket |
| PUT | /{bucket}/{key} | CopyObject | Copy an object within or between buckets via x-amz-copy-source header |
| GET | /{bucket}?list-type=2 | ListObjectsV2 | List objects in a bucket with pagination, prefix, and delimiter support |
| POST | /{bucket}?delete | DeleteObjects | Delete multiple objects in a single request (batch delete) |
# Upload with AWS CLI
aws s3 cp ./model-weights.bin \
s3://ml-data/v2/model-weights.bin \
--endpoint-url http://s3.ramo.io
# Upload with curl
curl -X PUT \
"http://s3.ramo.io/ml-data/v2/model-weights.bin" \
-H "Authorization: AWS4-HMAC-SHA256 ..." \
-T ./model-weights.bin
# Download with AWS CLI
aws s3 cp \
s3://ml-data/v2/model-weights.bin \
./local-weights.bin \
--endpoint-url http://s3.ramo.io
# Retrieve at 100 Gbps — free egress
Multipart Uploads
For large objects, multipart uploads split the data into smaller parts that are uploaded independently and then assembled into the final object. This improves throughput, enables parallel uploads, and provides resilience against network failures.
Initiate
Call CreateMultipartUpload to get an upload ID
Upload Parts
Upload each part in parallel with its part number
Complete
Call CompleteMultipartUpload with the list of ETags
| Method | Operation | Description |
|---|---|---|
| POST | CreateMultipartUpload | Initiate a new multipart upload and receive an upload ID |
| PUT | UploadPart | Upload a single part of a multipart upload |
| POST | CompleteMultipartUpload | Finalize the upload by assembling all parts into the final object |
| DELETE | AbortMultipartUpload | Cancel a multipart upload and free associated storage |
| GET | ListParts | List the parts that have been uploaded for a specific upload |
All AWS SDKs handle multipart uploads automatically when using high-level transfer managers.
Works with Your Stack
Any S3-compatible client, SDK, or tool works with Ramo. Just change the endpoint URL. Below are configuration examples for the most popular tools.
aws configure set default.s3.endpoint_url http://s3.ramo.io
aws s3 ls --endpoint-url http://s3.ramo.io
import boto3
s3 = boto3.client(
"s3",
endpoint_url="http://s3.ramo.io",
aws_access_key_id="RAMO_ACCESS_KEY",
aws_secret_access_key="RAMO_SECRET_KEY",
)
import { S3Client } from "@aws-sdk/client-s3";
const client = new S3Client({
endpoint: "http://s3.ramo.io",
region: "us-east-1",
credentials: {
accessKeyId: "RAMO_ACCESS_KEY",
secretAccessKey: "RAMO_SECRET_KEY",
},
});
cfg, _ := config.LoadDefaultConfig(ctx,
config.WithRegion("us-east-1"),
config.WithEndpointResolverWithOptions(
aws.EndpointResolverWithOptionsFunc(
func(service, region string, opts ...interface{}) (aws.Endpoint, error) {
return aws.Endpoint{URL: "http://s3.ramo.io"}, nil
}),
),
)
client := s3.NewFromConfig(cfg)
Also compatible with: rclone, s3cmd, MinIO Client (mc), Cyberduck, s5cmd, and any S3-compatible application.
Error Codes
The Ramo S3 Endpoint returns standard S3-compatible error responses in XML format. Each error includes a code, message, and the relevant resource identifier.
| Error Code | HTTP Status | Description |
|---|---|---|
| AccessDenied | 403 | You do not have permission to perform this operation |
| NoSuchBucket | 404 | The specified bucket does not exist |
| NoSuchKey | 404 | The specified object key does not exist |
| BucketAlreadyExists | 409 | The requested bucket name is already in use |
| BucketNotEmpty | 409 | The bucket must be empty before it can be deleted |
| InvalidBucketName | 400 | The bucket name does not meet naming requirements |
| EntityTooLarge | 400 | The object or part exceeds the maximum allowed size |
| InvalidPart | 400 | One or more parts in the multipart upload are invalid |
| InvalidPartOrder | 400 | Parts must be listed in ascending order by part number |
| ServiceUnavailable | 503 | The service is temporarily unavailable — retry with backoff |
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>missing-file.bin</Key>
<RequestId>RAMO-REQ-a1b2c3d4</RequestId>
</Error>
Retry strategy: For 503 and 429 responses, implement exponential backoff starting at 1 second with a maximum of 5 retries. All AWS SDKs include built-in retry logic.
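That schedule is a doubling series starting at 1 second; a plain-Python sketch (deterministic here, though production retry loops usually add random jitter):

```python
# Sketch: exponential backoff per the retry strategy above
# (1 s initial delay, doubling, at most 5 retries).
def backoff_delays(initial: float = 1.0, retries: int = 5) -> list[float]:
    return [initial * (2 ** attempt) for attempt in range(retries)]

# Delays before retries 1..5: 1, 2, 4, 8, 16 seconds.
```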
Limits & Specifications
| Parameter | Value | Notes |
|---|---|---|
| Max throughput | 100 Gbps | |
| Max object size | 5 GB | |
| Durability | 99.999999999% | 11 nines |
| Egress cost | $0 | Always free |
| Storage cost | $3.99/TB/mo | |
Maximize Throughput
Use multipart uploads with parallel part transfers. Target 64–128 MB part sizes for optimal throughput on large objects.
Optimize Listing
Use prefix and delimiter parameters with ListObjectsV2 to efficiently navigate large buckets without scanning all keys.
Batch Operations
Use DeleteObjects for bulk deletes (up to 1,000 keys per request) instead of individual DeleteObject calls.
Contact us at hi@ramo.io to get your access credentials.