RocksDB-Cloud in our own S3 storage

Hi everyone,

We recently discovered your RocksDB fork, which allows RocksDB to store SST files in a durable S3 repository. We are wondering whether we could use our own S3 repository (we have a NetApp StorageGrid in our own data center, and it would be great to use it instead of AWS S3). It has been a tough task figuring out how to change the endpoint from AWS to our own infrastructure. Is there an easy way to do it?

Thanks in advance!


I'm not quite familiar with NetApp StorageGrid; does it expose an interface similar to AWS S3?

If so, you might be able to override some options in CloudFileSystemOptions to achieve what you want:

  • s3_client_factory: you should be able to control the endpoint by overriding this factory
  • credentials: you can specify the AWS credentials here (e.g., the AWS access key and secret key)
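To make the two options above concrete, here is a rough, untested sketch of pointing RocksDB-Cloud at an S3-compatible endpoint. The endpoint hostname and credential values are placeholders, and the exact `s3_client_factory` signature may differ between rocksdb-cloud versions, so check `cloud_file_system.h` in your checkout before relying on it:

```cpp
// Sketch (untested): configure RocksDB-Cloud to talk to a non-AWS,
// S3-compatible service. Placeholder values throughout.
#include <aws/core/auth/AWSCredentialsProvider.h>
#include <aws/core/client/ClientConfiguration.h>
#include <aws/s3/S3Client.h>
#include <rocksdb/cloud/cloud_file_system.h>

rocksdb::CloudFileSystemOptions cloud_fs_options;

// Static credentials for the on-prem S3 tenant (placeholders).
cloud_fs_options.credentials.InitializeSimple("ACCESS_KEY", "SECRET_KEY");

// Build S3 clients that target the on-prem endpoint instead of AWS.
cloud_fs_options.s3_client_factory =
    [](const std::shared_ptr<Aws::Auth::AWSCredentialsProvider>& provider,
       const Aws::Client::ClientConfiguration& base_config) {
      Aws::Client::ClientConfiguration config = base_config;
      config.endpointOverride = "s3.storagegrid.example.internal:10443";
      config.scheme = Aws::Http::Scheme::HTTPS;
      // Many on-prem S3 services are addressed path-style rather than
      // virtual-hosted-style, hence useVirtualAddressing = false.
      return std::make_shared<Aws::S3::S3Client>(
          provider, config,
          Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::Never,
          /*useVirtualAddressing=*/false);
    };
```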

If it has a completely different interface from AWS S3, you might have to fork the repo and implement your own CloudStorageProvider and CloudFileSystem.

As you said, StorageGrid does expose exactly the same S3 interface as AWS, so I will take a look at these options.

And here's another question about RocksDB-Cloud: given that our database files can (and will) live in the cloud, is there any way to get rid of local storage entirely? Our main concern is the scalability of our system, and we don't see how to achieve it if we depend on local filesystem capacity.

Thank you very much for your response!

There is a keep_local_sst_files option right after the s3_client_factory option I mentioned above, which might be what you need. The CLOUDMANIFEST and MANIFEST files still need to be stored locally, but those two files shouldn't be a concern in terms of local storage scalability.
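As a quick illustration, this untested fragment shows where that option lives; a local directory is still needed for the (small) CLOUDMANIFEST and MANIFEST files:

```cpp
// Sketch (untested): don't keep full SST copies on the local filesystem.
#include <rocksdb/cloud/cloud_file_system.h>

rocksdb::CloudFileSystemOptions cloud_fs_options;
// With this unset, SST files live in the cloud bucket rather than on
// local disk; reads go against cloud storage.
cloud_fs_options.keep_local_sst_files = false;
// CLOUDMANIFEST and MANIFEST are still written locally, but they are
// tiny compared to SST data, so they don't limit scalability.
```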

Oh, thank you very much! I’ll try those properties.