Spring Boot and DigitalOcean Spaces for File Storage

DigitalOcean Spaces is an object storage solution and one of the cheapest available right now. Integration for various languages is supported through APIs and the AWS SDKs. In this post we will integrate DigitalOcean Spaces with Spring Boot using the AWS SDK, with a practical demo UI built with Next.js.

I have come to understand that maintaining a database table for the files stored on DO Spaces makes managing them from an application context much easier. So we will use an H2 database to store image details every time a file is uploaded to DO Spaces.

Create DigitalOcean Space:

Go to the DigitalOcean console, click on the Create button and choose the Spaces option. Select your region of choice and give a unique name to the subdomain that will be used to access the files on DO Spaces.


Create DigitalOcean Spaces Access key:

Click on the API tab and, under Tokens and Keys, find the Spaces Access Keys section. Click the 'Generate New Key' button to get the key and secret needed to access Spaces from Spring Boot.


Create a Spring Boot starter project using Spring Initializr with the following dependencies:

  1. Spring Boot Starter Web for the REST interfaces
  2. Spring Boot Starter Data JPA and the H2 database for persistence
  3. AWS SDK for the DO Spaces interface

Application Properties:

Let us add a few properties to the application properties file as environment variables. DO_SPACE_KEY and DO_SPACE_SECRET are the Access Key and Secret we got from DigitalOcean earlier.

If https://sample.nyc3.digitaloceanspaces.com were your Spaces origin URL, then DO_SPACE_BUCKET is “sample”, DO_SPACE_REGION is “nyc3” and DO_SPACE_ENDPOINT is “digitaloceanspaces.com”. PATH_TO_DB_FILE is any location on your machine to store the H2 database file.
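That split can be sketched in plain Java. `SpacesUrlParser` is a hypothetical helper used only to illustrate how the three values fall out of the host name; it is not part of the sample project.

```java
// Hypothetical helper: derive bucket, region and endpoint from a
// Spaces origin URL such as https://sample.nyc3.digitaloceanspaces.com
public class SpacesUrlParser {

	public static String[] parse(String originUrl) {
		// strip the scheme, then split the host into at most three parts
		String host = originUrl.replaceFirst("^https?://", "");
		// parts[0] = bucket, parts[1] = region, parts[2] = endpoint domain
		return host.split("\\.", 3);
	}

	public static void main(String[] args) {
		String[] p = parse("https://sample.nyc3.digitaloceanspaces.com");
		System.out.println("bucket=" + p[0] + " region=" + p[1] + " endpoint=" + p[2]);
	}
}
```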

## DO properties
    do:
      space:
        key: ${DO_SPACE_KEY}
        secret: ${DO_SPACE_SECRET}
        endpoint: ${DO_SPACE_ENDPOINT}
        region: ${DO_SPACE_REGION}
        bucket: ${DO_SPACE_BUCKET}

## Database Properties
    spring:
      datasource:
        url: jdbc:h2:file:${PATH_TO_DB_FILE};DB_CLOSE_ON_EXIT=FALSE
        driverClassName: org.h2.Driver
        username: sa
        password: password
      jpa:
        database-platform: org.hibernate.dialect.H2Dialect
        hibernate:
          ddl-auto: update
      h2:
        console:
          enabled: true

AWS SDK Initialisation:

Create a configuration bean to initialise the AWS SDK using the properties defined earlier in the application properties file.

@Configuration
public class DoConfig {

	@Value("${do.space.key}")
	private String doSpaceKey;

	@Value("${do.space.secret}")
	private String doSpaceSecret;

	@Value("${do.space.endpoint}")
	private String doSpaceEndpoint;

	@Value("${do.space.region}")
	private String doSpaceRegion;

	@Bean
	public AmazonS3 getS3() {
		BasicAWSCredentials creds = new BasicAWSCredentials(doSpaceKey, doSpaceSecret);
		return AmazonS3ClientBuilder.standard()
				.withEndpointConfiguration(new EndpointConfiguration(doSpaceEndpoint, doSpaceRegion))
				.withCredentials(new AWSStaticCredentialsProvider(creds)).build();
	}
}


CRUD Operations

Create a service class to implement save, delete and read operations.

We will use the initialised AmazonS3 interface, a repository interface ImageRepository to store image metadata in the database, the bucket name, and a folder under the bucket where the uploaded files will be placed.

@Service
public class ImageStorageServiceImpl implements ImageStorageService {

	@Autowired
	ImageRepository imageRepo;

	@Autowired
	AmazonS3 s3Client;

	@Value("${do.space.bucket}")
	private String doSpaceBucket;

	String FOLDER = "files/";

	/*****  Remaining Code  *****/
}


Upload/Save File

The file uploaded to the Spring Boot server is received as a multipart file. Besides the content length and content type seen in the code, other metadata can also be added, which is tagged along with the file on DO Spaces. Using the S3 interface method putObject, the multipart file as an input stream, the metadata and the file access type (ACL) are passed to save the file to the Space.

	private void saveImageToServer(MultipartFile multipartFile, String key) throws IOException {
		ObjectMetadata metadata = new ObjectMetadata();
		metadata.setContentLength(multipartFile.getSize());
		if (multipartFile.getContentType() != null && !"".equals(multipartFile.getContentType()))
			metadata.setContentType(multipartFile.getContentType());
		s3Client.putObject(new PutObjectRequest(doSpaceBucket, key, multipartFile.getInputStream(), metadata)
				.withCannedAcl(CannedAccessControlList.PublicRead));
	}
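If the browser does not supply a content type, the object would be stored without one. One possible fallback, not part of the original service, is to guess the type from the file name using the JDK's own URLConnection:

```java
import java.net.URLConnection;

// Hypothetical fallback: guess a content type from the file name when
// the multipart request does not provide one.
public class ContentTypeGuess {

	public static String guess(String fileName) {
		String type = URLConnection.guessContentTypeFromName(fileName);
		// default to a generic binary type when nothing matches
		return type != null ? type : "application/octet-stream";
	}

	public static void main(String[] args) {
		System.out.println(guess("photo.png"));
		System.out.println(guess("report.unknownext"));
	}
}
```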

A metadata record of each uploaded file is saved to the image table using ImageRepository. The key could be any unique string; in this sample the original file name is used as the key.

/** http://localhost:8080/save/image **/

	public void saveFile(MultipartFile multipartFile) throws IOException {
		String extension = FilenameUtils.getExtension(multipartFile.getOriginalFilename());
		String imgName = FilenameUtils.removeExtension(multipartFile.getOriginalFilename());
		String key = FOLDER + imgName + "." + extension;
		saveImageToServer(multipartFile, key);
		Image image = new Image();
		image.setName(imgName);
		image.setExt(extension);
		image.setCreatedtime(new Timestamp(new Date().getTime()));
		imageRepo.save(image);
	}
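Because the original file name is used as the key, uploading two files with the same name would overwrite each other on Spaces. A collision-resistant alternative could append a UUID to the name; `KeyFactory` below is a hypothetical helper sketching that idea, not part of the sample service.

```java
import java.util.UUID;

// Hypothetical helper: build a collision-resistant object key by
// appending a random UUID to the file name.
public class KeyFactory {

	static final String FOLDER = "files/";

	public static String uniqueKey(String imgName, String extension) {
		return FOLDER + imgName + "-" + UUID.randomUUID() + "." + extension;
	}

	public static void main(String[] args) {
		// e.g. files/photo-3f1c...-....png (suffix differs on every call)
		System.out.println(uniqueKey("photo", "png"));
	}
}
```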

Delete File

To delete a file, use the S3 SDK method deleteObject with the bucket name and the key used to store the file.

/** http://localhost:8080/delete/image/{fileId} **/

	public void deleteFile(Long fileId) throws Exception {
		Optional<Image> imageOpt = imageRepo.findById(fileId);
		if (imageOpt.isPresent()) {
			Image image = imageOpt.get();
			String key = FOLDER + image.getName() + "." + image.getExt();
			s3Client.deleteObject(new DeleteObjectRequest(doSpaceBucket, key));
			imageRepo.delete(image);
		}
	}

Read Files

The S3 SDK does have a listObjects method to retrieve a list of files, but it can be too slow to call from an application on every request. Luckily, all we have to do is query our metadata table using ImageRepository with findAll, or a paginated findAll.

/** http://localhost:8080/get/images **/

	public List<Image> getImage() {
		return (List<Image>) imageRepo.findAll();
	}
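The page slicing that a paginated findAll performs can be illustrated in plain Java. Spring Data does this at the query level via PageRequest; `PageSlice` below is only an in-memory illustration of the arithmetic, not the repository API.

```java
import java.util.Arrays;
import java.util.List;

// Illustration only: slice a list the way a paginated query returns
// page `page` of size `size` (0-indexed pages).
public class PageSlice {

	public static <T> List<T> page(List<T> all, int page, int size) {
		int from = Math.min(page * size, all.size());
		int to = Math.min(from + size, all.size());
		return all.subList(from, to);
	}

	public static void main(String[] args) {
		List<Integer> ids = Arrays.asList(1, 2, 3, 4, 5);
		System.out.println(page(ids, 1, 2)); // prints [3, 4]
	}
}
```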

DigitalOcean Spaces Demo UI

That's it!

Source Code: