
We have an Artifactory instance deployed and I am trying to figure out whether it can meet my use case. Normally, artifacts are deleted within a week or so and fit in X GB of local storage, but we'd like to be able to:

  • Keep some artifacts around much longer, and since they are accessed infrequently, store them in AWS S3.
  • Sometimes artifacts can't be cleaned up in time, so we'd like to burst to the cloud when local storage overflows.

I was thinking I could do the following:

  • Local repository of X GB
  • Repo pointing to S3
  • Virtual repo in front of both of these
  • Set up a plugin to move artifacts from local to S3 according to our policies

However, I can't figure out what a Filestore is in Artifactory, or how you'd have two repositories backed by different filestores.

Anyone have pointers to documentation or anything that can help? The docs I can find are rather slim on the high-level details of filestores and repositories.


1 Answer


The Artifactory binary provider does not support configuring multiple storage backends, so it is not possible to use, for example, S3 and an NFS (or local disk) filestore in parallel. The main reason for this limitation is that Artifactory uses checksum-based storage, which stores each binary only once and keeps pointers to it from all relevant repositories. For that reason, Artifactory does not manage separate storage per repository.
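You can see the checksum-based behaviour from the REST API. The sketch below (the base URL, token, and the two generic local repository names are all placeholders, not anything from your setup) deploys the same bytes to two repositories and checks via the Storage API that both entries record the same SHA-1, i.e. both point at the same binary in the single filestore:

```python
import hashlib
import requests

# Placeholder values -- substitute your own Artifactory URL, token, and repo names.
BASE = "https://artifactory.example.com/artifactory"
AUTH = {"Authorization": "Bearer <token>"}

def deploy(repo, path, data):
    """Deploy a binary to a repository via the REST API."""
    r = requests.put(f"{BASE}/{repo}/{path}", data=data, headers=AUTH)
    r.raise_for_status()

def sha1_of(repo, path):
    """Read the checksum Artifactory recorded for an artifact (Storage API)."""
    r = requests.get(f"{BASE}/api/storage/{repo}/{path}", headers=AUTH)
    r.raise_for_status()
    return r.json()["checksums"]["sha1"]

payload = b"same bytes, deployed twice"
deploy("repo-a-local", "demo/artifact.bin", payload)
deploy("repo-b-local", "demo/artifact.bin", payload)

# Both repository entries report the same checksum: the binary exists once
# in the filestore, and each repository just holds a pointer to it.
assert sha1_of("repo-a-local", "demo/artifact.bin") == \
       sha1_of("repo-b-local", "demo/artifact.bin") == \
       hashlib.sha1(payload).hexdigest()
```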

For archiving purposes, one possible solution is to set up a second Artifactory instance dedicated to archiving. This instance can be connected to an S3 storage backend.
You can use replication to synchronize between the two instances (without syncing deletes). In your master Artifactory you can have one or more repositories containing the artifacts that should be archived; those artifacts will be replicated to the archive Artifactory and can later be deleted from the master.
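As a rough sketch of how that push replication could be set up over the REST API (the instance URLs, repository keys, credentials, and cron schedule below are all assumptions, and you should check the replication API field names against the docs for your version), the important part is leaving syncDeletes off so that deleting from the master never removes the archived copy:

```python
import requests

# Hypothetical values -- substitute your own instances, repo keys, and credentials.
MASTER = "https://master.example.com/artifactory"
ARCHIVE = "https://archive.example.com/artifactory"
AUTH = {"Authorization": "Bearer <master-admin-token>"}

# Push replication from a repo on the master to the archive instance.
replication_config = {
    "url": f"{ARCHIVE}/archive-repo-local",
    "username": "replication-user",
    "password": "<password-or-token>",
    "cronExp": "0 0 2 * * ?",        # run nightly at 02:00
    "enabled": True,
    "syncDeletes": False,            # the key setting for archiving
    "syncProperties": True,
}

r = requests.put(
    f"{MASTER}/api/replications/archive-staging-local",
    json=replication_config,
    headers=AUTH,
)
r.raise_for_status()
```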
You can use a user plugin to decide which artifacts should be moved to the archive repository.
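User plugins are written in Groovy, but the same selection logic can also be driven from outside Artifactory with a scheduled script against the REST API. This is only a rough alternative sketch: the repository names, base URL, token, and the "older than 7 days" cutoff are all assumptions (and the relative-time syntax in the AQL query is worth verifying against your version's docs). It finds old artifacts with AQL and moves them into the repository that replicates to the archive instance:

```python
import requests

# Hypothetical names and policy -- adjust repo keys, URL, token, and the age cutoff.
BASE = "https://master.example.com/artifactory"
AUTH = {"Authorization": "Bearer <token>"}
SOURCE_REPO = "builds-local"            # regular short-lived repo
STAGING_REPO = "archive-staging-local"  # repo that replicates to the archive instance

# AQL query: everything in the source repo older than 7 days.
aql = (
    'items.find({"repo":"%s","created":{"$before":"7d"}})'
    '.include("repo","path","name")' % SOURCE_REPO
)
resp = requests.post(
    f"{BASE}/api/search/aql",
    data=aql,
    headers={"Content-Type": "text/plain", **AUTH},
)
resp.raise_for_status()

for item in resp.json()["results"]:
    # Items at the repository root report their path as "."
    path = item["path"]
    full_path = item["name"] if path == "." else f'{path}/{item["name"]}'
    # Move the artifact into the staging repo; replication then pushes it to
    # the archive instance, after which it can be deleted from the master.
    move = requests.post(
        f"{BASE}/api/move/{SOURCE_REPO}/{full_path}?to=/{STAGING_REPO}/{full_path}",
        headers=AUTH,
    )
    move.raise_for_status()
```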