56
votes

I'm currently designing an architecture for a web-based application that should also provide some kind of image storage. Users will be able to upload photos as one of the key features of the service, and viewing these images (via the web) will be one of the primary uses.

However, I'm not sure how to realize such a scalable image storage component in my application. I have already thought about different solutions, but due to my lack of experience I look forward to hearing your suggestions. Aside from the images themselves, metadata must also be saved. Here are my initial thoughts:

  1. Use a (distributed) filesystem like HDFS and prepare dedicated webservers as "filesystem clients" to save uploaded images and serve requests. Image metadata is saved in an additional database, including the file path for each image.

  2. Use a BigTable-oriented system like HBase on top of HDFS and save images and metadata together. Again, webservers handle image uploads and requests.

  3. Use a completely schemaless database like CouchDB for storing both images and metadata, and use the database itself for upload and delivery via its HTTP-based RESTful API. (Additional question: CouchDB does save blobs via Base64. Can it nevertheless return data as image/jpeg etc.?)


11 Answers

46
votes

We have been using CouchDB for that, saving images as attachments. But after a year, the multi-dozen-GB CouchDB database files turned out to be a headache. For example, CouchDB replication still has issues with very large document sizes.

So we just rewrote our software to use CouchDB for image information and Amazon S3 for the actual image storage. The code is available at http://github.com/hudora/huImages
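A minimal sketch of that split, assuming a plain CouchDB HTTP endpoint and the boto3 library (the bucket name, database URL and field names below are made up for illustration):

import uuid
import boto3
import requests

COUCH_URL = "http://localhost:5984/images"   # assumed metadata database
BUCKET = "my-image-bucket"                   # hypothetical S3 bucket

def store_image(data, content_type="image/jpeg"):
    """Upload the binary to S3 and keep only the metadata in CouchDB."""
    key = uuid.uuid4().hex
    s3 = boto3.client("s3")
    s3.put_object(Bucket=BUCKET, Key=key, Body=data, ContentType=content_type)

    # The CouchDB document holds the pointer plus whatever metadata you need.
    doc = {"s3_key": key, "content_type": content_type, "size": len(data)}
    requests.put(f"{COUCH_URL}/{key}", json=doc).raise_for_status()
    return key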

You might want to set up an Amazon S3-compatible storage service on-site for your project. This keeps you flexible and leaves the Amazon option open without requiring external services for now. Walrus seems to be becoming the most popular and scalable S3 clone.

I also urge you to look into the design of LiveJournal with their excellent open-source MogileFS and Perlbal offerings. This combination is probably the most famous image-serving setup.

The Flickr architecture can also be an inspiration, although they don't offer open-source software to the public like LiveJournal does.

14
votes

"Additional question: CouchDB does save blobs via Base64."

CouchDB does not save blobs as Base64; they are stored as straight binary. When retrieving a JSON document with ?attachments=true, we do convert the on-disk binary to Base64 in order to add it safely to the JSON, but that's just a presentation-level thing.

See Standalone Attachments.

CouchDB serves attachments with the content type they are stored with; it's possible, in fact common, to serve HTML, CSS and GIF/PNG/JPEG attachments directly to browsers.

Attachments can be streamed and, in CouchDB 1.1, even support the Range header (for media streaming and/or resumption of an interrupted download).
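For illustration, a rough sketch of that attachment API with Python's requests library (database and document names are made up, and revision handling is simplified):

import requests

DB = "http://localhost:5984/photos"   # hypothetical database

# Create the document, then PUT the attachment as raw binary with its content type.
rev = requests.put(f"{DB}/photo-1", json={"owner": "alice"}).json()["rev"]
with open("cat.jpg", "rb") as f:
    requests.put(
        f"{DB}/photo-1/original.jpg",
        params={"rev": rev},
        data=f.read(),
        headers={"Content-Type": "image/jpeg"},
    )

# A plain GET returns the binary with the stored content type,
# so a browser can load this URL directly as an image.
resp = requests.get(f"{DB}/photo-1/original.jpg")
print(resp.headers["Content-Type"])   # image/jpeg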

8
votes

Use SeaweedFS (formerly called Weed-FS), an implementation of Facebook's Haystack paper.

SeaweedFS is very flexible and pared down to the basics. It was created to store billions of images and serve them fast.
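A hedged sketch of the usual two-step upload flow (the master assigns a file id, then the file is posted to the volume server it names; the host addresses here are assumptions):

import requests

MASTER = "http://localhost:9333"   # assumed SeaweedFS master address

# Step 1: ask the master to assign a file id and a volume server.
assign = requests.get(f"{MASTER}/dir/assign").json()
fid, volume = assign["fid"], assign["url"]

# Step 2: upload the image to that volume server under the assigned fid.
with open("cat.jpg", "rb") as f:
    requests.post(f"http://{volume}/{fid}",
                  files={"file": ("cat.jpg", f, "image/jpeg")})

# The image is then readable from the volume server at that fid.
print(f"http://{volume}/{fid}")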

3
votes

Have you considered Amazon Web Services? S3 is web-based file storage, and SimpleDB is a key->attribute store. Both are performant and highly scalable. They're more expensive than maintaining your own servers and setup (assuming you are going to do it yourself and not hire people), but you get up and running much more quickly.

Edit: I take that back. It's more expensive in the long run at high volumes, but for low volumes it beats the initial cost of buying hardware.

S3: http://aws.amazon.com/s3/ (you could store your image files here, and for performance maybe have an image cache on your server, or maybe not)

SimpleDB: http://aws.amazon.com/simpledb/ (metadata could go here: image id mapping to whatever data you want to store)

Edit 2: I didn't even know about this, but there is a new web service called Amazon CloudFront (http://aws.amazon.com/cloudfront/). It is for fast web content delivery, and it integrates well with S3. Kind of like Akamai for your images. You could use this instead of the image cache.
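For illustration, a small sketch with boto3 (the bucket, domain and attribute names are hypothetical): image bytes go to S3, metadata goes to SimpleDB keyed by the image id.

import boto3

s3 = boto3.client("s3")
sdb = boto3.client("sdb")

def save_image(image_id, data, owner):
    # Binary file in S3 (delivery could be fronted by CloudFront).
    s3.put_object(Bucket="my-image-bucket", Key=image_id,
                  Body=data, ContentType="image/jpeg")
    # Metadata in SimpleDB: image id -> attributes.
    sdb.put_attributes(
        DomainName="images",
        ItemName=image_id,
        Attributes=[
            {"Name": "owner", "Value": owner, "Replace": True},
            {"Name": "bytes", "Value": str(len(data)), "Replace": True},
        ],
    )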

3
votes

We use MogileFS. We're small-scale users with less than 8 TB and some 50 million files. We switched away from Amazon S3 some years ago to get better control over file names and performance.

It's not the prettiest software, but it's very "field tested" and basically all users are using it the same way you will be.

2
votes

As part of Cloudant, I don't want to push product... but BigCouch solves this problem in my science application stack (physics -- nothing to do with Cloudant, and certainly nothing to do with profit!). It marries the simplicity of the CouchDB design with the auto-sharding and scalability that is missing in single-server CouchDB. I generally use it to store a smaller number of big files (multi-GB) and a large number of small files (100 MB or less). I was using S3, but the GET costs actually start to add up for small files that are repeatedly accessed.

1
votes

Ok, if all that AWS stuff isn't going to work, here are a couple of thoughts.

As far as (3) goes, if you put binary data into a database, the same data is going to come out. What makes it a JPEG is the format of the data, not what the database thinks it is. What makes the client (web browser) think it's a JPEG is setting the Content-Type header to image/jpeg. You could also set it to something else (not recommended), like text, and that's how the browser would try to interpret it.
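To make that concrete, a minimal sketch in Python using Flask (load_image_bytes is a hypothetical helper standing in for whatever store you pick):

from flask import Flask, Response

app = Flask(__name__)

def load_image_bytes(image_id):
    # Hypothetical helper: fetch the raw bytes from wherever they live.
    with open(f"/data/images/{image_id}.jpg", "rb") as f:
        return f.read()

@app.route("/images/<image_id>")
def serve_image(image_id):
    data = load_image_bytes(image_id)
    # The Content-Type header is what tells the browser these bytes are a JPEG.
    return Response(data, mimetype="image/jpeg")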

For on-disk storage, I like CouchDB for its simplicity, but HDFS would certainly work. Here's a link to a post about serving image content from CouchDB: http://japhr.blogspot.com/2009/04/render-couchdb-images-via-sinatra.html

Edit: here's a link to a useful discussion about caching images in memcached vs. serving them from disk under Linux/Apache.
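The usual pattern from that discussion looks roughly like this (a sketch using the pymemcache client; the key prefix, paths and expiry are assumptions):

from pymemcache.client.base import Client

mc = Client(("localhost", 11211))

def get_image(image_id):
    """Try memcached first, fall back to disk, then populate the cache."""
    key = f"img:{image_id}"
    data = mc.get(key)
    if data is None:
        with open(f"/data/images/{image_id}.jpg", "rb") as f:
            data = f.read()
        mc.set(key, data, expire=3600)   # cache for an hour
    return data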

1
votes

I've been experimenting with some of the _update functionality available to CouchDB view servers, in my Python view server.

One really cool thing I did was an update function for image uploads so that I could use PIL to create thumbnails and other related images and attach them to the document when they get pushed to CouchDB.

This might be useful if you need image manipulation and want to cut down on the amount of code and infrastructure you need to keep up.
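A rough idea of the thumbnail step on its own, outside the view server for simplicity (Pillow/PIL plus a plain HTTP attachment upload; the URL and size are assumptions):

import io
import requests
from PIL import Image

DB = "http://localhost:5984/photos"   # hypothetical database

def attach_thumbnail(doc_id, rev, original_bytes, size=(200, 200)):
    """Create a thumbnail with PIL and attach it to an existing CouchDB document."""
    img = Image.open(io.BytesIO(original_bytes)).convert("RGB")
    img.thumbnail(size)
    buf = io.BytesIO()
    img.save(buf, format="JPEG")
    requests.put(
        f"{DB}/{doc_id}/thumbnail.jpg",
        params={"rev": rev},
        data=buf.getvalue(),
        headers={"Content-Type": "image/jpeg"},
    )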

1
votes

I've written an image store on top of Cassandra. We have a lot of writes and random reads, and our read/write ratio is low. For a high read/write ratio I suggest MongoDB (GridFS).
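For the GridFS suggestion, a minimal sketch with pymongo (the database name is made up):

import gridfs
from pymongo import MongoClient

db = MongoClient()["imagestore"]   # hypothetical database name
fs = gridfs.GridFS(db)

# Store the image; GridFS chunks large files across documents automatically.
with open("cat.jpg", "rb") as f:
    file_id = fs.put(f.read(), filename="cat.jpg", contentType="image/jpeg")

# Read it back.
data = fs.get(file_id).read()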

0
votes

Here is an example of storing blob images in CouchDB using PHP Laravel. In this example, I am storing three images based on user requirements.

Establishing the connection to CouchDB:

$connection = DB::connection('your database name');

/* Fetching the user's uploaded images */

$FirstImage  = base64_encode(file_get_contents(Input::file('FirstImageInput')));
$SecondImage = base64_encode(file_get_contents(Input::file('SecondImageInput')));
$ThirdImage  = base64_encode(file_get_contents(Input::file('ThirdImageInput')));

list($id, $rev) = $connection->putDocument(array(
    'name' => $name,
    'location' => $location,
    'phone' => $phone,
    'website' => $website,
    "_attachments" =>[
        'FirstImage.png' => [
            'content_type' => "image/png",
            'data' => $FirstImage
        ],
        'SecondImage.png' => [
            'content_type' => "image/png",
            'data' => $SecondImage
        ],
        'ThirdImage.png' => [
            'content_type' => "image/png",
            'data' => $ThirdImage
        ]
    ],
), $id, $rev);

...

In the same way, you can store a single image.