1
votes
  • I have an S3 bucket on one of my AWS accounts.
  • I am uploading files to the bucket using the s3cmd interface.
  • I mounted the S3 bucket on an Ubuntu EC2 instance using s3fs; the mount succeeded and I am able to list the files on my EC2 instance.

    s3fs command - s3fs XXXXXX -ouse_cache=/tmp/s3cache -oallow_other -opasswd_file=/etc/passwd-s3fs -ourl=http://s3.amazonaws.com -odefault_acl=public-read-write /mnt/XXXXXXXX

  • Installed apache2 on the EC2 instance and changed the document root to the mounted S3 path - /mnt/XXXXXXXX (a sketch of the config follows this list). I configured this successfully and restarted the apache2 service.

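Roughly, the virtual host change looks like the sketch below - treat it as an anonymized approximation rather than my exact config (the Directory block uses Apache 2.2 syntax):

    <VirtualHost *:80>
        # Document root pointed at the s3fs mount (same anonymized path as above)
        DocumentRoot /mnt/XXXXXXXX

        <Directory /mnt/XXXXXXXX>
            Options Indexes FollowSymLinks
            AllowOverride None
            # Apache 2.2 syntax; on Apache 2.4 this would be "Require all granted"
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>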
But when I try to access the S3 files through apache2, I get "Forbidden: You don't have permission to access /temp/xxxxTesting.flv on this server."

Can anyone help me with this issue? What are the possible reasons for it? I have tried all the suggestions I could find, but all in vain. Please guide me on how to resolve this issue.

I am using the latest version of s3fs - 1.71.

3 Answers

1
votes

Your question mark key seems to be stuck.

S3 is not a block device and not a filesystem, and serving files through s3fs or any other filesystem emulator is never going to give you optimal performance.

Fortunately, there is a much simpler solution.

My image files are in /var/content/images.

If you fetch a file from my-web-site.com/images/any/path/here.jpg (or .jpeg, or .gif), then my local Apache server checks to see whether /var/content/images/any/path/here.jpg is actually a file on the local hard drive. If it is, Apache serves up the local file.

If it is not a local file, then Apache fetches the file from the S3 bucket using HTTP ([P] is for proxy), which is, of course, S3's native interface and something Apache handles quite effectively as well.

If the file isn't in S3 either, Apache returns the error page that the S3 bucket sends back when the proxied fetch fails. Done, in essentially three lines of config:

    RewriteEngine on
    # Only rewrite when the requested file does not exist on the local disk...
    RewriteCond /var/content%{REQUEST_FILENAME} !-f
    # ...and proxy ([P]) the request to the same path in the S3 bucket.
    RewriteRule ^/images/(.*\.(gif|jpe?g))$ http://my-bucket-name.s3-website-us-east-1.amazonaws.com/images/$1 [P]

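A quick way to check the fallback behaviour, assuming curl is installed and using the placeholder hostname and paths from above:

    # Served from the local disk if /var/content/images/local.jpg exists:
    curl -I http://my-web-site.com/images/local.jpg

    # Proxied from the bucket if the file only exists in S3:
    curl -I http://my-web-site.com/images/any/path/here.jpg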
This approach, to me, seems far simpler than trying to hack the same functionality with s3fs.

Note that on some of my systems I need to prepend the physical path before %{REQUEST_FILENAME} and on some I don't. I haven't looked into the specifics of why that is, but be aware that different environments might require a slightly different setup. You will also need the appropriate modules available (mod_rewrite, plus mod_proxy and mod_proxy_http for the [P] flag), and the regular expression in the example only matches filenames ending in .gif, .jpeg, or .jpg.

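On a Debian/Ubuntu-flavoured Apache install (as in the question), enabling those modules would look something like this; other platforms load modules differently:

    # mod_rewrite for the rules; mod_proxy and mod_proxy_http for the [P] flag
    sudo a2enmod rewrite proxy proxy_http
    sudo service apache2 restart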
0
votes

s3fs is not a block device. Under the covers it copies files to your temp directory when you try to access them. It does not work like NFS.

You can copy files to and from S3 with s3fs, but you should not run your application directly off an s3fs mount (see the sketch below).

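For example, a sketch using the anonymized mount point and the file from the question - copy the file out of the mount once and let Apache serve the local copy:

    # Copy the file out of the mounted bucket onto the local disk...
    cp /mnt/XXXXXXXX/temp/xxxxTesting.flv /var/www/
    # ...and serve the local copy (adjust /var/www to wherever your DocumentRoot points).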
0
votes

I would like to recommend taking a look at the new project RioFS (a userspace S3 filesystem): https://github.com/skoobe/riofs. This project is an alternative to s3fs; its main advantages compared to s3fs are simplicity, the speed of operations, and bug-free code. The project is currently in a "testing" state, but it has been running on several heavily loaded file servers for quite some time.

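Mounting with RioFS looks roughly like the sketch below - the bucket name and mount point are placeholders, and the exact invocation may differ between versions, so please check the README on the project page:

    # Credentials are picked up from environment variables:
    export AWSACCESSKEYID="your-access-key"
    export AWSSECRETACCESSKEY="your-secret-key"
    # Mount the bucket at the given mount point:
    riofs your-bucket-name /mnt/riofs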
If you have any trouble accessing your Apache files, please create a ticket on the RioFS GitHub page!

Hope it helps!