Reduce the cost of using AWS EC2
Among the cost components of Amazon EC2 there is one line item that few people notice at first but that can lead to serious expenses: Data Transfer. While the rent for instances and EBS volumes can be planned and controlled, traffic is hard to predict, and the monthly bill will not let you ignore it.
For example: an average news site with 30,000 visits a day will run on a small or even a micro instance. Take a full page size of 2 MB; the monthly traffic (with no content cached) is then 30000 * 0.002 * 30 = 1800 GB, or $216. The Data Transfer cost turns out to be higher than the rent for the instance itself! On S3 the traffic pricing situation is exactly the same.
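As a sanity check, the per-gigabyte rate implied by these figures is 216 / 1800 = $0.12/GB (derived from the article's own numbers, not a current price list); the arithmetic in shell, with bc:

# 30,000 visits/day * 2 MB/page (0.002 GB) * 30 days
echo "30000 * 0.002 * 30" | bc   # prints 1800.000 (GB per month)
echo "1800 * 0.12" | bc          # prints 216.00 (dollars per month)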
Most of this traffic is static content that does not have to be served directly from Amazon. Something cheap and fast is needed for the job, and the simplest dedicated server at Hetzner fits perfectly.
Static content may be called static, but it changes constantly: files are uploaded, updated, and deleted, so you will have to set up automatic synchronization between the Amazon instance and the static server.
We will use lsyncd for this: it monitors files in a given directory via inotify and runs a piece of Lua script whenever something changes (a more complete description of lsyncd can be found in a good post: habrahabr.ru/post/132098).
Configure everything on the Amazon server (the example is for CentOS).
Install lsyncd and rsync:
yum install lsyncd rsync
mkdir -p /var/log/lsyncd
Create a config at /etc/lsyncd.conf that says, in essence: "synchronize all file-change events except php files, no more often than once every 3 seconds, using rsync over ssh":
settings = {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status",
}
sync {
    default.rsyncssh,                  -- rsync over ssh
    source = "/home/user/example.com",
    host = "static.example.com",
    targetdir = "/home/user/static.example.com",
    rsyncOps = {"-av", "--temp-dir=/tmp", "--delete", "--exclude=*php"},
    exclude = {"somestaticfile.json"},
    delay = 3,                         -- batch events; sync at most once every 3 seconds
}
Generate ssh keys (if you have not already) with ssh-keygen and append the resulting id_rsa.pub to authorized_keys on the static server.
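A minimal sketch of that step (assuming the default ~/.ssh layout and a user named user on the static server):

ssh-keygen -t rsa                      # accept the defaults; empty passphrase so lsyncd can sync unattended
ssh-copy-id user@static.example.com    # or append ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys by hand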
Run lsyncd:
lsyncd /etc/lsyncd.conf
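To confirm it started cleanly, one can watch the log file declared in the config above:

tail -f /var/log/lsyncd/lsyncd.log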
Messages about synchronization should start appearing in the logs immediately, and the files will show up on the static server, from where they can already be served, naturally with the help of nginx. The one case where a client would get an error is when a file has not been synchronized yet but is already being requested; such requests need to be proxied back to the Amazon instance. This can happen, for example, when an image is uploaded and displayed right away, or if synchronization was down for some reason. The nginx configuration looks like this:
server {
    listen 80;
    server_name static.example.com;

    location / {
        root /home/user/static.example.com;
        add_header Access-Control-Allow-Origin *;  # to be able to receive JSON via jQuery
        try_files $uri @pass;                      # serve the local copy if it exists
    }

    location @pass {
        # the file has not been synchronized yet: proxy back to the Amazon instance
        proxy_set_header Host "example.com";
        proxy_pass http://example.com;
    }
}
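A quick way to exercise the fallback (the file name here is hypothetical; assumes DNS for static.example.com already points at the Hetzner box):

# a file that exists on the Amazon instance but has not been synced yet
# should still come back with 200, proxied from example.com
curl -I http://static.example.com/fresh-upload.jpg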
The result combines the reliability and flexibility of Amazon with the low cost of Hetzner.
PS
For sites with a large number of files, you may need to raise the inotify limits via sysctl:
fs.inotify.max_user_instances = 1024
fs.inotify.max_user_watches = 65000
fs.inotify.max_queued_events = 16384
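A sketch of applying them, assuming the three lines above are added to /etc/sysctl.conf so they survive a reboot:

sysctl -p                            # reload /etc/sysctl.conf
sysctl fs.inotify.max_user_watches   # verify the active value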
Article based on information from habrahabr.ru