Add FREE 75GB Storage to Your Personal VPS (Scaleway S3 Free Storage Bucket)


Introduction

Scaleway offers quite a generous free tier, which includes 75GB of Object Storage at no cost. Let's see what their free tier storage includes:

Storage pricing for Scaleway Object Storage:

| Type of consumption | Price |
|---|---|
| Storage | 75 GB free each month, then €0.0000134/GB/hour (≈ €0.01/GB/month) |
| Intra-regional* outgoing data transfer (to other products in the same region) | Free |
| Inter-regional* and external outgoing data transfer (to other products in a different region and the Internet) | 75 GB free each month, then €0.01/GB |
| Archiving objects (Object Storage Standard → C14 Cold Storage / Glacier) | Free |
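As a quick sanity check on the two storage figures: €0.0000134/GB/hour × ~730 hours in a month ≈ €0.0098/GB/month, which matches the quoted €0.01/GB/month.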

You can also find pricing for the other services Scaleway provides on their website.

Generate A New Scaleway API Key and Create a Bucket

1. Sign up for a Scaleway.com account.

2. Generate a new API key from the Credentials page.

3. Get the Access Key and Secret Key.

4. Create a bucket in Object Storage and check the bucket settings.

One thing we need to do to avoid charges is to not select PARIS as the region. By default, PARIS uses Standard (Multi-AZ) replication to store your uploaded files. Although the storage class can be changed to One-Zone IA manually from the web console or CLI, that becomes a problem with s3fs, the program we are going to use to mount this storage. So the recommendation is to use one of the other two regions, AMSTERDAM or WARSAW: neither supports Multi-AZ, so by default they use One-Zone IA to store your files.
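If you prefer the command line to the web console, the bucket can also be created with the standard AWS CLI pointed at Scaleway's S3-compatible endpoint. This is a minimal sketch: the bucket name my-vps-bucket is a placeholder, and it assumes the AWS CLI is installed and configured with your Scaleway access key, secret key, and region (nl-ams here).

# "my-vps-bucket" is a placeholder; pick a globally unique name
aws s3api create-bucket --bucket my-vps-bucket --endpoint-url https://s3.nl-ams.scw.cloud

# Confirm the bucket exists
aws s3 ls --endpoint-url https://s3.nl-ams.scw.cloud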

Mount the Bucket on Your VPS

1. Log into your VPS.

2. Execute the following commands to configure the environment.

Replace ACCESS_KEY:SECRET_KEY below with your own access key and secret key:

apt update && apt install -y s3fs
echo "user_allow_other" >> /etc/fuse.conf
mkdir -p /oss
echo ACCESS_KEY:SECRET_KEY > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

3. Mount the bucket.

You can get the bucket ID from the bucket details page; it is simply the name you gave the bucket when you created it.

s3fs BUCKET_ID /oss -o allow_other -o passwd_file=~/.passwd-s3fs -o use_path_request_style -o endpoint=BUCKET_REGION -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o url=https://s3.BUCKET_REGION.scw.cloud

BUCKET_REGION: either nl-ams or pl-waw, depending on the region you selected.
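For example, with a bucket named vps-mount-amsterdam (the name used later in this post) in the AMSTERDAM region, the command becomes:

s3fs vps-mount-amsterdam /oss -o allow_other -o passwd_file=~/.passwd-s3fs -o use_path_request_style -o endpoint=nl-ams -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o url=https://s3.nl-ams.scw.cloud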

4. Check the mount result using the df -h command.

root@vps:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       9.6G  2.4G  7.2G  25% /
devtmpfs        479M     0  479M   0% /dev
tmpfs           483M     0  483M   0% /dev/shm
tmpfs            97M  928K   96M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           483M     0  483M   0% /sys/fs/cgroup
/dev/loop0       56M   56M     0 100% /snap/core18/2538
/dev/loop1       62M   62M     0 100% /snap/core20/1611
/dev/loop2       68M   68M     0 100% /snap/lxd/22753
/dev/loop3      295M  295M     0 100% /snap/google-cloud-cli/64
/dev/loop4       47M   47M     0 100% /snap/snapd/16292
/dev/sda15      105M  5.2M  100M   5% /boot/efi
/dev/loop5       56M   56M     0 100% /snap/core18/2560
/dev/loop6      297M  297M     0 100% /snap/google-cloud-cli/66
/dev/loop7       64M   64M     0 100% /snap/core20/1623
tmpfs            97M     0   97M   0% /run/user/1001
s3fs            256T     0  256T   0% /oss
root@vps:/#
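Note that the 256T size shown for the s3fs mount is a placeholder value reported by s3fs rather than a real quota; object storage has no fixed capacity for df to report.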

5. To remove the mount, use "umount /oss" or reboot the machine.

Auto Mount Once the System Reboots

Method 1: Supervisor

We can use the Supervisor program to handle this auto-mount task once the system reboots.

apt install -y supervisor
systemctl enable supervisor
vi /etc/supervisor/conf.d/s3fs.conf
[program:s3fs]
command=/bin/bash -c "s3fs vps-mount-amsterdam /oss -o allow_other -o passwd_file=~/.passwd-s3fs -o use_path_request_style -o endpoint=nl-ams -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o url=https://s3.nl-ams.scw.cloud"
directory=/
autorestart=true
stderr_logfile=/supervisor-err.log
stdout_logfile=/supervisor-out.log
user=root
stopsignal=INT

Reboot the system, and it should automatically mount this new storage into your OS.
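You can confirm that Supervisor launched the mount job with supervisorctl, then check the mount itself:

supervisorctl status s3fs
df -h /oss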

There can be a problem if another application uses this mounted storage folder before the system mounts it.

For example, suppose you have an nginx website in the folder /oss/nginxsite.

Because nginx auto-starts when the system reboots, it might start before the system mounts the storage. In this case, we will disable nginx's auto-start, then use our supervisor command to start it after we mount the storage.

systemctl disable nginx

Then we edit our s3fs.conf file to start nginx after we mount the storage.

[program:s3fs]
command=/bin/bash -c "s3fs vps-mount-amsterdam /oss -o allow_other -o passwd_file=~/.passwd-s3fs -o use_path_request_style -o endpoint=nl-ams -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o url=https://s3.nl-ams.scw.cloud && cd /oss/nginxsite && systemctl start nginx"
directory=/
autorestart=true
stderr_logfile=/supervisor-err.log
stdout_logfile=/supervisor-out.log
user=root
stopsignal=INT
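If you would rather apply the edited configuration without a full reboot, Supervisor can reload it directly:

supervisorctl reread
supervisorctl update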

Method 2: systemd

Here is an example of an rclone service; it can easily be modified for s3fs.

Create rclone.service

To make rclone mount Google Drive even after the VPS reboots, create /usr/lib/systemd/system/rclone.service with the following content:


vi /usr/lib/systemd/system/rclone.service

[Unit]
Description=rclone

[Service]
User=root
ExecStart=/usr/bin/rclone mount google-drive: /root/gdrive --allow-other --allow-non-empty --vfs-cache-mode writes
Restart=on-abort

# The [Install] section is required for "systemctl enable" to work
[Install]
WantedBy=multi-user.target

You can use the following command to enable this service, then reboot the system to confirm:

systemctl enable rclone.service
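For s3fs, an equivalent unit might look like the sketch below. It assumes the bucket name vps-mount-amsterdam and the /oss mount point used earlier; note the absolute passwd_file path, since systemd does not expand ~.

vi /usr/lib/systemd/system/s3fs.service

[Unit]
Description=s3fs mount for Scaleway Object Storage bucket
Wants=network-online.target
After=network-online.target

[Service]
# s3fs daemonizes by default, so oneshot + RemainAfterExit keeps the unit active
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/s3fs vps-mount-amsterdam /oss -o allow_other -o passwd_file=/root/.passwd-s3fs -o use_path_request_style -o endpoint=nl-ams -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o url=https://s3.nl-ams.scw.cloud
ExecStop=/bin/umount /oss

[Install]
WantedBy=multi-user.target

Then enable it the same way: systemctl enable s3fs.service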

Local Hard Drive Performance Test

root@vps:/# dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
10240+0 records in
10240+0 records out
83886080 bytes (84 MB, 80 MiB) copied, 0.0591406 s, 1.4 GB/s
root@vps:/# dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
1024+0 records in
1024+0 records out
402653184 bytes (403 MB, 384 MiB) copied, 0.345877 s, 1.2 GB/s
root@vps:/#

Scaleway Object Storage Bucket Performance Test:

root@vps:/# dd if=/dev/zero of=/oss/output bs=8k count=10k; rm -f /oss/output
10240+0 records in
10240+0 records out
83886080 bytes (84 MB, 80 MiB) copied, 5.91952 s, 14.2 MB/s
root@vps:/#
root@vps:/# dd if=/dev/zero of=/oss/output conv=fdatasync bs=384k count=1k; rm -f /oss/output
1024+0 records in
1024+0 records out
402653184 bytes (403 MB, 384 MiB) copied, 15.3656 s, 26.2 MB/s
root@vps:/#
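The conv=fdatasync flag forces dd to flush the data to the backing store before reporting, so those figures measure sustained write throughput rather than page-cache speed. At roughly 14–26 MB/s against the local disk's 1.2–1.4 GB/s, the mounted bucket is better suited to backups and static files than to I/O-intensive workloads.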
