Here are some thoughts of mine.
Today, I have to copy files from one S3 bucket to another S3 bucket sitting in a separate AWS account.
Initially, I was thinking of using an S3 client (Transmit or Cyberduck) to download the files first and manually upload them again to the other S3 bucket. However, this approach consumes a lot of bandwidth and is really slow if you have a lot of files in your S3 bucket.
After a bit of research, I found that you can easily copy files between two S3 buckets using either s4cmd or the AWS CLI.
s3cmd or s4cmd
Run this command to install s4cmd:
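The install command isn't shown above; s4cmd is distributed on PyPI, so it is most likely:

```shell
pip install s4cmd
```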
After you have finished setting up the AWS credentials, you can start the copying process.
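The copy step might look like this (bucket names are placeholders, replace them with your own):

```shell
# Recursively copy everything from the source bucket to the
# target bucket in the other account
s4cmd cp -r s3://source-bucket/ s3://target-bucket/
```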
Install the CLI through pip:
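The AWS CLI is also on PyPI:

```shell
pip install awscli
```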
And configure it:
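The standard configuration command prompts for your credentials:

```shell
# Prompts for Access Key ID, Secret Access Key, default region and output format
aws configure
```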
The usage is quite similar to s4cmd, see below:
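For example, with hypothetical bucket names, `aws s3 sync` copies every object across:

```shell
# Recursively copy all objects to the bucket in the other account;
# --acl grants full control of the objects to the target bucket's owner
aws s3 sync s3://source-bucket s3://target-bucket --acl bucket-owner-full-control
```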
I prefer using the AWS CLI because it has more options and official support. The AWS CLI has built-in support for specifying the ACL and permissions of the objects.
Since my target bucket is sitting in a separate AWS account, I had to set an extra permission allowing everyone to upload and delete files in my target bucket.
If you want to follow this approach, make sure to remove that permission after you have finished copying.
The other option is to set the S3 bucket policy manually, see this link: http://serverfault.com/questions/556077/what-is-causing-access-denied-when-using-the-aws-cli-to-download-from-amazon-s3
Today, I was trying to create a backup of a production database. The problem is that we have two different versions of PostgreSQL running in production, and these databases can only be accessed from the front end (FE) servers. We have an older version of the PostgreSQL client installed on all FE servers, which means I can't use it to run pg_dump.
SSH Tunnel to the rescue
One solution to this problem is to create an SSH tunnel. Since I have the latest version of the PostgreSQL client installed on my machine, I can run pg_dump locally and connect to the database through the SSH tunnel.
Here is the command I used to create the SSH tunnel:
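The original command is missing above, so this is a hedged sketch; the hostnames and user are placeholders, and I'm assuming the database listens on the default port 5432:

```shell
# Forward local port 9000 through the FE server to the database host's
# PostgreSQL port. -N means "don't run a remote command, just forward".
ssh -N -L 9000:db-server.internal:5432 user@fe-server.example.com
```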
After this, you can check whether the SSH tunnel was successfully created by running this and looking for port 9000:
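One way to check, assuming lsof or netstat is available on your machine:

```shell
# The local end of the tunnel should show up as a listener on port 9000
lsof -i :9000
# or
netstat -an | grep 9000
```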
If you confirm that the SSH tunnel is working, you can run psql to connect or pg_dump to back up your database:
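For example (dbuser and dbname are placeholders for your own credentials):

```shell
# Connect interactively through the tunnel
psql -h localhost -p 9000 -U dbuser dbname

# Or take the backup with the newer local pg_dump
pg_dump -h localhost -p 9000 -U dbuser dbname > backup.sql
```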
At the moment, I am working on a Ruby on Rails project that uses Rails Engines (you can read more about Rails Engines here: http://guides.rubyonrails.org/engines.html). In this post, I'll share my tips and tricks on how to configure Vim to work with Rails Engines.
I am using NERDTree bookmarks to quickly jump between different engines. If you are using NERDTree, you can create a bookmark by putting your cursor on one of the Rails Engines directories and using the command below:
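The command itself is missing above; NERDTree's bookmark command takes a name of your choosing:

```vim
:Bookmark my_engine
```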
After you have created the bookmark, you can see the bookmarks list by pressing B inside the NERDTree window. See the screenshot below:
I also added these two options to my vimrc file.
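The exact lines are missing above; a plausible pair of settings, given the description that follows (the values are my assumption):

```vim
" Show the bookmark table when NERDTree opens
let NERDTreeShowBookmarks=1
" Change Vim's working directory whenever the tree root changes
let NERDTreeChDirMode=2
```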
NERDTreeChDirMode changes the current working directory of Vim to your bookmark directory. This also enables my favourite rails.vim feature, which is opening the alternate file.
CtrlP Working Path Mode
It is normal for Rails Engines to share similar directory structures and filenames. However, this creates a problem when you want to search for a file using the CtrlP plugin. Combined with NERDTreeChDirMode, you can tell CtrlP to search only in the current working directory.
Add this option to your vimrc file to enable this feature:
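The option is missing above; the relevant CtrlP setting is g:ctrlp_working_path_mode, and the value here is my assumption:

```vim
" Keep CtrlP rooted at Vim's current working directory
let g:ctrlp_working_path_mode = 'a'
```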
That’s it for now, I’ll update this post if I find a better workflow or configuration. If you are interested, you can check my full vimrc file here: https://github.com/rudylee/dotfiles/blob/master/vimrc
There are a few known limitations when using Vagrant on a Windows machine. One of them is the lack of symbolic link support in synced folders.
Symbolic links are used heavily by NPM to create shortcuts for libraries. I posted more details about this here: http://blog.rudylee.com/2013/10/24/fix-npm-symlink-problem-in-vagrant/
Most of the time, you can get away with the 'npm install --no-bin-links' solution. However, you need a more robust solution if you are using complex tools such as Grunt or Yeoman.
In this post, I’ll show you the proper way to add symbolic links support to your Vagrant machine.
First, you need to add this code snippet to your Vagrantfile:
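The snippet is missing above; the usual way to enable symlink creation on VirtualBox synced folders looks like this (the `v-root` key assumes the default synced folder, adjust if yours is named differently):

```ruby
# Vagrantfile: allow symlink creation inside the default synced folder
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |vb|
    vb.customize ["setextradata", :id,
                  "VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"]
  end
end
```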
VirtualBox disables symbolic links for security reasons. To bypass this restriction, you need to boot the Vagrant machine in Administrator mode.
You can do this by simply right-clicking on your Command Prompt or Git Bash icon and clicking 'Run as Administrator'. See the picture below if you can't find it.
After that, boot the Vagrant machine normally with the 'vagrant up' command. Wait until the machine boots, SSH into it and try to create a symbolic link in the synced folder.
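A quick way to test from inside the VM, assuming the synced folder is at the default /vagrant path:

```shell
# If symlinks are enabled, this succeeds; otherwise ln reports
# "Protocol error" or "Operation not permitted"
cd /vagrant
ln -s Vagrantfile symlink-test && ls -la symlink-test
```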
File path 255 character limit
Another annoying problem you might encounter is the file path character limit. This happens quite often if you are using a node module with a long name. You can easily solve it by following these steps:
Create a 'node_modules' folder in your home folder
Add a symbolic link inside your project folder pointing to the 'node_modules' folder you just created
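The steps above can be sketched as follows (I'm assuming the project lives at ~/project; use your real project path):

```shell
# 1. Central node_modules in the home directory (outside the synced folder)
mkdir -p "$HOME/node_modules"

# 2. Point the project's node_modules at it
mkdir -p "$HOME/project"   # illustration only -- your project already exists
ln -sfn "$HOME/node_modules" "$HOME/project/node_modules"
```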
This solution ensures that all the node modules are stored inside the home directory instead of the synced folder.
Basic HTTP authentication is one simple way to limit public access to your website prior to launch.
The first thing you need is a .htaccess file which contains all the configuration. The second is a .htpasswd file containing usernames and passwords. You can use this website to generate the .htpasswd file for you: http://www.htaccesstools.com/htpasswd-generator/
In the sample below, I am trying to enable HTTP authentication only on a certain domain. On the first line, I set an environment variable if the domain name is equal to "www.bundabergfestival.com.au". On line 7, I tell the .htaccess file to deny any access by using the live_uri variable. I hope that explanation is pretty straightforward.
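The sample itself is missing above, so here is a hedged reconstruction of the idea just described; the realm name and the .htpasswd path are placeholders:

```apache
SetEnvIfNoCase Host ^www\.bundabergfestival\.com\.au$ live_uri

AuthType Basic
AuthName "Restricted Area"
AuthUserFile /path/to/.htpasswd
Require valid-user
Deny from env=live_uri
Order Deny,Allow
Satisfy Any
```

With `Satisfy Any`, requests to any other domain pass through untouched, while requests matching live_uri are denied unless the visitor authenticates.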
I really enjoyed the course and definitely learned something new from it. It covers the basic concepts of directives, services and dependency injection. I didn't understand those features the first time I learned AngularJS. When I was starting out with AngularJS, I tended to copy and paste code without understanding the meaning behind it, which caused confusion when I tried to learn more about the framework.
Code School also released another screencast on how to build an AngularJS app from scratch. I suggest you check that one out as well so you can apply the knowledge from the course to build a real application. However, you have to become a member to get access to the screencast. There are also other websites that provide AngularJS videos, such as http://www.egghead.io and http://www.thinkster.io
At Captiv8, we are using Amazon AWS to host most of our PHP projects. We rely heavily on Elastic Beanstalk to help us set up the PHP environment, database and load balancer. On top of that, we also manage our own server image based on the Amazon AMI. In this image, we install additional software and packages that we need for our application. However, this approach has a drawback: it is difficult to maintain the image and track changes. Every time you need to update the image, you have to create a new server, install the new software and export it as a new image. This leaves you with a bunch of different images, and it is hard to tell what has changed inside each one.
In order to solve these problems, I decided to find a way to automate the process. My first attempt was to use Chef to provision the Elastic Beanstalk environment. I have been using Chef for a while to provision my Vagrant machines. It is powerful and more convenient compared with bash scripts. Since I was already familiar with Chef, I started looking at tutorials on how to use it with Elastic Beanstalk. Most of the tutorials I found don't provide an easy way to integrate Chef with Elastic Beanstalk. One of them mentions using AWS OpsWorks with Chef, but I think that is overkill for the time being. So, I ditched Chef and started looking for another solution.
ebextensions is another solution I found after checking the official Elastic Beanstalk documentation. With this solution, you create a .ebextensions folder inside your project and add a file defining the packages you want to install into the environment. Elastic Beanstalk automatically runs the script every time you deploy a new version of the application. Aside from that, you can also tell ebextensions to execute a shell script on the instance or change the permissions of a file. You can read more details about ebextensions here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
Here is an example of my ebextensions config file:
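The original file is not reproduced here; a minimal sketch of the general shape, assuming a setup script shipped alongside it in .ebextensions, might be:

```yaml
# .ebextensions/01-custom.config -- hypothetical example, not the original file
packages:
  yum:
    git: []
    htop: []

container_commands:
  01_run_setup_script:
    command: "bash .ebextensions/setup.sh"
```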
And this is an example of my bash script:
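The original script isn't shown either; a short hedged sketch of what such a provisioning script can do (package names and paths are placeholders):

```shell
#!/bin/bash
# Hypothetical provisioning script run by ebextensions -- not the original.
set -e

# Install extra packages missing from the base AMI
yum install -y htop

# Copy a config template shipped in .ebextensions into place
cp .ebextensions/templates/php.ini /etc/php.ini
```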
Inside my ebextensions config file, I call the shell script which installs additional software. The benefit of using a shell script is that you have more options and it is much easier to customise the software. Since the .ebextensions folder is copied to the instance, you can tell the shell script to copy a template config file you have prepared beforehand. I hope you find this blog post useful.
Around October 2013, I decided to move from the city to Gladesville, which is located 9 kilometres north-west of the Sydney CBD. It wasn't an easy decision because I need to spend about 2 hours commuting every day.
For the first couple of months, I was fine with the long commute. I could distract myself with my phone or sleep on the bus. However, I started to feel unproductive and could feel it affecting my concentration throughout the day. So, I did some research on the Internet about productive uses of commuting time. Most of the articles I found suggested listening to either podcasts or audiobooks.
I decided to give audiobooks a try and downloaded Eat That Frog by Brian Tracy. If you are interested, you can easily find the audio version of this book on YouTube. At first, I was a little bit skeptical about the results, but it turned out to be really helpful. I can feel that my life is back on track again. I started to use Trello to keep my list of tasks. Once in a while, I update my goals and create separate boards for each project.
Since then, I have been listening to several other audiobooks, such as 168 Hours by Laura Vanderkam and Getting Things Done by David Allen. Both of them share some basic principles with Eat That Frog. I prefer Getting Things Done because it's more straightforward and focuses on actions you can perform to improve your productivity. That said, the book is quite hard to understand, so you need to listen to it more than once to get a good grasp of the concepts.
It is common to have an 'active' or 'current' state on website navigation. This helps visitors know which page they have selected.
This solution is based on a Stack Overflow question which I can no longer find. First, I'll create a method inside the Rails application_helper.rb file. I'll call this method cp(). Here is the syntax:
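The snippet itself is missing above; a hedged reconstruction based on the description (the helper name and the two Rails methods mentioned are from the text, the 'active' return value is my assumption):

```ruby
# app/helpers/application_helper.rb
module ApplicationHelper
  # Returns 'active' when the given path points at the current page
  def cp(path)
    "active" if current_page?(Rails.application.routes.recognize_path(path))
  end
end
```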
The method uses current_page? and Rails.application.routes.recognize_path to get information about the current page.
After that, we can use it in our view. Here is an example:
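The view code is also missing; usage in an ERB template might look like this (about_path is a hypothetical route helper standing in for your own routes):

```erb
<ul class="nav">
  <li class="<%= cp(root_path) %>">
    <%= link_to "Home", root_path %>
  </li>
  <li class="<%= cp(about_path) %>">
    <%= link_to "About", about_path %>
  </li>
</ul>
```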
I hope that helps.