If you have hundreds of gigabytes or even terabytes of data on your local network at home, it’s probably all stored on your computer, external hard drive, or NAS (network attached storage) device. Backing up your data is extremely important, but keeping it all in one place is a bad idea.
I realized this myself when I noticed that my local NAS holds more than 2TB of photos, videos, backups, and so on. It has four hard drives in a redundant configuration, of course, so if one fails my data won't be lost. However, if my house burns down or floods, everything will be lost along with the NAS. So I decided to back up my data to the cloud.
I checked out Dropbox, SkyDrive, Google Drive, CrashPlan, Amazon S3, and Glacier before finally settling on Amazon S3. Why Amazon? Well, they have a neat service that lets you ship them an external hard drive up to 16TB in size, and they load the data straight onto their servers, bypassing the huge problem of trying to upload all that data over a slow internet connection.
With AT&T in my area, I get a whopping 1.4 MB/s upload speed. It would take ages to upload the 2.5TB of data I store on the NAS. With Amazon Import/Export, you can pay an $80 service fee and have them load all of that data for you in about a day. I ended up recording a video tutorial that walks you through the entire process, from signing up for Amazon Web Services to packing your hard drive and shipping it to Amazon.
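To put that in perspective, here is a quick back-of-the-envelope calculation of how long 2.5TB would take at that speed (a sketch assuming a perfectly sustained transfer; real-world throughput with interruptions and throttling is usually much worse):

```python
# Rough estimate of pushing 2.5 TB through a 1.4 MB/s upstream link,
# assuming the transfer runs nonstop at full speed.
TB = 1000 ** 4          # decimal terabyte, in bytes
MB = 1000 ** 2          # decimal megabyte, in bytes

data_bytes = 2.5 * TB
upload_rate = 1.4 * MB  # bytes per second

seconds = data_bytes / upload_rate
days = seconds / 86400
print(f"{days:.1f} days of uninterrupted uploading")
```

Even under those ideal conditions that is about three weeks of nonstop uploading; in practice it can easily stretch into months, which is exactly the problem Import/Export solves.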
Here is the full text of the video:
Hello. This is Aseem Kishore from Online Tech Tips. Today I'm going to do something new: a video tutorial on the Amazon Web Services Import/Export feature. So what is Import/Export? Basically, it is a way to move large amounts of data into an Amazon S3 bucket or into Glacier storage. Amazon S3 and Glacier are essentially the two storage options you have for backing up and archiving your data with Amazon. So why would you use this service from Amazon?
Well, it basically allows you to move large amounts of data into the cloud very quickly. If you're like me, you may have hundreds of gigabytes of photos and videos stored locally on your computer or on an external hard drive. Trying to upload 100 or 500 gigabytes, or even terabytes, of data to the cloud will take you weeks if not months over a slow upload connection. Instead, you can copy that data to an external hard drive up to 16 terabytes in size and just ship it to Amazon, where they will take it to their data center and load it directly into your bucket or vault, and then you can access it over the Internet.
So, to get started, the first thing you need to do is create an Amazon Web Services account. To do this, go to aws.amazon.com and click the Sign Up button. Go ahead and enter your email address, then select "I am a new user" if you don't already have an Amazon account. If you do, select "I am a returning user" and you can use your current Amazon account to sign up for Amazon Web Services.
Once you have created an Amazon Web Services account, you will need to download the Import/Export Web Service Tool. The tool itself is very easy to use; it just requires a little configuration, which I'm going to explain. You can see the download link on the screen, and I'll also add it to the description at the bottom of this video. So download it and then extract it to a directory on your computer.
Now that you've downloaded the tool and unpacked it, you should have a directory like this one. At this point, we need to edit the AWSCredentials.properties file. It contains two values: an Access Key ID and a Secret Key. Basically, these are the two values Amazon uses to link the tool to your account. You can get both values from your Amazon Web Services account by going to the following URL: aws.amazon.com/securitycredentials. On the Security Credentials page, click Access Keys.
Now this part is a little confusing. If you have used Amazon Web Services and generated keys in the past, you won't be able to see your secret key here. Amazon recently changed this interface, and to see your existing secret keys you have to click the Security Credentials link, which takes you to the old legacy page.
If you've just created a new account, you can create a new root key; the button will be active. At that point you'll be given an Access Key ID and a Secret Key, so you'll have both values. And this is the legacy security page, where you can get to your secret keys if you have already created an Access Key ID for Amazon Web Services. As you can see here, I have two access keys, and if I want to see a secret key, I can click the Show button and then copy those two values into the AWSCredentials.properties file I showed you before. So go ahead and paste the Access Key ID here and the Secret Key here.
Now, if the Access Key ID and the Secret Key confuse you, that's fine. You don't really need to know what they are or care about them at all. All you have to do is sign in, get the values, and copy and paste them into this file.
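When you're done, the credentials file should look something like this (the key values below are AWS's documented example placeholders, not real keys, and the exact property names come from the sample file that ships with the tool, so treat this as a sketch):

```properties
# AWSCredentials.properties - links the Import/Export tool to your account
accessKeyId=AKIAIOSFODNN7EXAMPLE
secretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```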
The next thing we're going to do is create an import job. The next two parts are the two most difficult parts of the whole procedure. To create an import job for Amazon S3, we're going to create a manifest file. This manifest file basically contains some information about your device, such as where you want the data stored and where the device should be returned.
The nice thing is that we don't need to create this manifest file from scratch. It has already been created for us; we just need to fill it in. What you need to do is go to the directory where the Import/Export tool is located and open the Examples folder. That is where you'll open the S3 import manifest. As you can see, I have already filled out the information for my import job, so let's take a look at it in a little more detail.
As you can see, the first thing you need to do is enter your Access Key ID again. You need to get rid of the brackets and just paste the value right after the colon. The next thing to enter is the bucket name. You will need to create a bucket, which I'm going to show you how to do shortly, but for now go ahead and enter whatever name you want for the place where your data will be stored. So if you create a bucket named "Backup", then everything on your device, any folders or anything in it, will live under that bucket name.
The next thing you'll want to do is enter your device ID. Basically, this is a unique identifier for your external hard drive. It could be the serial number on the back of the drive. If your drive doesn't have a serial number, you can simply make up your own ID. Just write it on something like a sticker that you can attach to the device, and then enter that same value here. The value on the device and the value in this file just have to match. Erase device is already set to No, so leave that. You can leave the next one too. Service level is standard; you can leave it as well. And for the return address, fill in your own address like I did here. The sample file has several optional fields; if you are not going to use them, go ahead and delete those lines.
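Putting that all together, a filled-out manifest looks roughly like this. The address, bucket, and device ID below are placeholders, and the exact key names may differ slightly between tool versions, so use the sample file in the Examples folder as your authoritative template:

```yaml
manifestVersion: 2.0
accessKeyId: AKIAIOSFODNN7EXAMPLE
bucket: my-nas-backup
deviceId: WD1234567890        # must match the label on the drive
eraseDevice: no
serviceLevel: standard
returnAddress:
    name: John Doe
    street1: 123 Main Street
    city: Anytown
    stateOrProvince: TX
    postalCode: "78701"
    country: USA
    phoneNumber: 512-555-0100
```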
So, after filling in the manifest file, the next thing to do is save it to the appropriate directory. To do this, click File, then Save As, and navigate back to the Import/Export Web Service Tool directory. This is also the location of the .properties credentials file we filled out earlier. Here, go ahead and name your file "MyImportManifest.txt". Since the Save As type is already set to txt, you don't need to type the extension in the file name. Go ahead and click Save.
Now that we've edited the AWSCredentials.properties file and added the MyImportManifest.txt file, we can go ahead and create a bucket in Amazon S3. This is very easy to do. Go to aws.amazon.com, click My Account/Console, and then AWS Management Console. After logging in, you should see a screen like this with all the different Amazon web services. Right now, all we care about is Amazon S3, which is here at the bottom left. Click on it and it will load the S3 console. As you can see, it's nothing special, just buckets. I have two buckets; this one is a backup of my Synology NAS, which is a network attached storage device.
What you need to do is click Create Bucket, and then you're going to give your bucket a name. You can also select a different region, but I suggest you just keep the region it fills in for you automatically. A bucket name can contain dots, and it must be unique across the entire region where it is stored. So if someone already has that bucket name, you'll get an error. For example, if I type nasbackup and click Create, it throws an error that the requested bucket name is not available. In that case you can use dots, so you can add a dot and whatever else you want and click Create, and if it's unique, it will go ahead and create that bucket. So go ahead and create the bucket where all the data on your external hard drive is going to be stored.
At this point, you might be wondering what else needs to be done, so let's recap what we have already done. We signed up for the AWS service. We downloaded and extracted the tool. We edited the credentials file and added our keys. We created a manifest file, saved it as MyImportManifest.txt in the same directory as the credentials file, and created a bucket on Amazon S3. There are just a couple more things left to do.
The next thing we need to do is create a job request using the Java command line tool. This is a bit technical, and it's probably the most technical thing you'll have to do, but it's actually not that hard. To create the job request, we have to run a Java command on the command line, and for that we need the Java Development Kit installed. This is different from the Java runtime that is usually installed on most computers.
To get it, go to Google and just search for Java SE, which is Java Standard Edition. Click the first link and you will be taken to this page. Scroll down and you'll see three options: the JDK, the Server JRE, and the JRE. We don't need to worry about the last two; we are going to download the JDK. On the next page, click "Accept License Agreement", and you can then download the file that matches your system. In my case, I downloaded the executable for 64-bit Windows.
Now that you have installed the JDK, we can go ahead and run the Java command, which you can see here in the documentation I have highlighted. By the way, if you need to get to this documentation, the easiest way is to go to Google and search for "AWS import export docs". Then click Create Import Job, then Create Your First Amazon S3 Import Job, and you will be taken to this page.
We can now run the command from the command line. To open it, press Start, type CMD, and press Enter. Once we have the command prompt, we need to navigate to the directory where the Amazon Import/Export tool is located. In our case, it is under Downloads, in a folder called Import Export Web Service Tool. To navigate directories on the command line, you type "cd" followed by the directory name, so I type "cd downloads" and then "cd" again with the name of the Import Export Web Service Tool directory. Now that I'm in that directory, I'm just going to copy the command from the documentation and paste it into the command line.
You may have noticed that in the command we just copied and pasted, the manifest file name is MyS3ImportManifest.txt. I think this is a documentation issue, because when I tried to run it that way, I got an error saying that the file should be called MyImportManifest.txt. Just move your cursor and delete the S3 part, and then you can run the command. I'm not going to run it right now because I've already run it before, but when you press Enter you should get something like this: the job created message, the job ID, the AWS shipping address, and the contents of the signature file.
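For reference, the corrected command looks roughly like this (the jar file name and version number may vary slightly depending on the version of the tool you downloaded, so check the file under lib in your own copy; the trailing dot tells the tool to write its output into the current directory):

```
java -jar lib/AWSImportExportWebServiceTool-1.0.jar CreateJob Import MyImportManifest.txt .
```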
The signature file is a file named SIGNATURE that gets created in the root of the Import/Export Web Service Tool directory when you run the actual command. If all goes well, you take this file and copy it to the root of your hard drive.
We're almost at the end. The next thing we need to do is copy that SIGNATURE file to the root of the hard drive. You will find it in the Import/Export Web Service Tool directory after you run the Java command.
The next-to-last step is to print the packing slip and fill it out. This is what the packing slip looks like; it's a very simple document. You fill in the date, your AWS account email, your contact name and phone number, the job ID, and the ID you provided for your device. Again, you can find this document in the documentation.
And finally, the last step: just pack your hard drive and ship it to Amazon. There are a few small things to watch out for. First, you need to include the power supply, all power cables, and all interface cables, so if it's a USB 2.0, 3.0, or eSATA drive, you need to include the USB or eSATA cable. If you don't, they will ship it back to you. You will also need to fill out the packing slip I mentioned earlier and put it in the box. Finally, send the package to the address you received in the response from the CreateJob command we ran.
There are two other things to note when shipping. First, make sure the job ID is listed on the shipping label. If it isn't, they will send the package back, so double-check that the job ID is on the label. Second, you should also fill in a return shipping address on the label. This is different from the return address we specified in the manifest file. If they cannot process your hard drive for any reason, they will return it to the address shown on the shipping label. If they can process your hard drive and transfer all of the data, they will return the drive to the return address you provided in the manifest. So it is important to include a return address on the label as well. You can choose any carrier; I chose UPS. It's good to have a tracking number, and they can handle it all for you without a problem.
That's all. It is a few steps, and it will take a little time the first time, but after that it's pretty fast, and it's a great way to get a lot of data into the cloud. Amazon's storage is also cheap. So if you have a ton of data to store and want to back it up somewhere other than your home or an external hard drive, Amazon S3 is a great option.
I hope you enjoyed this tutorial. Please come back and visit.