AWS Import/Export gets data to and from the Amazon cloud computing platform

Internet transfer rates can be a gating factor for some cloud deployments. As a result, Amazon Web Services has come to include an Import/Export service to speed the transfer of big data sets.

The details of cloud computing continue to develop in odd and sometimes unexpected ways. Interesting problems in the cloud seem to call for interesting solutions in turn. Count Amazon's AWS Import/Export service as just such a tool. As it turns out, and it should come as no surprise, Internet transfer rates can be a gating factor for some cloud deployments. As a result, Amazon Web Services has come to include an Import/Export service to speed the transfer of big data sets.

Import/Export moves data onto and off of storage devices that you ship to Amazon. According to Amazon, a disk drive in the mail can outpace Internet transfer in speed and save money versus connectivity upgrades. One can imagine the approach plays best in prototyping or one-off scenarios.

One of the prerequisites for working with this AWS service is a grounding in YAML, which does not stand for "Yet Another Markup Language" but rather for "YAML Ain't Markup Language." YAML is a data serialization standard intended to work with any programming language. An AWS account, of course, is required as well.
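The YAML in question takes the form of a manifest file that tells Amazon what to do with the shipped device. The sketch below is illustrative only: the field names and values are modeled on examples in the AWS Import/Export developer guide of the period, and every value shown (bucket name, device ID, address) is hypothetical. Check the current guide before relying on any of it.

```yaml
# Illustrative AWS Import/Export manifest. Field names are assumptions
# modeled on the developer guide of the era; verify before use.
manifestVersion: 2.0
bucket: my-example-bucket        # hypothetical destination S3 bucket
deviceId: "49382"                # serial number of the shipped device
eraseDevice: yes                 # wipe the device once the import completes
returnAddress:
  name: Jane Doe
  street1: 123 Main Street
  city: Seattle
  stateOrProvince: WA
  postalCode: "98101"
  country: USA
  phoneNumber: 206-555-0100
```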

By some measures, AWS Import/Export would seem a return to the good old days of batch processing. Data uploads begin the day after the storage device arrives. Then, when the job is done, the device is returned. It arrives, one envisions, much as a book arrives from the vaunted Amazon bookstore: via UPS!

Underlying AWS Import/Export is Amazon S3. This storage service treats entities as objects that consist of data and metadata. Keys are used to uniquely identify objects within the conceptual containers used in Amazon S3, which are called 'buckets.' The company describes a key as ...

... the unique identifier for an object within a bucket. Every object in a bucket has exactly one key. Since a bucket and key together uniquely identify each object, Amazon S3 can be thought of as a basic data map between "bucket + key" and the object itself. Every object in Amazon S3 can be uniquely addressed through the combination of the service endpoint, bucket name, and key; in Amazon's example, doc is the name of the bucket and 2006-03-01/AmazonS3.wsdl is the key.
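The "bucket + key" addressing described above can be sketched in a few lines. The bucket name ("doc") and key ("2006-03-01/AmazonS3.wsdl") come from the quoted documentation; the two URL styles shown (path-style and virtual-hosted) are assumptions about how endpoint, bucket, and key combine, not a claim about which form Amazon's example used.

```python
# Illustrative sketch of S3 "bucket + key" addressing. The two URL
# styles below are assumptions; S3 has historically supported both.

def path_style_url(bucket: str, key: str, endpoint: str = "s3.amazonaws.com") -> str:
    """Path-style addressing: the bucket appears in the URL path."""
    return f"http://{endpoint}/{bucket}/{key}"

def virtual_hosted_url(bucket: str, key: str, endpoint: str = "s3.amazonaws.com") -> str:
    """Virtual-hosted addressing: the bucket appears in the hostname."""
    return f"http://{bucket}.{endpoint}/{key}"

print(path_style_url("doc", "2006-03-01/AmazonS3.wsdl"))
# http://s3.amazonaws.com/doc/2006-03-01/AmazonS3.wsdl
print(virtual_hosted_url("doc", "2006-03-01/AmazonS3.wsdl"))
# http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl
```

Either way, the bucket and key together are what make the object uniquely addressable.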

Who knew that cloud computing would bring out the familiar call-response of yore?

"How's the job going?"

"It's in the mail!"

Related Amazon Cloud Computing material

S3 AWS Import/Export information - PDF
Amazon S3 AWS Import/Export Guide and API reference - PDF

This was first published in June 2010
