AWS S3 Offers a Storage Option for Rarely Accessed Data

Amazon Web Services (AWS) recently announced improvements to its Simple Storage Service (S3), including an expansion of its Intelligent-Tiering option to cover archival data.

AWS launched S3 Intelligent-Tiering in late 2018 to give S3 users a more cost-effective storage option for data with unpredictable access requirements. The option originally comprised two tiers: one for data that is accessed frequently and another for data that is accessed less often. The service automatically moves objects between tiers based on how often users request them. If an object isn't accessed for 30 days, it is moved to the Infrequent Access tier; once it is accessed again, it is moved back to the Frequent Access tier. This process optimizes storage costs for data that is accessed only occasionally, irregularly, or both.

This week AWS announced an expansion of S3 Intelligent-Tiering: two new tiers, Archive Access and Deep Archive Access, for data that is rarely accessed. The first is for data that hasn't been accessed in the last 90 days, the second for data that hasn't been accessed within 180 days. As before, S3 Intelligent-Tiering moves data from one tier of the hierarchy to another as needed.

Marcia Villalba, an AWS senior developer advocate, wrote a blog post about the cost benefits of S3 Intelligent-Tiering. With S3 you pay monthly for storage, requests, and data transfer; with Intelligent-Tiering you also pay a small monthly per-object fee for monitoring and automation. S3 Intelligent-Tiering charges no retrieval fee and nothing for moving data between tiers. Objects in the Frequent Access tier are billed at the S3 Standard rate, objects in the Infrequent Access tier at the S3 Standard-Infrequent Access rate, objects in the Archive Access tier at the S3 Glacier rate, and objects in the Deep Archive Access tier at the S3 Glacier Deep Archive rate.
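To make that cost model concrete, here is a back-of-the-envelope sketch. Every price in it is an illustrative assumption rather than a rate quoted in the announcement, and the data distribution and object count are hypothetical; consult the S3 pricing page for actual figures.

```python
# A back-of-the-envelope sketch of Intelligent-Tiering's cost model.
# All prices below are illustrative assumptions, not quoted rates.
PRICE_PER_GB = {
    "frequent": 0.023,        # assumed: billed like S3 Standard
    "infrequent": 0.0125,     # assumed: billed like S3 Standard-IA
    "archive": 0.004,         # assumed: billed like S3 Glacier
    "deep_archive": 0.00099,  # assumed: billed like S3 Glacier Deep Archive
}
MONITORING_PER_1000 = 0.0025  # assumed per-object monitoring/automation fee

# Hypothetical distribution of roughly 1 TB across the four tiers (in GB).
gb_in_tier = {"frequent": 100, "infrequent": 300, "archive": 400, "deep_archive": 224}
object_count = 50_000  # hypothetical

storage = sum(gb_in_tier[tier] * PRICE_PER_GB[tier] for tier in gb_in_tier)
monitoring = (object_count / 1000) * MONITORING_PER_1000

# No retrieval fees and no inter-tier movement charges are added.
print(f"storage: ${storage:.2f}/month, monitoring: ${monitoring:.2f}/month")
```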
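Opting a bucket's objects into the new archive tiers is done with an Intelligent-Tiering bucket configuration. The sketch below uses boto3's put_bucket_intelligent_tiering_configuration; the bucket name, configuration ID, and object key are hypothetical, and the 90- and 180-day thresholds match the minimums described above.

```python
import boto3

s3 = boto3.client("s3")

# Opt objects in a (hypothetical) bucket into the two new archive tiers.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket="example-bucket",       # hypothetical bucket name
    Id="archive-config",           # hypothetical configuration ID
    IntelligentTieringConfiguration={
        "Id": "archive-config",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)

# New objects enter Intelligent-Tiering by setting the storage class at
# upload time; the service's monitoring then moves them between tiers.
s3.put_object(
    Bucket="example-bucket",
    Key="reports/archive-example.csv",  # hypothetical key
    Body=b"example data",
    StorageClass="INTELLIGENT_TIERING",
)
```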
AWS also announced several other S3-related enhancements this week:
  • The Object Ownership feature, which was launched last month, now supports AWS CloudFormation.
  • Amazon S3 Replication now provides more detailed visibility into the status of object replication, with new metrics and notification capabilities (see the sketch after this list).
  • Amazon S3 Replication can now replicate delete markers (also shown in the sketch below).
  • With AWS DataSync, users can automate data transfers to and from AWS storage services, including S3.
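As an illustration of the two replication items above, the sketch below defines a replication rule that enables both replication metrics and delete marker replication. The bucket names, IAM role ARN, and rule ID are hypothetical, and this is a minimal configuration rather than a complete replication setup.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",  # hypothetical
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",  # hypothetical
        "Rules": [
            {
                "ID": "replicate-with-metrics",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # replicate everything
                # Propagate delete markers to the destination bucket.
                "DeleteMarkerReplication": {"Status": "Enabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::destination-bucket",  # hypothetical
                    # Metrics surface replication latency and pending
                    # bytes; the threshold drives event notifications
                    # for objects that take longer to replicate.
                    "Metrics": {
                        "Status": "Enabled",
                        "EventThreshold": {"Minutes": 15},
                    },
                    "ReplicationTime": {
                        "Status": "Enabled",
                        "Time": {"Minutes": 15},
                    },
                },
            }
        ],
    },
)
```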
Previous post: AWS Launches Cloud9 Web-Based IDE

A year after acquiring Cloud9, Amazon Web Services (AWS) has released a new browser-based IDE with the same name. The company presented the AWS Cloud9 IDE on Thursday at its re:Invent conference in Las Vegas. The Cloud9 IDE is based in part on technology from AWS' c9.io acquisition. It allows developers to write, run, and debug code through a web-based interface. Its core component is the Ace Editor, a code-editing window that supports large files without lag, custom run configurations, and more than 40 language modes. Cloud9 lets developers invite other IAM users into an environment, and it also supports serverless app development.

Cloud9 integrates seamlessly with AWS. Randall Hunt, Senior AWS Technical Evangelist, explained that developers can run Cloud9 in their AWS environments (paying only for compute and storage) or in a virtual private cloud (VPC). When running in AWS, the auto-hibernate function stops the instance soon after you finish using the IDE, which can save a lot of money compared with a permanently running developer desktop, and launching within a VPC gives the IDE secure access to your development resources. Cloud9 can also run on an existing instance or outside of AWS entirely: you can grant the service SSH access so it can create an environment on an external machine. Hunt noted that environments are provisioned automatically with secure access to AWS, so developers don't have to worry about copying credentials around. Cloud9 is available in the Northern Virginia, Ohio, Oregon, Ireland, and Singapore regions. More information is available here.
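The auto-hibernate behavior Hunt describes maps to a setting on the environment itself. As a hedged sketch, the boto3 call below creates an EC2-backed Cloud9 environment that stops its instance after 30 idle minutes; the environment name, instance type, subnet ID, and image are hypothetical choices for illustration.

```python
import boto3

cloud9 = boto3.client("cloud9")

response = cloud9.create_environment_ec2(
    name="demo-ide",                           # hypothetical name
    instanceType="t2.micro",                   # hypothetical instance type
    subnetId="subnet-0123456789abcdef0",       # hypothetical; places IDE in a VPC
    imageId="amazonlinux-2-x86_64",            # assumed base image identifier
    # Auto-hibernate: stop the backing instance after 30 idle minutes,
    # so you pay for compute only while the IDE is in use.
    automaticStopTimeMinutes=30,
)
print(response["environmentId"])
```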
Next post: AWS S3 Misconfiguration Exposes Personal Information of Nearly 200 Million Voters

Multiple security reports have recently highlighted the dangers of cloud computing misconfigurations, and the resulting vulnerabilities are now manifesting in the real world: personal information of nearly 200 million voters was exposed in an Amazon Web Services-hosted S3 account. Deep Root Analytics, a data firm working for the Republican National Committee (RNC), left the data exposed, and security firm UpGuard Inc. discovered it. "In total, the personal data of potentially all of America's registered voters was exposed," UpGuard stated in a post that was last updated yesterday.

Misconfigured cloud-based data stores have created many vulnerabilities and threats, such as the recent spate of ransomware attacks on MongoDB databases, Elasticsearch repositories, and other sources, and security firms hunting for such vulnerabilities have made the misconfigurations known. Chris Vickery, an UpGuard security analyst, discovered the exposed voter data while searching open cloud repositories: Deep Root Analytics' data repository was an AWS S3 bucket with no access protection. UpGuard stated that anyone with an Internet connection could have accessed Donald Trump's Republican data operation simply by navigating to a six-character Amazon subdomain: "dra-dw". It was not clear whether any attackers had downloaded the data for malicious purposes. The UpGuard report is just one of many such announcements.
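The root cause UpGuard describes, a bucket readable by anyone on the Internet, is detectable from the bucket's ACL. The audit sketch below (bucket name hypothetical) flags any grant to the AllUsers group, which is what makes a bucket world-readable.

```python
import boto3

# URI identifying the "everyone on the Internet" grantee group.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
acl = s3.get_bucket_acl(Bucket="example-audit-bucket")  # hypothetical name

# Flag any ACL grant that exposes the bucket to anonymous users.
for grant in acl["Grants"]:
    grantee = grant.get("Grantee", {})
    if grantee.get("Type") == "Group" and grantee.get("URI") == ALL_USERS:
        print(f"PUBLIC ACCESS: {grant['Permission']} granted to all users")
```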