ZEDD at re:Play 2021

re:Invent 2021 Recap

Last week was re:Invent. It was great to be back in Vegas, and I loathe Vegas. The crowds this year were smaller, which meant I could typically get into whatever session I wanted. However, it still took forever to get from the Wynn to the Venetian to Caesar’s to the Mirage (where I was staying). I probably walked as much last week as I did during the entire pandemic. The Expo floor was smaller, but it didn’t feel smaller.

Since Thanksgiving, there were another 125 announcements that hit the AWS What’s New page, which is where I’ve cribbed all of these from. Of the 125 announcements, 52 merit a mention in this post. Oof.

I’ll break these down the same as I did for my pre:Invent post, but I’ve added a few new categories for Serverless, Networking, and Cost Savings.

Watch out for these

Nothing too horrible in the announcements from a security perspective. We still haven’t seen Lambda URL Configs.

Announcing Amazon RDS Custom for SQL Server

Announced On: (Dec 1, 2021)

Amazon Relational Database Service (Amazon RDS) Custom is a managed database service for legacy, custom, and packaged applications that require access to the underlying OS and DB environment. Amazon RDS Custom is now available for the SQL Server database engine. Amazon RDS Custom for SQL Server automates setup, operation, and scaling of databases in the cloud while granting access to the database and underlying operating system to configure settings, install drivers, and enable native features to meet the dependent application’s requirements.

The key thing that jumped out at me here is that customers will have access to the underlying OS in RDS SQL Server. Customers also had access to the underlying OS in Azure’s Cosmos DB. Cloud Security Researchers - you know what you need to do!

Announcing AWS Data Exchange for APIs

Announced On: (Nov 29, 2021)

We are announcing the launch of AWS Data Exchange for APIs, a new feature that enables customers to find, subscribe to, and use third-party API products from providers on AWS Data Exchange. With AWS Data Exchange for APIs, customers can leverage AWS-native authentication and governance, explore consistent API documentation, and utilize supported AWS SDKs to make API calls. Data providers can now reach millions of AWS customers that consume API-based data by adding their APIs to the AWS Data Exchange catalog, and more easily manage subscriber authentication, entitlement, and billing.

Third-Party APIs jumped out at me.

Amazon ECR announces pull through cache repositories

Announced On: (Nov 29, 2021)

Amazon Elastic Container Registry (Amazon ECR) now supports pull through cache repositories, a new feature designed to automatically sync images from publicly accessible registries. With today’s release, customers now benefit from the download performance, security, and availability of Amazon ECR for the public images.

I think this was designed as a workaround for Docker Hub pull limits. Still, if you’re looking to control your software supply chain, this is one to keep an eye on.
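Once a cache rule is in place, the upstream image gets addressed through your private registry. A minimal sketch of the URI mapping (the account ID and `ecr-public` prefix are assumptions; the prefix must match whatever you set when creating the rule):

```python
def cached_image_uri(account_id: str, region: str, prefix: str, upstream_image: str) -> str:
    """Build the private-registry URI for an image pulled through an ECR
    pull-through cache rule. `prefix` must match the rule's repository
    prefix; ECR creates the cached repo on first pull."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{prefix}/{upstream_image}"

# Hypothetical example: docker pull this URI instead of hitting the
# public registry directly.
uri = cached_image_uri("111122223333", "us-east-1", "ecr-public",
                       "amazonlinux/amazonlinux:latest")
```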

AWS customers can now find, subscribe to, and deploy third-party applications that run in any Kubernetes environment from AWS Marketplace

Announced On: (Nov 29, 2021)

AWS customers can now find, subscribe to, and deploy third-party Kubernetes applications from AWS Marketplace on any Kubernetes cluster, in any environment. This extends the existing AWS Marketplace for Containers capabilities. Previously, customers could find and buy containerized third-party applications from AWS Marketplace, and deploy them in Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS). Customers can now deploy third-party Kubernetes applications to on-premises environments using Amazon Elastic Kubernetes Service Anywhere (EKS-Anywhere), or any customer self-managed Kubernetes clusters in on-premises environments or in EC2.

AWS Marketplace, otherwise known as Amazon Bypass-Corporate-Procurement-and-Vendor-Risk-Management-as-a-Service, now supports k8s. Because if you don’t do Kubernetes, you should be shopping for a casket, grandpa.

AWS Chatbot now supports management of AWS resources in Slack (Preview)

Announced On: (Nov 29, 2021)

Today, we are announcing the public preview of a new feature that allows you to use AWS Chatbot to manage AWS resources and remediate issues in AWS workloads by running AWS CLI commands from Slack channels. Previously, you could only monitor AWS resources and retrieve diagnostic information using AWS Chatbot.

/launch cryptominer is a new Slack command coming to a Workspace near you.

Securely manage your AWS IoT Greengrass edge devices using AWS Systems Manager

Announced On: (Nov 29, 2021)

Today, AWS IoT Greengrass announced a new integration with AWS Systems Manager that helps IT and edge device administrators to securely manage their edge devices, such as industrial equipment and industrial PCs, alongside their IT assets, such as EC2 instances, AWS Outposts, and on-premises servers.

ICS Security is hard. ICS Security is harder when access keys controlling your nuclear power plant control rods are committed to GitHub.

Amazon WorkSpaces introduces Amazon WorkSpaces Web

Announced On: (Nov 30, 2021)

Today we announced the General Availability of Amazon WorkSpaces Web. WorkSpaces Web is a new capability from our End User Computing suite - a low cost, fully managed WorkSpace built specifically to facilitate secure, web-based workloads. WorkSpaces Web makes it easy for customers to safely provide their employees with access to internal websites and SaaS web applications without the administrative burden of appliances or specialized client software. WorkSpaces Web provides simple policy tools tailored for user interactions, while offloading common tasks like capacity management, scaling, and maintaining browser images.

I kinda like this idea. Not sure how it differs from AppStream 2.0, but keep an eye on this, as it does export the pixels of your internal environment to the broader internet.

AWS Resource Access Manager enables support for global resource types

Announced On: (Dec 2, 2021)

AWS Resource Access Manager (RAM) now supports global resource types, enabling you to provision a global resource once and share that resource across your accounts. A global resource is a resource that can be used in multiple AWS Regions. For example, you can now create a RAM resource share with an AWS Cloud WAN core network, which is a managed network containing AWS and on-premises networks, and share it across your organization. As a result, you can use the Cloud WAN core network to centrally operate a unified global network across Regions and across accounts.

Public S3 Buckets are bad. Accidentally sharing your entire corporate WAN to all AWS Customers is badder.

Announcing preview of SQL Notebooks support in Amazon Redshift Query Editor V2

Announced On: (Dec 3, 2021)

Amazon Redshift simplifies organizing, documenting, and sharing of multiple SQL queries with support for SQL Notebooks (preview) in Amazon Redshift Query Editor V2. The new Notebook interface enables users such as data analysts and data scientists to author queries more easily, organizing multiple SQL queries and annotations on a single document. They can also collaborate with their team members by sharing Notebooks.

This one jumped out at me after reading the Cosmos DB write-up. They got into the control plane of Azure via Jupyter notebooks.

AWS announces Construct Hub general availability

Announced On: (Dec 2, 2021)

Today we are announcing the general availability of Construct Hub, a registry of open-source construct libraries for simplifying cloud development. Constructs are reusable building blocks of the Cloud Development Kits (CDKs). Discover and share CDK constructs for the AWS Cloud Development Kit (CDK), CDK for Kubernetes (CDK8s), CDK for Terraform (CDKtf), and other construct-based tools.

Open-source CDK building blocks. What could go wrong?

Amazon Textract announces specialized support for automated processing of identity documents

Announced On: (Dec 1, 2021)

Amazon Textract, a machine learning service that makes it easy to extract text and data from any document or image, now offers specialized support to extract data from identity documents, such as U.S. Driver Licenses and U.S. Passports. You can extract implied fields like name and address, as well as explicit fields like Date of Birth, Date of Issue, Date of Expiry, ID #, ID Type, and more in the form of key-value pairs. Until today, current OCR-based solutions were limited, and did not offer the ability to extract all the required fields accurately due to rich background images or the ability to recognize names and addresses, as well as the fields associated with them (e.g., Washington state ID lists home address with the key “8”), or support ID designs and formats that varied by country or state.

I’m not sure which AWS Customer asked for this, but I hope they’re not about to get a Bucket Negligence Award.

AWS Governance Related

Lots of good data backup and resiliency announcements this week. Ransomware is scary and AWS has been listening to panicked C-Levels this year.

Recover from accidental deletions of your snapshots using Recycle Bin

Announced On: (Nov 29, 2021)

Starting today, you can use Recycle Bin for EBS Snapshots to recover from accidental snapshot deletions to meet your business continuity needs. Previously, if you accidentally deleted a snapshot, you would have to roll back to a snapshot from an earlier point in time, increasing your recovery point objective. With Recycle Bin, you can specify a retention time period and recover a deleted snapshot before the expiration of the retention period. A recovered snapshot retains its attributes such as tags, permissions, and encryption status, which it had prior to deletion, and can be used immediately for creating volumes. Snapshots that are not recovered from the Recycle Bin are permanently deleted upon expiration of the retention time.

This introduces a new AWS service in the AWS CLI, aws rbin, to create and manage retention rules. At a quick glance, it doesn’t look like there is the ability for an attacker to empty the recycle bin, but there might be an attack vector by way of modification of existing rules. Since rbin is its own service, you can probably set up some CloudTrail detection to see how it is being used in your environment.
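As a sketch of what that detection might look like, here is a minimal CloudTrail event filter. The event-name list is an assumption based on the rbin API shape (CreateRule/UpdateRule/DeleteRule), so verify it against real CloudTrail records before alerting on it:

```python
# Rule-weakening actions worth alerting on; treat this list as an
# assumption to validate against your own CloudTrail data.
SUSPECT_RBIN_EVENTS = {"UpdateRule", "DeleteRule"}

def is_suspect_rbin_event(event: dict) -> bool:
    """Return True for CloudTrail events that modify or delete Recycle Bin
    retention rules, the likely first move for an attacker who wants
    deleted snapshots to stay deleted."""
    return (event.get("eventSource") == "rbin.amazonaws.com"
            and event.get("eventName") in SUSPECT_RBIN_EVENTS)
```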

AWS Backup adds support for VMware workloads

Announced On: (Nov 30, 2021)

AWS Backup now allows you to centrally protect VMware workloads, on premises and in the cloud as VMware Cloud on AWS, helping you meet your business and regulatory compliance needs. You can now use a single policy in AWS Backup to centrally protect your hybrid VMware environments alongside the 12 AWS services (spanning compute, storage, and databases) already supported by AWS Backup. AWS Backup enables you to demonstrate compliance status of your organizational data protection policies by monitoring backup, copy, and restore operations, and allowing you to generate unified auditor-ready reports to help satisfy your data governance and regulatory requirements.

I’m sure CommVault’s shareholders are not pleased.

AWS Compute Optimizer now offers enhanced infrastructure metrics, a new feature for EC2 recommendations

Announced On: (Nov 29, 2021)

AWS Compute Optimizer now offers enhanced infrastructure metrics, a paid feature that when activated, enhances your Amazon EC2 instance and Auto Scaling group recommendations by capturing monthly or quarterly utilization patterns. Compute Optimizer does this by ingesting and analyzing up to six times more Amazon CloudWatch utilization metrics history than the default Compute Optimizer option (up to 3 months of history vs. 14 days). You can activate the feature at the organization, account, or resource level via the Compute Optimizer console or API for all existing and newly created EC2 instances and Auto Scaling groups.

I have no idea if the savings of downsizing that t2.medium to t2.micro exceed the costs of these enhanced metrics or not.

AWS Compute Optimizer now offers resource efficiency metrics

Announced On: (Nov 29, 2021)

AWS Compute Optimizer now helps you quickly identify and prioritize top optimization opportunities through two new sets of dashboard-level metrics: savings opportunity and performance improvement opportunity.

Hopefully it can also tell me when I should have a meeting with 6 engineers and two VPs to discuss turning off that instance which dates back to the Obama administration.

Announcing preview of AWS Backup for Amazon S3

Announced On: (Nov 30, 2021)

Today, we are announcing the public preview of AWS Backup for Amazon S3. You can now create a single policy in AWS Backup to automate the protection of application data stored in S3 alone or alongside 11 other AWS services for storage, compute, and database. Using AWS Backup’s seamless integration with AWS Organizations, you can create independent, immutable, and encrypted backups and centrally manage backups and restore of S3 buckets and objects across your AWS accounts.

This is a preview, and only available in us-west-2 right now. It uses S3 Versioning to track copies. Running AWS CLI commands to restore previous versions was a pain in the butt, so this looks to simplify that.
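For context on the pain being simplified: with versioning, “undeleting” an object means finding the newest real version hiding behind the delete marker. A rough local sketch of that selection logic (the field names loosely mirror ListObjectVersions output, and LastModified just needs to be sortable):

```python
from typing import Optional

def version_to_restore(versions: list) -> Optional[str]:
    """Pick the VersionId to restore for a deleted object: the newest
    non-delete-marker version sitting under a delete marker. Each entry:
    {"VersionId": str, "LastModified": sortable, "IsDeleteMarker": bool}."""
    ordered = sorted(versions, key=lambda v: v["LastModified"], reverse=True)
    if not ordered or not ordered[0].get("IsDeleteMarker"):
        return None  # object is live (or has no versions); nothing to undelete
    for v in ordered[1:]:
        if not v.get("IsDeleteMarker"):
            return v["VersionId"]
    return None
```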

Amazon SageMaker now supports cross-account lineage tracking and multi-hop lineage querying

Announced On: (Dec 1, 2021)

Amazon SageMaker now offers enhancements to the machine learning (ML) lineage tracking capability that enables customers to track and query the lineage of artifacts such as data, features, and models across an ML workflow. Now, customers can retrieve the end-to-end lineage graph spanning the entire workflow from data preparation to model deployment through a single query. This feature eliminates undifferentiated heavy lifting needed to retrieve lineage information one workflow step at a time and manually stitch them all together. Customers can also retrieve lineage information for segments of the workflow by defining a step as the focal point and querying the lineage of the steps that are upstream or downstream of that focal point. For instance, customers can define a model as the focal entity and retrieve the location of the raw data set from which features were extracted to train that model.

Cross-account jumped out at me, and this was going to go into the “Watch out for”, but re-reading this, it looks like it’s for data governance, and is a good thing, not a scary thing. Honestly, I don’t understand SageMaker, so who knows.

Control Tower

I generally recommend avoiding Control Tower if you’re a large shop. That said, I also keep a pet Control Tower, so I can reverse engineer what it can do. Three announcements about Control Tower are worth noting:

You probably don’t need to be in all regions, and if you’re not careful, some cryptominer is gonna spin up miners in Osaka or São Paulo. Turn those regions off!

New Security tools

Amazon CodeGuru Reviewer now detects hardcoded secrets in Java and Python repositories

Announced On: (Nov 29, 2021)

Amazon CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations to detect security vulnerabilities, improve code quality and identify an application’s most expensive lines of code.

This still charges by lines of code, and a quick look at the pricing makes me miss the simplicity of Macie. There are probably cheaper ways to do this.

AWS announces the new Amazon Inspector for continual vulnerability management

Announced On: (Nov 29, 2021)

The new Amazon Inspector, a vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure, is now generally available, globally. Amazon Inspector has been completely rearchitected to automate vulnerability management and deliver near real-time findings to minimize the time to discover new vulnerabilities.

Inspector now supports ECR, so you can scan your containers too. Findings can be pushed to EventBridge (I don’t recall if that was part of Inspector V1). This new Inspector introduces an entirely new API/Boto3 service. And while the announcement and blog posts don’t mention it, according to the Boto3 docs, Inspector2 supports Delegated Admin! Now you can manage Inspector via a central security account!
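A quick sketch of what that looks like via Boto3 (the `inspector2` client name comes from the docs; the account ID is made up, and you’d call this from the Organizations management account):

```python
def delegated_admin_params(security_account_id: str) -> dict:
    """Parameters for inspector2's enable_delegated_admin_account call,
    naming the central security account as delegated administrator."""
    return {"delegatedAdminAccountId": security_account_id}

# Sketch, not run here:
# import boto3
# boto3.client("inspector2").enable_delegated_admin_account(
#     **delegated_admin_params("111122223333"))
```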

Amazon S3 console now reports security warnings, errors, and suggestions from IAM Access Analyzer as you author your S3 policies

Announced On: (Nov 30, 2021)

The Amazon Simple Storage Service (S3) console now reports security warnings, errors, and suggestions from Identity and Access Management (IAM) Access Analyzer as you author your S3 policies. The console automatically runs more than 100 policy checks to validate your policies. These checks save you time, guide you to resolve errors, and help you apply security best practices. By resolving errors and security warnings reported by the S3 console, you can validate that your policies are functional before you attach them to your S3 buckets or access points.

Useful if you’re authoring things in the console and not prone to blindly clicking “OK” at every dialog box. Not as helpful when your IaC has the errors.

Amazon Athena now supports new Lake Formation fine-grained security and reliable table features

Announced On: (Nov 30, 2021)

Amazon Athena users can now use AWS Lake Formation to configure fine-grained access permissions and read from ACID-compliant tables. Amazon Athena makes it simple for users to analyze data in Amazon S3-based data lakes. Ensuring that users only have access to data to which they’re authorized, and that their queries are reliable in the face of changes to the underlying data, can be a complex task.

If Data Governance excites you, I’m both sorry and happy to report this new feature.

AWS Lake Formation supports Governed Tables, storage optimization, and row-level security

Announced On: (Nov 30, 2021)

AWS Lake Formation is excited to announce the general availability of three new capabilities that simplify building, securing, and managing data lakes. First, Lake Formation Governed Tables, a new type of table on Amazon S3, simplifies building resilient data pipelines with multi-table transaction support. As data is added or changed, Lake Formation automatically manages conflicts and errors to ensure that all users see a consistent view of the data. This eliminates the need for customers to create custom error handling code or batch their updates. Second, Governed Tables monitor and automatically optimize how data is stored so query times are consistent and fast. Third, in addition to tables and columns, Lake Formation now supports row- and cell-level permissions, making it easier to restrict access to sensitive information by granting users access to only the portions of the data they are allowed to see. Governed Tables and row- and cell-level permissions are now supported through Amazon Athena, Amazon Redshift Spectrum, AWS Glue, and Amazon QuickSight.

This one merited time in Adam’s keynote. I’m too ADHD to be interested in lake formation, which I’m told, occurs on geological time frames.

Amazon S3 Object Ownership can now disable access control lists to simplify access management for data in S3

Announced On: (Nov 30, 2021)

Amazon S3 introduces a new S3 Object Ownership setting, Bucket owner enforced, that disables access control lists (ACLs), simplifying access management for data stored in S3. When you apply this bucket-level setting, every object in an S3 bucket is owned by the bucket owner, and ACLs are no longer used to grant permissions. As a result, access to your data is based on policies, including AWS Identity and Access Management (IAM) policies applied to IAM identities, session policies, Amazon S3 bucket and access point policies, and Virtual Private Cloud (VPC) endpoint policies. This setting applies to both new and existing objects in a bucket, and you can control access to this setting using IAM policies. With the new S3 Object Ownership setting, you can easily review, manage, and modify access to your shared data sets in Amazon S3 using only policies.

10 years after introducing IAM, AWS finally deprecated ACLs! I wonder if this will finally enable the ability to transfer S3 buckets across accounts. GodBePraised
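If you want to flip existing buckets over, it’s a one-call change. A hedged sketch via Boto3 (the bucket name is hypothetical; try it on a non-critical bucket first, since anything that still depends on ACL grants will break):

```python
def bucket_owner_enforced_params(bucket: str) -> dict:
    """Parameters for s3's put_bucket_ownership_controls call that apply
    the new Bucket owner enforced setting, disabling ACLs."""
    return {
        "Bucket": bucket,
        "OwnershipControls": {
            "Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]
        },
    }

# Sketch, not run here:
# import boto3
# boto3.client("s3").put_bucket_ownership_controls(
#     **bucket_owner_enforced_params("my-shared-data-bucket"))
```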

AWS Shield Advanced introduces automatic application-layer DDoS mitigation

Announced On: (Dec 1, 2021)

AWS Shield Advanced now automatically protects web applications by blocking application layer (Layer 7) DDoS events with no manual intervention needed by you or the AWS Shield Response Team (SRT). When you protect your resources with AWS Shield Advanced and enable automatic application layer DDoS mitigation, Shield Advanced will identify patterns associated with layer 7 DDoS events and isolate this anomalous traffic by automatically creating AWS WAF rules in your web access control lists (ACLs). These rules can be implemented in count mode to observe how they will impact resource traffic and then deployed in block mode. These capabilities enable you to quickly respond to and mitigate DDoS events that threaten the availability of your applications.

WAFs are hard, and Shield Advanced is expensive. That said, I’ll trade dollars for hours anytime. I look forward to seeing what my old team can do with this.

Amazon Virtual Private Cloud (VPC) announces Network Access Analyzer to help you easily identify unintended network access

Announced On: (Dec 1, 2021)

Amazon VPC Network Access Analyzer is a new feature that enables you to identify unintended network access to your resources on AWS. Using Network Access Analyzer, you can verify whether network access for your Virtual Private Cloud (VPC) resources meets your security and compliance guidelines. With Network Access Analyzer, you can assess and identify improvements to your cloud security posture. Additionally, Network Access Analyzer makes it easier for you to demonstrate that your network meets certain regulatory requirements.

This one needs a deeper look than I can provide while doing hot-takes and snark on 52 different AWS announcements.

AWS Lambda now logs Hyperplane Elastic Network Interface (ENI) ID in AWS CloudTrail data events

Announced On: (Dec 3, 2021)

AWS Lambda now logs the Hyperplane Elastic Network Interface (ENI) ID in AWS CloudTrail data events, for functions running in an Amazon Virtual Private Cloud (VPC). Customers can use the ENI ID in AWS CloudTrail data events to audit the security of their applications, and verify that only authorized functions are accessing their VPC resources through a shared Hyperplane ENI.

They snuck this one in on Friday of re:Invent. Nothing to say here other than more data is always useful.


Serverless

Everything I do, I do serverless, because I hate patching and talking to my vulnerability management team as much as the next person. Here is a quick rundown of some things that could be of interest.

AWS Lambda now supports event filtering for Amazon SQS, Amazon DynamoDB, and Amazon Kinesis as event sources

Announced On: (Nov 26, 2021)

AWS Lambda now provides content filtering options for SQS, DynamoDB and Kinesis as event sources. With event pattern content filtering, customers can write complex rules so that their Lambda function is only triggered by SQS, DynamoDB, or Kinesis under filtering criteria you specify. This helps reduce traffic to customers’ Lambda functions, simplifies code, and reduces overall cost.
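Filter patterns use the EventBridge pattern syntax, passed as JSON strings inside FilterCriteria on the event source mapping. A minimal sketch (the “order” message shape and function name are hypothetical):

```python
import json

def order_filter_criteria() -> dict:
    """FilterCriteria that fires a Lambda function only for SQS messages
    whose JSON body has type == "order"."""
    pattern = {"body": {"type": ["order"]}}
    return {"Filters": [{"Pattern": json.dumps(pattern)}]}

# Sketch, not run here:
# import boto3
# boto3.client("lambda").create_event_source_mapping(
#     EventSourceArn=queue_arn, FunctionName="process-orders",
#     FilterCriteria=order_filter_criteria())
```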

Introducing Amazon CloudWatch Metrics Insights (Preview)

Announced On: (Nov 29, 2021)

Metrics Insights is a new feature from Amazon CloudWatch that is in preview. As a fast, flexible, SQL-based query engine, Metrics Insights enables developers, operators, systems engineers, and cloud solutions architects to identify trends and patterns across millions of operational metrics in real time and helps you use these insights to reduce time to resolution. With Metrics Insights, you can gain better visibility on your infrastructure and large-scale application performance with flexible querying and on-the-fly metric aggregations. Use Metrics Insights and other CloudWatch features to monitor your AWS and hybrid environments, and to respond to operational problems promptly.

Amazon S3 Event Notifications with Amazon EventBridge help you build advanced serverless applications faster

Announced On: (Nov 29, 2021)

You can now use Amazon S3 Event Notifications with Amazon EventBridge to build, scale, and deploy event-driven applications based on changes to the data you store in S3. This makes it easier to act on new data in S3, build multiple applications that react to object changes simultaneously, and replay past events, all without creating additional copies of objects or developing new software. With increased flexibility to process events and send them to multiple targets, you can now create new serverless applications with advanced analytics and machine learning at scale more confidently without writing single-use custom code.

This changes up the S3 Event Notification -> SNS -> SQS pipeline and allows you to leverage EventBridge directly. I’ll look to see if this is ridiculously priced like CloudTrail S3 Events before I use it. I’d recommend you do the same.
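Under the new model, you enable EventBridge delivery on the bucket and write an event pattern instead of wiring up per-target notification configs. A sketch of such a pattern (the bucket name is hypothetical; `aws.s3` and “Object Created” are the documented source and detail-type for this integration):

```python
import json

# Matches object-creation events from one bucket, once that bucket has
# EventBridge notifications enabled.
S3_OBJECT_CREATED_PATTERN = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": ["my-data-bucket"]}},
}

# Sketch, not run here:
# import boto3
# boto3.client("events").put_rule(
#     Name="s3-object-created",
#     EventPattern=json.dumps(S3_OBJECT_CREATED_PATTERN))
```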

Amazon S3 adds new S3 Event Notifications for S3 Lifecycle, S3 Intelligent-Tiering, object tags, and object access control lists

Announced On: (Nov 29, 2021)

You can now build event-driven applications using Amazon S3 Event Notifications that trigger when objects are transitioned or expired (deleted) with S3 Lifecycle, or moved within the S3 Intelligent-Tiering storage class to its Archive Access or Deep Archive Access tiers. You can also trigger S3 Event Notifications for any changes to object tags or access control lists (ACLs). You can generate these new notifications for your entire bucket, or for a subset of your objects using prefixes or suffixes, and choose to deliver them to Amazon EventBridge, Amazon SNS, Amazon SQS, or an AWS Lambda function.

New Serverless offerings

Adam made mention of four new Serverless/On-Demand offerings in his Tuesday keynote. I don’t use any, so I couldn’t tell you much about them.


Networking

Not content with putting DBAs in the unemployment line, AWS now has its sights on Network Admins. I’d be careful, AWS; anyone who can incant the right things to make BGP work is scarier than Death Eaters.

Introducing AWS Cloud WAN Preview

Announced On: (Dec 2, 2021)

Today AWS announced the preview release of AWS Cloud WAN, a new wide area networking (WAN) service that helps you build, manage, and monitor a unified global network that manages traffic running between resources in your cloud and on-premises environments.

AWS Direct Connect SiteLink is now generally available

Announced On: (Dec 1, 2021)

Today AWS announced the general release of AWS Direct Connect SiteLink. SiteLink makes it easy to create private network connections between your on-premises locations, such as offices and data centers, by connecting them to Direct Connect locations throughout the world.

Basically, you can route traffic between on-prem facilities across the AWS backbone via DX connections. I believe Google allows you to do the same, so looks like AWS is catching up.

Amazon Virtual Private Cloud (VPC) announces IP Address Manager (IPAM) to help simplify IP address management on AWS

Announced On: (Dec 1, 2021)

Amazon VPC IP Address Manager (IPAM) is a new feature that makes it easier for you to plan, track, and monitor IP addresses for your AWS workloads. With IPAM’s automated workflows, network administrators can more efficiently manage IP addresses.

I’ve not looked at this, so it may be nothing more than an AWS WorkDocs spreadsheet. That said, if it can automate the creation of non-overlapping VPCs, it’s a good thing.

AWS Transit Gateway introduces intra-region peering for simplified cloud operations and network connectivity

Announced On: (Dec 1, 2021)

Starting today, AWS Transit Gateway supports intra-region peering, giving you the ability to establish peering connections between multiple Transit Gateways in the same AWS Region. With this change, different units in your organization can deploy their own Transit Gateways, and easily interconnect them resulting in less administrative overhead and greater autonomy of operation.

When Network Team A doesn’t want to share with Network Team B, this feature is for you. Also good for corporate mega-mergers, so I’ll be keeping an eye on this.

Announcing preview of AWS Private 5G

Announced On: (Nov 30, 2021)

Today, we are announcing the preview of AWS Private 5G, a new managed service that helps enterprises set up and scale private 5G mobile networks in their facilities in days instead of months. With just a few clicks in the AWS console, customers specify where they want to build a mobile network and the network capacity needed for their devices. AWS then delivers and maintains the small cell radio units, servers, 5G core and radio access network (RAN) software, and subscriber identity modules (SIM cards) required to set up a private 5G network and connect devices. AWS Private 5G automates the setup and deployment of the network and scales capacity on demand to support additional devices and increased network traffic. There are no upfront fees or per-device costs with AWS Private 5G, and customers pay only for the network capacity and throughput they request.

Like AWS Ground Station, there aren’t many use cases for this, and you probably want to SCP this one off from your developers who are pissed at AT&T’s signal at their house and looking for another option.

Cost Reduction Features & Announcements

As my job focus is now on cost containment as well as security, I’d be remiss not hitting on some of the cost savings announcements.

Amazon EBS Snapshots introduces a new tier, Amazon EBS Snapshots Archive, to reduce the cost of long-term retention of EBS Snapshots by up to 75%

Announced On: (Nov 29, 2021)

Starting today, you can use Amazon EBS Snapshots Archive, a new tier for EBS Snapshots, to save up to 75% on storage costs for EBS Snapshots that you intend to retain for more than 90 days and rarely access. EBS Snapshots are incremental, storing only the changes since the last snapshot and making them cost effective for daily and weekly backups that need to be accessed frequently. You might also have snapshots that you access every few months or years and do not need fast access to data, such as snapshots created at the end of a project or snapshots that need to be retained long-term for regulatory reasons. For such use cases, you can now use EBS Snapshots Archive to store full, point-in-time snapshots at a storage cost of $0.0125/GB-month. Snapshots in the archive tier have a minimum retention period of 90 days. Retrievals from the archive tier will incur a charge of $0.03/GB of data transferred.

What is not stated above is “with typical restore times of 24-72 hours.” Don’t use this as part of your DR strategy!!! That said, I’ll use this in my flamethrowers when I archive an orphaned instance I’ve killed.
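Using only the prices from the announcement, here is a back-of-envelope helper for whether archiving pencils out. The 90-day minimum is modeled as three billed months; standard-tier snapshot pricing isn’t included, so compare the result against your own bill:

```python
ARCHIVE_STORAGE_PER_GB_MONTH = 0.0125  # from the announcement
ARCHIVE_RETRIEVAL_PER_GB = 0.03        # one-time charge per GB restored

def archive_cost(size_gb: float, months: float, restores: int = 0) -> float:
    """Archive-tier cost for a full snapshot of size_gb, held for
    `months` (billed at a 90-day / 3-month minimum) and restored
    `restores` times."""
    billed_months = max(months, 3)
    return size_gb * (ARCHIVE_STORAGE_PER_GB_MONTH * billed_months
                      + ARCHIVE_RETRIEVAL_PER_GB * restores)
```

For example, parking a 100 GB snapshot for a year and restoring it once comes to about $18 at these rates.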

Amazon S3 Glacier storage class is now Amazon S3 Glacier Flexible Retrieval; storage price reduced by 10% and bulk retrievals are now free

Announced On: (Nov 30, 2021)

The Amazon S3 Glacier storage class is now named Amazon S3 Glacier Flexible Retrieval, and now includes free bulk retrievals in addition to a 10% price reduction, making it optimized for use cases such as backup and disaster recovery. S3 Glacier Flexible Retrieval is now even more cost-effective, and the free bulk retrievals make it ideal for when you need to retrieve large data sets once or twice per year and do not want to worry about the retrieval cost.

They renamed Glacier and cut the price, because they introduced a new Glacier…

Announcing the new Amazon S3 Glacier Instant Retrieval storage class - the lowest cost archive storage with milliseconds retrieval

Announced On: (Nov 30, 2021)

Amazon S3 Glacier Instant Retrieval is a new archive storage class that delivers the lowest cost storage for long-lived data that is rarely accessed and requires milliseconds retrieval. With S3 Glacier Instant Retrieval, you can save up to 68% on storage costs compared to using the S3 Standard-Infrequent Access storage class, when your data is accessed once per quarter. S3 Glacier Instant Retrieval delivers the fastest access to archive storage, with the same throughput and milliseconds access as the S3 Standard and S3 Standard-IA storage classes.

Back in 2016, I spent 6 months modeling S3-IA for CNN. Then came Glacier Expedited Retrieval, and that work was thrown out the window. With this, you’re paying 3x the per-GB retrieval charge you would with S3-IA.
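The trade is cheap storage for pricier retrieval, so the break-even is all about access frequency. A sketch using illustrative us-east-1-style prices (assumptions to verify against the pricing page): storage $0.0125 (S3-IA) vs. $0.004 (Glacier Instant Retrieval) per GB-month — the 68% savings from the announcement — and retrieval $0.01 vs. $0.03 per GB:

```python
# Break-even access rate between S3-IA and Glacier Instant Retrieval.
# All prices are assumed/illustrative, not quoted from the announcement
# except the storage delta implied by the stated 68% savings.
IA_STORAGE, GIR_STORAGE = 0.0125, 0.004     # $/GB-month
IA_RETRIEVAL, GIR_RETRIEVAL = 0.01, 0.03    # $/GB retrieved

def monthly_cost(storage: float, retrieval: float, reads_per_month: float) -> float:
    """Cost per GB-month given how often each GB is read back."""
    return storage + retrieval * reads_per_month

# Solve IA_STORAGE + IA_RETRIEVAL*f == GIR_STORAGE + GIR_RETRIEVAL*f for f:
break_even = (IA_STORAGE - GIR_STORAGE) / (GIR_RETRIEVAL - IA_RETRIEVAL)
print(round(break_even, 3))  # ~0.425 reads per GB per month
```

At roughly 0.425 reads per GB per month — about once a quarter — the two classes cost the same, which lines up with the "accessed once per quarter" framing in the announcement. Read more often than that and S3-IA wins.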

Amazon S3 announces a price reduction up to 31% in three storage classes

Announced On: (Nov 30, 2021)

We are excited to announce that Amazon S3 has reduced storage prices by up to 31% in three S3 storage classes. Specifically, we are reducing the storage price for S3 Standard-Infrequent Access and S3 One Zone-Infrequent Access by up to 31% in 9 AWS Regions: Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), US West (Northern California), and South America (Sao Paulo).

Good news if you do a lot in these regions. The third class is Glacier, whose reduction came with the rename above.

AWS price reduction for data transfers out to the internet

Announced On: (Nov 26, 2021)

Effective December 1, 2021, AWS is making two pricing changes for data transfer out to the internet. Each month, the first terabyte of data transfer out of Amazon CloudFront, the first 10 million HTTP/S requests, and the first 2 million CloudFront Functions invocations will be free. Free data transfer out of CloudFront is no longer limited to the first 12 months. In addition, the first 100 gigabytes per month of data transfer out from all AWS Regions (except China and GovCloud) will be free. Free data transfer out from AWS Regions is also no longer limited to the first 12 months. These changes will replace the existing data transfer and CloudFront AWS Free Tier offerings, and AWS customers will see these changes automatically reflected in their AWS bills going forward. All AWS customers will benefit from these pricing changes, and millions of customers will see no data transfer charges as a result.

The 12-month free tier has always irked me, because people new to AWS will spend most of it figuring out EC2, VPC, and IAM. 1 TB is not a lot of data at enterprise scale.
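To put "not a lot of data" in numbers, here is a rough bill under the new always-free 100 GB/month regional allowance, assuming a flat $0.09/GB egress rate (an assumption; real pricing is tiered and varies by region):

```python
# Back-of-envelope data-transfer-out bill with the new always-free 100 GB.
FREE_GB = 100
RATE_PER_GB = 0.09  # assumed flat first-tier rate; real pricing is tiered

def dto_bill(gb_out: float) -> float:
    """Monthly data-transfer-out charge after the free allowance."""
    return max(0.0, gb_out - FREE_GB) * RATE_PER_GB

print(dto_bill(100))                   # 0.0 -- a hobby project pays nothing
print(round(dto_bill(50 * 1024), 2))   # ~4599.0 -- 50 TB barely notices the free tier
```

Great for side projects and people learning the platform; a rounding error for anyone pushing tens of terabytes a month.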

Amazon DynamoDB announces the new Amazon DynamoDB Standard-Infrequent Access table class, which helps you reduce your DynamoDB costs by up to 60 percent

Announced On: (Dec 1, 2021)

Amazon DynamoDB announces the new Amazon DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class, which helps you reduce your DynamoDB costs by up to 60 percent for tables that store infrequently accessed data. The DynamoDB Standard-IA table class is ideal for use cases that require long-term storage of data that is infrequently accessed, such as application logs, old social media posts, e-commerce order history, and past gaming achievements.

DynamoDB is a rounding error compared to my RDS charges, so I’m not sure who this impacts.
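For anyone it does impact, the math favors storage-heavy, rarely-read tables. A sketch with assumed prices: storage $0.25 vs. $0.10 per GB-month (the announced 60% cut), and throughput costing roughly 25% more in the IA class (an assumption; verify against the pricing page before switching table classes):

```python
# Who benefits from DynamoDB Standard-IA? Tables where storage dominates.
# All rates here are assumptions for illustration, not quoted prices.
def monthly_cost(storage_gb: float, throughput_usd: float, ia: bool = False) -> float:
    """Monthly table cost: storage plus read/write throughput spend."""
    storage_rate = 0.10 if ia else 0.25           # assumed $/GB-month
    throughput = throughput_usd * (1.25 if ia else 1.0)  # assumed IA surcharge
    return storage_gb * storage_rate + throughput

# 1 TB of old order history with only $20/month of throughput spend:
print(round(monthly_cost(1024, 20), 2))            # 276.0  Standard
print(round(monthly_cost(1024, 20, ia=True), 2))   # 127.4  Standard-IA
```

If your table is the opposite shape — small but hammered with reads and writes — the throughput surcharge eats the storage savings and Standard stays cheaper.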

Announcing the new S3 Intelligent-Tiering Archive Instant Access tier - Automatically save up to 68% on storage costs

Announced On: (Nov 30, 2021)

The Amazon S3 Intelligent-Tiering storage class now automatically includes a new Archive Instant Access tier with cost savings of up to 68% for rarely accessed data that needs millisecond retrieval and high throughput performance. S3 Intelligent-Tiering is the first cloud storage that automatically reduces your storage costs on a granular object level by automatically moving data to the most cost-effective access tier based on access frequency, without performance impact, retrieval fees, or operational overhead. S3 Intelligent-Tiering delivers milliseconds latency and high throughput performance for frequently, infrequently, and now rarely accessed data in the Frequent, Infrequent, and new Archive Instant Access tiers. Now, you can use S3 Intelligent-Tiering as the default storage class for virtually any workload, especially data lakes, data analytics, new applications, and user-generated content.

“If you would like to standardize on S3 Intelligent-Tiering as the default storage class for newly created data, you can modify your applications by specifying INTELLIGENT_TIERING on your S3 PUT API request header.” There is almost no reason not to make this the default.
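In boto3, that `INTELLIGENT_TIERING` request header corresponds to the `StorageClass` parameter of `put_object`. A small helper to make it the default (bucket and key names below are placeholders):

```python
# Build put_object kwargs that default new objects to S3 Intelligent-Tiering.
# "StorageClass" maps to the x-amz-storage-class header the quote refers to.
def s3_put_args(bucket: str, key: str, body: bytes) -> dict:
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "StorageClass": "INTELLIGENT_TIERING",
    }

# Usage (names are placeholders):
#   s3 = boto3.client("s3")
#   s3.put_object(**s3_put_args("my-bucket", "logs/2021-12-01.json", payload))
```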

Other Noteworthy

Introducing Amazon CloudWatch Evidently for feature experimentation and safer launches

Announced On: (Nov 29, 2021)

Amazon CloudWatch Evidently is a new capability which helps application developers safely validate new features across the full application stack. Developers can use Evidently to conduct experiments on new application features and identify unintended consequences, thereby reducing risk. When launching new features, developers can expose the features to a subset of users, monitor key metrics such as page load times and conversions, then safely dial up traffic for general use. Amazon CloudWatch Evidently is part of CloudWatch’s Digital Experience Monitoring capabilities along with Amazon CloudWatch Synthetics and Amazon CloudWatch RUM.

Introducing Amazon CloudWatch RUM for monitoring applications’ client-side performance

Announced On: (Nov 29, 2021)

Amazon CloudWatch RUM is a real-user monitoring capability that helps you identify and debug client-side issues in web applications and enhance the end user’s digital experience. CloudWatch RUM enables application developers and DevOps engineers to reduce mean time to resolve (MTTR) for client-side performance issues. Amazon CloudWatch RUM is part of CloudWatch’s Digital Experience Monitoring along with Amazon CloudWatch Synthetics and Amazon CloudWatch Evidently.

Introducing AWS re:Post, a new, community-driven, questions-and-answers service

Announced On: (Dec 2, 2021)

Amazon Web Services (AWS) announces the availability of AWS re:Post (re:Post), a new, community-driven, questions-and-answers service to help AWS customers remove technical roadblocks, accelerate innovation, and enhance operation. AWS re:Post enables you to ask questions about anything related to designing, building, deploying, and operating workloads on AWS, and get answers from community experts, including AWS customers, Partners, and employees.

This replaces the AWS Forums.

Announcing Amazon EC2 M1 Mac instances for macOS

Announced On: (Dec 2, 2021)

Starting today, Amazon Elastic Compute Cloud (EC2) M1 Mac instances for macOS are available in preview. Built on Apple silicon Mac mini computers and powered by the AWS Nitro System, EC2 M1 Mac instances deliver up to 60% better price performance over x86-based EC2 Mac instances for iOS and macOS application build workloads. EC2 M1 Mac instances also enable native ARM64 macOS environments for the first time in AWS to develop, build, test, deploy, and run Apple applications. Developers rearchitecting their macOS applications to natively support Apple silicon Macs can now provision ARM64 macOS environments within minutes, dynamically scale capacity as needed, and benefit from AWS’s pay-as-you-go pricing to enjoy faster builds and convenient distributed testing.

I think there is a reason Intel didn’t sponsor re:Play this year. ARM is ascendant, and it will be interesting to see how long before Windows becomes available for ARM.

New Sustainability Pillar for the AWS Well-Architected Framework

Announced On: (Dec 2, 2021)

The AWS Well-Architected Framework has been helping AWS customers improve their cloud workloads since 2015. The framework consists of design principles, questions, and best practices across multiple pillars: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization. Today we are introducing a new AWS Well-Architected Sustainability Pillar to help organizations learn, measure, and improve workloads using environmental best practices for cloud computing.

Not only can you wonder why you’re paying $0.04 in ap-southeast-1, you can wonder why you’re generating 18 oz of CO2 there as well. If I can add CO2 reduction to security risk and cost as the benefits of optimizing, I’ll take it.

Introducing Amazon SageMaker Canvas - a visual, no-code interface to build accurate machine learning models

Announced On: (Nov 30, 2021)

Amazon SageMaker Canvas is a new capability of Amazon SageMaker that enables business analysts to create accurate machine learning (ML) models and generate predictions using a visual, point-and-click interface, no coding required.

Making it even easier for SkyNet to replace us someday.

Introducing AWS Amplify Studio

Announced On: (Dec 2, 2021)

AWS Amplify announces AWS Amplify Studio, a visual development environment that offers frontend developers new features (public preview) to accelerate UI development with minimal coding, while integrating Amplify’s powerful backend configuration and management capabilities. Amplify Studio automatically translates designs made in Figma to human-readable React UI component code. Within Amplify Studio, developers can visually connect the UI components to app backend data. For configuring and managing backends, Amplify Admin UI’s existing capabilities will be part of Amplify Studio going forward, providing a unified interface to enable developers to build full-stack apps faster.

This will be a game changer if it can prevent me from needing to learn how to use React.

Introducing AWS Mainframe Modernization - Preview

Announced On: (Nov 30, 2021)

AWS Mainframe Modernization is a unique platform for mainframe migration and modernization. It allows customers to migrate and modernize their on-premises mainframe workloads to a managed and highly available runtime environment on AWS. This service currently supports two main migration patterns – replatforming and automated refactoring – allowing customers to select their best-fit migration path and associated tool chains based on their migration assessment results.

Ok Boomer.


You’re still with me? Good grief that was a lot of announcements. Most of these are things I want to follow up on over the holidays with their associated Breakout Session videos, or to kick the tires on in my own org. I still need to do a blog post on my Chalk Talk (which wasn’t recorded), and some of the other things I got from the event.