For VPC gateway endpoints: when we create one, a route is added to the subnet's route table pointing to a prefix list containing the service's public IP addresses in region X. Since these are public IPs, does the traffic go over the internet, or does it stay within the AWS network?

 

Answer:

A route is added automatically to the associated route tables, with the AWS-managed prefix list as the destination and the endpoint (vpce-xyz) as the target. Whether the destination addresses are public or private does not change how routes are evaluated (1): the prefix-list route matches, and the traffic is sent privately to the VPC endpoint. In short, it never leaves the VPC.

Regarding traffic leaving the AWS network: even without PrivateLink, traffic to AWS services in the same Region does not leave the AWS-owned network; it stays on the AWS border/backbone. The service's public IP ranges are advertised and routable on the internet, but traffic originating inside AWS still remains on the AWS network. Refer here (2).
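To make this concrete, here is a minimal boto3 sketch (the endpoint ID is a hypothetical placeholder) that inspects the routes a gateway endpoint adds to its associated route tables: the destination is the managed prefix list and the target is the endpoint itself, not an internet gateway.

```python
# Minimal sketch, assuming credentials/region are configured and the endpoint
# ID below is replaced with a real gateway endpoint ID (hypothetical here).
import boto3

ec2 = boto3.client("ec2")

endpoint_id = "vpce-0123456789abcdef0"  # hypothetical gateway endpoint ID

# The endpoint record lists the route tables it is associated with.
endpoint = ec2.describe_vpc_endpoints(VpcEndpointIds=[endpoint_id])["VpcEndpoints"][0]

for rtb_id in endpoint["RouteTableIds"]:
    rtb = ec2.describe_route_tables(RouteTableIds=[rtb_id])["RouteTables"][0]
    for route in rtb["Routes"]:
        # Gateway-endpoint routes use a prefix list as the destination and the
        # endpoint itself as the target; no internet gateway is involved.
        if route.get("GatewayId") == endpoint_id:
            print(rtb_id, route.get("DestinationPrefixListId"), "->", route["GatewayId"])
```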

(1) https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html
(2) https://aws.amazon.com/vpc/faqs/#:~:text=AWS%20service%20endpoint%3F-,No.,or%20from%20AWS%20China%20Regions.


Is there any official hardening script that AWS recommends for Amazon Linux 2? If not, are there any freely available CIS Benchmark scripts that customers can run manually on their instances?

 

Answer:

Here are your options:

  1. AWS does not publish an official hardening script for Amazon Linux 2, but the CIS Benchmark for Amazon Linux 2 is freely available as a document from the Center for Internet Security, and customers can apply its recommendations manually on their instances.

  2. CIS also offers pre-hardened Amazon Linux 2 AMIs on the AWS Marketplace if the customer prefers to start from an already-hardened image.

Finally, in all cases, use Amazon Inspector to verify compliance with hardening best practices over time.
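As a hedged illustration (the account lookup and service names are the standard boto3 ones; nothing here is specific to any environment), this sketch enables Amazon Inspector (v2) EC2 scanning for the current account so hardened instances are assessed over time:

```python
# Minimal sketch: turn on Amazon Inspector (v2) EC2 vulnerability scanning
# for the calling account. Findings then surface in the Inspector console.
import boto3

inspector = boto3.client("inspector2")
sts = boto3.client("sts")

account_id = sts.get_caller_identity()["Account"]

response = inspector.enable(
    accountIds=[account_id],
    resourceTypes=["EC2"],
)
print(response["accounts"])
```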


We’ve relied on pre-signed URLs for S3 uploads, but now we face a pressing challenge: restricting network traffic to specific IP addresses. We must ensure uploads traverse our controlled infrastructure. Implementing IP allow listing is crucial for enhancing security while preserving upload functionality. Any solutions?

 

Answer: There are no static IPs for S3, but if they need to restrict access to a bucket to a specific IP range, they can use a bucket policy with an aws:SourceIp condition. Note that pre-signed URL requests are still evaluated against the bucket policy at request time, so the IP condition applies to whoever actually presents the URL; a pre-signed URL does not bypass the restriction. Beyond that, they would have to implement access control outside of S3, for example by proxying uploads through their own infrastructure.
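As a hedged sketch (the bucket name and CIDR range are hypothetical placeholders), a Deny statement keyed on aws:SourceIp applied via boto3 might look like this; because pre-signed URL requests are evaluated against the bucket policy at request time, the same condition applies to them:

```python
# Minimal sketch: deny PutObject from outside an allowed CIDR range.
# Bucket name and CIDR below are hypothetical placeholders.
import json
import boto3

s3 = boto3.client("s3")

bucket = "example-upload-bucket"   # hypothetical
allowed_cidr = "203.0.113.0/24"    # hypothetical controlled IP range

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideAllowedIpRange",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # Deny any upload whose source IP is not in the allowed range,
            # including uploads made with a pre-signed URL.
            "Condition": {"NotIpAddress": {"aws:SourceIp": allowed_cidr}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```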


A DevOps engineer asked: I have a question regarding the configuration of memoryReservation and memory in ECS task definitions. What is the recommended difference between these two parameters? Are there any best practices to follow, especially for applications with variable memory requirements?

 

Answer:

When configuring memory parameters in Amazon ECS task definitions, it's important to understand the difference between memoryReservation and memory, and how to set them effectively.

The memoryReservation parameter specifies the soft limit for memory, which is the minimum amount of memory that your container is guaranteed to have available. The memory parameter, on the other hand, sets a hard limit, which is the maximum amount of memory your container can use.

For applications with variable memory requirements, it's generally recommended to set both parameters:

  1. Set memoryReservation to a value that represents the typical or baseline memory usage of your application. This ensures that your container always has access to this amount of memory.

  2. Set memory to a higher value that represents the maximum amount of memory your application might need during peak usage.

The difference between these two values allows for "bursting" - your container can use more memory than the reservation up to the hard limit when needed, but it's not guaranteed to always have that much available.
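For illustration (the family, image, and values are hypothetical), registering a task definition with both limits via boto3 might look like this: the container is guaranteed its reservation and can burst up to the hard limit.

```python
# Minimal sketch: a container definition with a soft limit (memoryReservation)
# below its hard limit (memory), leaving headroom for bursts. All names and
# values are illustrative.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="example-web",  # hypothetical family name
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",
            "memoryReservation": 512,  # soft limit: guaranteed baseline (MiB)
            "memory": 1024,            # hard limit: burst ceiling (MiB)
            "essential": True,
        }
    ],
)
```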

Best practices for setting these parameters include:

  1. Use load testing to determine your application's memory requirements under various conditions.

  2. Set memoryReservation slightly higher than your application's baseline needs to ensure consistent performance.

  3. Set memory high enough to accommodate peak usage, but not so high that you're wasting resources.

  4. For Java applications or others with complex memory management, you may need to experiment to find the right balance.

  5. Consider the total memory available on your EC2 instances (if using EC2 launch type) to ensure efficient task placement.

  6. Regularly monitor your containers' memory usage and adjust these parameters as needed (see the CloudWatch sketch after this list).
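For item 6, here is a minimal sketch (the cluster and service names are hypothetical) of pulling a service's memory utilization from CloudWatch to guide tuning of the two parameters:

```python
# Minimal sketch: fetch a week of average/maximum MemoryUtilization (percent
# of the configured limit) for an ECS service. Names are hypothetical.
from datetime import datetime, timedelta

import boto3

cw = boto3.client("cloudwatch")

stats = cw.get_metric_statistics(
    Namespace="AWS/ECS",
    MetricName="MemoryUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "example-cluster"},
        {"Name": "ServiceName", "Value": "example-web"},
    ],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=3600,  # hourly datapoints
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), round(point["Maximum"], 1))
```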

Remember, setting appropriate memory values is crucial for both performance and cost optimization. Too low, and your application may crash or perform poorly; too high, and you may be over-provisioning and wasting resources.

It's also worth noting that for tasks hosted on AWS Fargate, you must use one of the specified values for memory that corresponds to the CPU value you've chosen, as Fargate has predefined task CPU and memory combinations.

By carefully configuring these memory parameters based on your application's needs and behavior, you can ensure optimal performance and resource utilization in your ECS environment.


A customer wants to use AWS Managed Microsoft AD, and they have 4,000 Windows client machines that will connect over SD-WAN. The customer wants to know whether any separate CAL licenses need to be purchased.

 

Answer: For AWS Managed Microsoft AD, there is no need to purchase separate Client Access Licenses (CALs) for Windows client machines connecting to the service. AWS manages the Windows Server licensing for the managed domain controllers, and client access is included in the price of the directory.

This means that for your customer's 4,000 Windows client machines connecting over SD-WAN, they do not need to buy additional Windows Server CALs for the managed AD itself; that licensing is handled by AWS as part of the Managed Microsoft AD service. However, CALs are still required for any non-AWS Windows servers, such as Windows file or web servers that do not run on AWS. For example, if all servers except an on-premises Windows file server ran on AWS, the clients accessing that on-premises file server would technically still need to be covered by CALs. Note also that some subscriptions cover the CAL requirement: Microsoft 365 E5, for example, includes it, so a customer with that licensing may already be covered.


My customer has a vSAN on-prem for the block storage of their VMs and they have synchronous replication between 2 datacenters separated by a meaningful distance. When a VM in one datacenter fails, they can restart a similar VM in the second datacenter with the block storage synchronously replicated so their RTO is very small.

They want an equivalent of this for EC2 instances across two AZs. Can they synchronously replicate the EBS volumes across AZs?

 

Answer:

Amazon EBS volumes cannot be synchronously replicated across Availability Zones (AZs) natively; they are AZ-specific resources and cannot be directly shared between AZs. However, there are alternative approaches your customer can consider to achieve similar functionality and meet their requirements for block storage replication across AZs:

  1. Multi-AZ deployments for Amazon RDS: If the customer is using databases, Amazon RDS offers Multi-AZ deployments that provide synchronous replication across AZs. This is a managed solution that handles failover automatically.

  2. Amazon FSx for Windows File Server: For Windows-based workloads, Amazon FSx for Windows File Server offers Multi-AZ file systems. These provide synchronous replication between two AZs, leveraging Windows Server Failover Clustering technology (see the sketch after this list).

  3. Amazon FSx for OpenZFS: This service now offers Multi-AZ deployment options, providing increased availability and enhanced durability by spanning multiple AZs within an AWS Region.
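As a hedged sketch of option 2 (the subnet IDs, directory ID, and sizes are hypothetical placeholders), creating a Multi-AZ FSx for Windows file system with boto3 might look like this; the two subnets must be in different AZs, and the service handles the synchronous replication and failover:

```python
# Minimal sketch: a Multi-AZ Amazon FSx for Windows File Server file system.
# All IDs and sizes below are hypothetical placeholders.
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,  # GiB
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],  # two subnets, two AZs
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # hypothetical AWS Managed AD
        "DeploymentType": "MULTI_AZ_1",       # synchronous replication across AZs
        "PreferredSubnetId": "subnet-0aaa1111",
        "ThroughputCapacity": 32,             # MB/s
    },
)
```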
