AWS, Azure and Google Cloud: Exploring the battlefield and strategy for 2020

The hyperscale cloud providers – Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform, with other pretenders occasionally cited – naturally generate the vast majority of revenues and, with it, the headlines.


According to figures from Synergy Research in December, one third of data centre spend in Q3 ended up in hyperscalers’ pockets. The company’s most recent market share analysis, again for Q3, found that for public infrastructure (IaaS) and platform as a service (PaaS), AWS held almost two fifths (39%) of the market, well ahead of Microsoft (19%) and Google (9%).


For those who say the race has long since been won, however, the course has gradually been changing as organisations explored hybrid and multi-cloud workflows, as well as tying infrastructure and platform together with software portfolios.

European outlook
In Europe, the battleground is shifting rapidly. Each provider has planted its flag in various locations beyond the established hubs of London, Frankfurt et al. Google Cloud launched in Poland and Switzerland in 2019, making seven European locations in total, while Microsoft unveiled plans to launch Azure in Germany and Switzerland, also taking its European locations to seven. AWS, meanwhile, has six, two of which – Italy and Spain – are due in early 2020 and 2023 respectively.

Nick McQuire, senior vice president enterprise at CCS Insight, says that the competitive environment has ‘obviously turned up a notch’ over the past 12 months. “Even if you rewind 12 months, you’re starting to see the significant gap that AWS had, particularly in the core infrastructure as a service, compute, storage, just slightly become minimised,” he tells CloudTech. “Obviously AWS is still very much a front runner, depending on how you define it – but this is always part of the challenge in the industry.”

Talk to any number of people and you will get any number of definitions as to who is doing what and where. This obfuscation is somewhat encouraged by the hyperscalers themselves. AWS discloses its specific revenues – $8.99 billion for Q319 – while Microsoft and Google do not.

Microsoft directs its financial reporting into three buckets: productivity and business processes ($11bn in Q120), intelligent cloud ($10.8bn), and more personal computing ($11.1bn). Azure growth percentages are wheeled out, but a specific figure is not; the overall figure lies somewhere in the first two categories. According to Jay Vleeschhouwer of Griffin Securities, per CNBC, Azure’s most recent quarter was estimated at $4.3bn. Google, meanwhile, puts its cloud operation as one part of its ‘other revenues’ tag, which was $6.42bn last quarter. Analysts have been asking the company whether it will cut free the specific revenues, only to get a committed non-committal in response.

Yet therein lies the rub. Where do these revenues come from, and how do they compare across the rest of the stack? As Paul Miller, senior analyst at Forrester, told this publication in February, the real value for Google, among others, is to assemble and reassemble various parts of its offerings to customers, from software, to infrastructure, and platform. “That should be the story, not whether their revenue in a specific category is growing 2x, 3x, or 10x.”

For McQuire’s part, this is the differentiation between Google and Microsoft compared with AWS. “The alternative approach is where you see companies, typically from the CEO down, that are all-in on transformation, and seeing the workplace environment and internal side of the house as part of that,” he says. “That’s typically where you will see companies go a little bit deeper with a Google or Microsoft; they will embed the entirety of their SaaS applications capabilities in and around decision making for their infrastructure as a service as well.

“That approach very much favours Microsoft, and we’ve seen more and more companies in the context of Microsoft’s big announcements last year.”

The preferred cloud and avoiding lock-in
With this in mind, McQuire sees the rise of the ‘preferred cloud’, as the marketing spiel would put it. AT&T and Salesforce were two relatively recent Microsoft customers whose migrations were illustrated by this word. It doesn’t mean all-in, but neither does it really mean multi-cloud. “Companies will start to entrench themselves around one strategic provider, as opposed to having multiple clouds and [being] not necessarily embedded business-wise into a strategic provider,” says McQuire.

This represents a fascinating move with regards to the industry’s progression. Part of the reason why many industries did little more than dip their toes into the cloud in the early days was the worry of vendor lock-in. Multi-cloud and hybrid changed that, so should organisations be fearful again now? McQuire notes Microsoft has been doing a lot to change its previous image, yet a caveat remains.

“There’s always going to be that pre-perceived notion among companies out there that they have to [be] careful with going all-in with Microsoft around this,” he admits. “You see companies navigate through those complexities… [but] I feel that there’s a growing set of customers, particularly globally, and if they’re going with Azure they’re going heavily and quite deep with Microsoft across the piece, as opposed to taking a workload by workload Azure model.”

According to a recent study from Goldman Sachs, more organisations polled were using Azure for cloud infrastructure versus AWS. It’s worth noting that the twice-annual survey polls only 100 IT executives, but they are at Global 2000 companies. Per CNBC again, 56 execs polled used Azure, compared with 48 for AWS.

This again shows the wider strength of the ecosystem, according to McQuire. “For the companies that are making more investments in the infrastructure as a service for Microsoft, they’re doing it with a complete picture in mind around the strength of these higher level services, particularly as you shift into SaaS applications and, more important, a lot of security and management capabilities,” he says. McQuire adds that Microsoft has had success with Azure in the UK, for instance from the number of firms who have moved to Office 365 over the past few years.

What next for Google?
Google Cloud, meanwhile, has had a particularly interesting 12 months. In terms of making noise, under the leadership of Thomas Kurian, the company has been especially vociferous. Its acquisitions – from Looker to Alooma, from Elastifile to CloudSimple – stood out, and even this year a raft of news has come through, from retail customers to storage and enterprise updates.


Expect more acquisitions to come out of Google Cloud in the coming year in what is going to be a long game. Despite the various moves made in terms of recruitment and acquisitions in beefing up Google’s marketing and sales presence, plenty more is to come. “Whilst clearly I think the focus is on improving Google Cloud and targeting very key areas – and they’re seeing areas of success, particularly among high level services around machine learning – there’s a longer game at play,” says McQuire. “The question is: how much time do they have in this arena?

“They’re going to have to focus more and more on some of those higher-level services, as opposed to the commodity infrastructure as a service market,” McQuire adds. “I think it’s going to be an ongoing battle for Google for awareness in the industry, in the market, and more importantly, I think there is still a large number of customers who are just not that well educated on what Google is doing in this space.”

AWS Fargate for Amazon Elastic Kubernetes Service

AWS’s serverless container capability makes it easier than ever for customers to deploy, manage, and scale Kubernetes workloads on AWS. Square, National Australia Bank, Babylon Health, and GitHub are among the customers and partners using Amazon EKS with AWS Fargate.


Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company, announced that customers can now run AWS Fargate for Amazon Elastic Kubernetes Service (EKS), making it easier for customers to run Kubernetes applications on AWS. AWS Fargate, which provides serverless computing for containers, has substantially changed the way developers manage and deploy their containers. Launched two years ago to work with Amazon ECS, AWS Fargate has been broadly requested by Kubernetes customers. Now, with AWS Fargate for Amazon EKS, customers can run Kubernetes-based applications on AWS without the need to manage servers and clusters.


Containers have become very popular because they allow customers to package an application and run it anywhere, improve resource utilization, and make it easier to scale quickly. Most cloud providers only offer one container offering built around Kubernetes. AWS built Amazon Elastic Container Service (ECS) before container orchestration gained wide interest and, because it is built on AWS Application Programming Interfaces (APIs), it integrates easily with other AWS services. Today, there are hundreds of thousands of active clusters managed by Amazon ECS.

Over time, as Kubernetes became popular, many customers started running Kubernetes on top of Amazon EC2. Over 80% of the Kubernetes workloads in the cloud are running on AWS, according to Nucleus Research. Customers like the broad community and openness of Kubernetes, but it’s challenging for them to manage Kubernetes on their own, which is why they have asked AWS to help them solve this problem. A year and a half ago, AWS launched Amazon EKS, a managed Kubernetes service to make it easier to manage, scale, and upgrade Kubernetes clusters. Amazon EKS has been very popular and has given Kubernetes customers an extremely flexible way to model and run their applications. While Amazon EKS handles the Kubernetes management infrastructure, customers still need to patch servers, choose which Amazon EC2 instances to run on, patch the instances, scale cluster capacity, and manage multi-tenancy. These customers have asked AWS to further simplify running Kubernetes on AWS.

AWS Fargate for Amazon EKS combines the power and simplicity of serverless computing with the openness of Kubernetes. With AWS Fargate there is no longer a need to worry about patching, scaling, or securing a cluster of Amazon EC2 instances to run Kubernetes containers in the cloud. When customers run Kubernetes applications on AWS Fargate, it automatically allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. Customers only pay for the resources required to run their containers, thereby right-sizing performance and cost. AWS Fargate for Amazon EKS also provides strong security isolation for every pod by default, removing the need to manage multi-tenancy. With AWS Fargate for Amazon EKS, customers can focus on building their applications rather than spending time patching, scaling, or securing a cluster of Amazon EC2 instances.
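As a rough sketch of how this looks in practice – assuming you use the eksctl CLI, with the cluster and profile names below as placeholders – a Fargate profile tells Amazon EKS which pods should run on Fargate:

# Hypothetical names; pods in the selected namespace are scheduled onto Fargate
eksctl create fargateprofile \
  --cluster my-cluster \
  --name fp-default \
  --namespace default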

“AWS Fargate has made it so much easier for Amazon ECS customers to manage containers at the task layer versus worrying about servers and clusters,” said Deepak Singh, Vice President of Containers at AWS. “Our Amazon EKS customers have been clamoring for us to find a way to make Fargate work with Kubernetes, and we’re excited to do so today. With AWS Fargate, Kubernetes customers can truly take advantage of the elasticity and cost savings of the cloud when running their Kubernetes containers, and don’t have to worry about patching servers, scaling clusters, or managing multi-tenancy.”

AWS Fargate for Amazon EKS is available today in US East (N. Virginia), US East (Ohio), Europe (Ireland), and Asia Pacific (Tokyo), with more regions coming soon.

Square helps millions of sellers run their business from secure credit card processing to point of sale solutions. “As we modernize our stack with EKS, we are always looking for opportunities to increase our security posture and lessen our administrative burden,” said Geoff Flarity, Engineering Manager for CashApp, Square. “We’re excited by the potential for Fargate for EKS to provide out of box isolation and ensure a secure compute environment for our applications with the highest level of security requirements. In addition, the ability to right size portions of our compute consumption, ensuring optimal utilization without having to spend cycles on capacity planning or operational overhead, is extremely compelling. This is without a doubt the most exciting Kubernetes announcement of the year.”

National Australia Bank (NAB) is one of the largest financial institutions in Australia and offers a wide array of personal banking financial solutions to its customers. “Amazon ECS has already reduced NAB’s microservice development time by a factor of 10. With AWS Fargate for Amazon EKS, we expect to improve this even further by enabling low touch Kubernetes cluster management at scale,” said Steve Day, EGM of Infrastructure Cloud and Workplace, NAB. “By removing the need for infrastructure management, we expect AWS Fargate for Amazon EKS to reduce our development costs on new projects by 75%. Over the next 12 months, migrating to AWS Fargate for Amazon EKS will enable 100 NAB service teams with a managed microservices based platform to break down 50 monolithic applications into modern architectures.”


GitHub brings together one of the world’s largest community of developers to discover, share, and build better software. “GitHub is committed to being the home for all developers, which includes providing them with great experiences across a wide range of tools and platforms,” said Erica Brescia, COO, GitHub. “AWS is an important platform for developers using GitHub Actions and we’re proud to collaborate with them on the launch of Amazon EKS for Fargate. Our solution makes it easier than ever for developers to focus on getting their code to the cloud with a minimum of operational overhead.”

Babylon Health is a health service provider that provides a range of services including remote consultations with doctors and health care professionals via text and in-app video messaging. “Amazon EKS is vital in our mission to offer accessible and affordable healthcare across the globe,” said Jean-Marie Ferdegue, Director of Global Platform Engineering, Babylon Health. “By using EKS and EC2 Spot instances, we have a lightning fast micro-service architecture where 300+ containerised applications are built and deployed in a highly decoupled manner. We now have unprecedented high availability across the globe while reducing the average time to bring a change to the stack from four weeks to a matter of hours. Our offering is focused on affordability and the cost reduction of 40% across our critical clusters is a key part of delivering this vision. The availability of Fargate for EKS will shift the focus from running and operating complex orchestration platforms to operating a secure and scalable health system. This maximizes our engineering effort, both in terms of time and money.”

HashiCorp is an open source software company that enables organizations to have consistent workflows and to provision, secure, connect, and run any infrastructure for any application. “Amazon EKS for Fargate enables developers and operations teams to offload the heavy lifting of infrastructure management to AWS,” said Armon Dadgar, co-founder and CTO, HashiCorp. “EKS for Fargate allows development teams to be more self-sufficient by abstracting the minute-to-minute management of their infrastructure and freeing up more time to focus on best practices and delivery. By supporting EKS for Fargate on launch day, HashiCorp Terraform provides users with a turnkey solution for provisioning Kubernetes workloads that makes use of best practices such as infrastructure as code.”

Datadog is a monitoring service for cloud-scale applications, providing monitoring of servers, databases, tools, and services through a SaaS-based data analytics platform. “Containers and orchestration are becoming a standard practice for organizations looking to operate efficiently at scale,” said Ilan Rabinovitch, VP of Product Management, Datadog. “We’ve seen wide adoption of AWS Fargate throughout our customers. We are excited to see support extend to cover Amazon EKS, so that our customers can further simplify management of Kubernetes at scale on AWS.”

About Amazon Web Services

For 13 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 165 fully featured services for compute, storage, databases, networking, analytics, robotics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 69 Availability Zones (AZs) within 22 geographic regions, with announced plans for 13 more Availability Zones and four more AWS Regions in Indonesia, Italy, South Africa, and Spain. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs.

Tips for Running Containers and Kubernetes on AWS

Docker has taken the world by storm over the past few years. It’s fair to say that it has revolutionized the way applications are built, shipped, and deployed. In order to manage your Docker containers, you do need an orchestration tool, as doing everything manually is impractical and prone to error.


But the downside of all the benefits such architectures bring is an inherent complexity. Indeed, there are now two layers to look after: the containers and the servers running those containers. Both layers need monitoring, maintenance, and scalability.


The most popular orchestration tool today is Kubernetes. Originally developed by Google and now maintained by the Cloud Native Computing Foundation, it offers a good balance between having all the recent features on the one hand and stability/maturity on the other. Also, Kubernetes is highly configurable and not opinionated by default, so it can be installed and configured to meet your specific needs. But this does come at a price: Kubernetes is well-known for having a rather steep learning curve and for being difficult to set up and maintain.

Thus, various cloud vendors now offer “turnkey solutions” with varying degrees of success in terms of hiding Kubernetes’ complexities. Here in this article, however, we will focus on the brave souls who have decided to set up and administer their own DIY Kubernetes clusters.

Tips for Setting Up a Kubernetes Cluster
Setting up a cluster from scratch is arguably the most difficult part of a DIY Kubernetes workload. There are tools that can help you, the best-known being kops. Please be aware that this is an opinionated tool, so you get more simplicity for less control. Kops will make choices for you, but these are reasonable choices suitable for most workloads. The kops project publishes an official tutorial for setting up a cluster.
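As an illustration, a minimal kops run looks roughly like the following – the cluster name, S3 state store, and zone are placeholders to replace with your own:

# Hypothetical names; kops keeps cluster state in the given S3 bucket
kops create cluster \
  --name=my-cluster.example.com \
  --state=s3://my-kops-state-store \
  --zones=us-east-1a \
  --node-count=2
kops update cluster my-cluster.example.com \
  --state=s3://my-kops-state-store --yes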

Alternatively, you can try to set up your cluster the hard way, following a guide that explains in detail how to go from nothing to a working cluster. You should probably put aside a couple of days to achieve this goal, depending on your technical level and the various issues that you may come across along the way.

There are a few things to note about a bare Kubernetes setup:

There is no out-of-the-box support for identity federation (i.e., to allow you to use your Google, LDAP, or Active Directory login)
Kubernetes does not provide a High Availability mode by default; to create a fault-tolerant cluster, you will have to manually configure HA for your etcd cluster, kube-apiserver, load balancers, worker nodes, etc.
A good tip at this stage is to configure your kubelet servers to initiate garbage collection based on the number of free inodes. Kubernetes has default values for available memory and available space, but you could still run out of inodes before those. You can use an argument such as the following in order to trigger garbage collection based on free inodes:

--eviction-hard=memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%
Tips for Using Your Kubernetes Cluster
Below are a few things to consider when implementing your Kubernetes cluster.

Namespace Limits
A useful thing to do with your newly installed Kubernetes cluster is to configure default limits for namespaces. This will prevent problems if, for example, your app has a memory leak. In such a case, a pod running your app might crash a worker node, or at least make it slow and unresponsive. An example of a configuration file, which you would apply to the namespace of your choice, would be:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
The next tip is that you should make sure to configure your users properly in order to fulfill the Principle of Least Privilege. Broadly speaking, you will have at least two categories of users: administrators and deployers. Administrators will be confined to a given namespace and will have access to the namespace’s secrets and administrative features. Deployers will also be confined to a given namespace and will have just enough privileges to perform deployments. If you need to access the Kubernetes API from your applications or scripts, you should create service accounts with just enough privileges for what you need to do.
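As a minimal sketch of this principle – the namespace, role, and service account names below are hypothetical – a ‘deployer’ role can be granted just enough access to manage Deployments, then bound to a service account used by your CI pipeline:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a        # hypothetical namespace
  name: deployer
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: deployer-binding
subjects:
- kind: ServiceAccount
  name: ci-deployer        # hypothetical service account
  namespace: team-a
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io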

Use the Declarative Approach
Kubernetes allows you to use it both through an imperative approach and a declarative approach. The imperative approach means that you are telling Kubernetes what to do, for example, to create a Pod, a Service, etc. The declarative approach means that you write a set of YAML files to describe your desired end state, and you let Kubernetes make the decision for you as to how to achieve that end goal. Generally speaking, unless you are testing or debugging, you should use the declarative approach because it allows you to focus on what really matters (which is your end goal) and because this method is reproducible (as opposed to a series of kubectl commands for the imperative approach, which are difficult to reproduce and error-prone).

You can deploy applications to a bare Kubernetes cluster by using the Deployment objects. For the imperative approach, you can use the kubectl create deployment command, although, as discussed above, this approach is usually suboptimal. For the declarative approach, you would write a YAML file describing the deployment object and update the Kubernetes cluster by using the kubectl apply command.
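A minimal example – the app name and image here are illustrative – would be a YAML file such as the following, applied with kubectl apply -f web-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.17  # any container image would do
        ports:
        - containerPort: 80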

Dealing with Docker Tags
One issue that many people run into is when they reuse the “latest” tag for their Docker images and are surprised that Kubernetes doesn’t update their Pods. The reason for this is essentially that Kubernetes (wrongly) considers Docker tags as immutable (i.e., once a tag is set, it is never changed). There are some workarounds for this issue, although the most sensible one would be to tag all of your Docker images with a specific tag (e.g., a git commit hash), which will also help you in your housekeeping of the Docker images.
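A simple way to do this – the image and registry names below are placeholders – is to derive the tag from the current git commit:

# Tag images with the short commit hash instead of reusing "latest"
TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp:$TAG .
docker push registry.example.com/myapp:$TAG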

Configuring Pod Disruption Budgets
The final tip in this section is to use Pod disruption budgets. This allows your cluster to maintain high availability, especially during a deployment. By using Pod disruption budgets, you maximize your chances of your cluster maintaining the availability of your app at all times. A YAML specification of a Pod disruption budget would look like this:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: app-a-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: app-a
Tips for Monitoring Your Cluster
Having visibility over how your cluster is performing is critical to ensure smooth business operations. The two-layer nature of an orchestration tool makes this more complicated, as you have many more metrics to look after.

Built-In Solution
Kubernetes has a very crude built-in solution to collect and retrieve the logs emitted by Docker containers. The logs are essentially stored on the worker node and retrieved using the kubectl logs command. You even have to set up logrotate yourself, and kubectl logs doesn’t fetch logs from rotated log files. This is good enough for toying around, but you definitely need something more capable for a production workload.

Cluster-Level Logging
Cluster-level logging is available through node-level logging agents. On a DIY Kubernetes cluster, there are no such agents pre-installed for you, so you will have to do the hard work of installing and maintaining them yourself. You probably want to use a DaemonSet to achieve this (a DaemonSet is akin to a Deployment but ensures that one Pod runs on each worker node). From there, you can set up your cluster to manage its logs using the ELK stack or Stackdriver.
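As a sketch of what such an agent looks like – the Fluentd image tag is an assumption, and you would still need to configure outputs for your chosen backend – a node-level logging DaemonSet mounts the node’s log directory into the agent container:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.7-1   # assumed tag; configure outputs separately
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log               # node logs made visible to the agent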

Proper Auditing
One good tip is to configure auditing to allow you to keep track of API calls made to your Kubernetes cluster. This is very important from a security perspective, and this might even be mandated by your organization’s policies or to comply with a certain standard. Auditing is configured through auditing policies, and you will need to configure a backend where all the auditing trails will be stored.

Node Problem Detection
Another tip is that you should enable the node problem detector on each node, most probably using a DaemonSet. This will alert you whenever a worker node has a problem that hampers its normal performance. The Kubernetes official documentation will walk you through setting this up.

Metrics
In terms of monitoring metrics, Prometheus is quite a popular solution for a Kubernetes cluster, and it integrates well with Kubernetes. Plenty of tutorials will guide you through setting it up. Once Prometheus is collecting and storing metrics, you can use Grafana for visualization and AlertManager to alert you when things go wrong.

Maintaining and Scaling Your Cluster
Your Pods are running on servers, and you still have to manage and maintain those servers. So it is best to see Kubernetes as a deployment platform and a scheduler for Docker, but not as a black box where you can run your containers without having to think or concern yourself with the underlying infrastructure.

Master and Worker Nodes
The master and worker nodes still need maintenance. The most common operation would be to patch and update the operating system. Unfortunately, this is outside the scope of Kubernetes itself, as Kubernetes limits itself to kubelet and any Pods running on the worker node. This means that it is up to you to handle this task, which is absolutely necessary if only from a security standpoint. Updating the OS is the bare minimum required to ensure you don’t leave yourself vulnerable to published exploits.

If you used a third party to create and manage your cluster, it will in all likelihood have solutions in place to do this for you. It will also most probably be automated so that you don’t have to worry about it.

Auto-Scaling
Scaling your cluster happens on both layers. First, to scale your Pods out and in, Kubernetes comes with the Horizontal Pod Autoscaler. You will have to configure a metric that is relevant to your app and that measures your cluster’s current workload as accurately as possible. CPU usage is commonly used, but it could be outbound network traffic or something more complex, such as the average time taken to service a request. Kubernetes also has a Vertical Pod Autoscaler, which is mostly used for stateful Pods, such as a database engine.
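For illustration, a Horizontal Pod Autoscaler scaling a hypothetical web Deployment on CPU usage could be declared like this, using the autoscaling/v2beta2 API:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU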

The second layer of auto-scaling is for the worker nodes themselves, which is provided by the Cluster Autoscaler. The Cluster Autoscaler will monitor the state of the Pods and will communicate with the underlying cloud vendor to create or delete worker nodes depending on the Pods’ needs. Currently, the Cluster Autoscaler supports the major cloud vendors, such as AWS, GCP, and Azure.

Using a Managed Solution for Kubernetes
Managed solutions provided by third parties can really help by hiding away a lot of Kubernetes’ intricacies and freeing up time for your DevOps team to focus on more important, higher-level tasks.

GKE
Google (which initially developed Kubernetes and is still very heavily involved with it) offers the Google Kubernetes Engine (GKE) service as part of its Google Cloud Platform (GCP). GKE is very easy to set up and administer, and the fact that Google both developed Kubernetes and offers a managed service for it might explain why Kubernetes is so well integrated into the GCP ecosystem.

EKS
AWS also offers a managed Kubernetes solution: Amazon Elastic Kubernetes Service (EKS). The setup of an EKS cluster is rather more involved than GKE’s. It will require you to use some CloudFormation templates provided by AWS, adapt them to your needs, and deploy a CloudFormation stack to set up your Kubernetes cluster infrastructure. The advantage of EKS is that it is nicely integrated with other AWS services, such as IAM and CloudWatch.

ECS
Finally, AWS Elastic Container Service (ECS) is worth mentioning. ECS is the proprietary offering from AWS to run Docker containers and is not based on Kubernetes. AWS released Fargate in 2017 as an incremental step in managing Docker-based clusters. Fargate works as a backend for ECS, which completely frees you from having to manage the underlying infrastructure on which Docker containers are running – you just have to focus on the container layer. Fargate also automatically allocates worker nodes to run your containers, so the underlying layer is all managed for you.

Conclusion
One final tip: If your objective is to play with Kubernetes and learn, don’t forget that you can use minikube. This great tool will get a miniature Kubernetes cluster running on your desktop very quickly and easily.

Creating, configuring, and managing a DIY Kubernetes cluster is hard and not for the faint of heart. Prepare yourself for a steep learning curve and hone your internet research skills, as these will be part of your daily life. Unless your DevOps team is made up of nerds and experts, you might want to consider making your life easier by using the Helm package manager or one of the many vendor-packaged solutions, such as Google Kubernetes Engine or Amazon EKS.

What is AWS GovCloud (US)?

Government agencies and enterprises in regulated industries can take advantage of Amazon GovCloud – an isolated and secure AWS region, which complies with the stringent security regulations of the US Government.


The US Government, with its various agencies and contractors, is subject to different security compliance regulations than enterprises that operate in the private sector. This has been one of the reasons why cloud deployments in highly regulated environments have progressed at a slower pace.


To facilitate cloud adoption for the US Government, Amazon Web Services launched AWS GovCloud – a new isolated and secure AWS region, specifically structured to accommodate workflows for U.S. government agencies and contractors.

AWS GovCloud (US) is designed specifically for agencies at the federal, state, and local levels, as well as organizations in government regulated industries, such as Defense, Law Enforcement, Energy, Aerospace, Healthcare, Financials, and many more.


GovCloud facilitates customers with stringent regulatory and compliance requirements, such as:

FedRAMP High Baseline (Federal Risk and Authorization Management Program)
ITAR (International Traffic in Arms Regulations)
DoD SRG (Department of Defense Cloud Computing Security Requirements Guide)
CJIS (Criminal Justice Information Services Security Policy and Addendum)
HIPAA (Health Insurance Portability and Accountability Act)

AWS GovCloud (US) is operated by vetted employees and allows access only to account holders who have been confirmed as “U.S. Persons”. Network, data, and virtual machines in GovCloud are isolated from all other AWS Cloud regions. GovCloud features a separate identity and access management stack with unique credentials, which only work within the GovCloud region, and comes with a dedicated management console, as well as endpoints that are specific to the AWS GovCloud region.

Customers with regulated IT workloads can now move sensitive data into the cloud with the agility and scalability of the AWS cloud platform.

6 Benefits of Migrating to AWS Cloud

IDC predicted that by 2019, small to medium-sized businesses would contribute 40% of all money spent on cloud services. Today, the prophecy has come true and 90% of businesses are using cloud computing in some capacity.

Some of the largest organisations including Comcast, Dow Jones and Adobe are using AWS to fulfil their cloud requirements.


Let’s have a look at a few advantages of migrating to AWS cloud.

Eliminating Infrastructure Costs:
AWS only charges for the resources used. As businesses expand, the demands on their IT infrastructure increase; with AWS, system administration tasks and their costs are eliminated, so owners can focus on running the business.


Feasible Data Storage:

AWS provides the flexibility to access files from any device, in any place, at any time. It eliminates extra expense and gives a business effectively unlimited storage while paying only for actual usage.

Improved Security:
AWS has a robust set of security features that meets all requirements. By migrating to AWS, you can take advantage of the company’s extensive security expertise and no longer need resources to create and constantly adjust security practices manually.

Fewer Issues with Applications:
Having your applications hosted on AWS means easy deployment, management, scaling, monitoring, capacity provisioning, and load balancing. As a result, a small business can run its apps efficiently with minimal issues.

Disaster Recovery:
Data protection is important to any organization, being an integral part of business continuity planning. The cloud backs up data to a safe and secure location and protects it from sudden failure, natural disaster, or other crises.

Robust Solutions for Mobility:
With iOS and Android versions of the AWS Management Console mobile app, small businesses have anytime/anywhere access to these services. This gives them the ability to design and create features that target mobile devices.

The Highlights from AWS in January 2020

This year is off to an exciting start with all of the new AWS announcements! This blog is not intended to be an exhaustive list of all January announcements, but rather a curation of a few announcements we feel could benefit the enterprise thought leaders working to drive cloud adoption and efficiency within their organizations. Some are helpful, some are vital, and some are interesting. Let’s spend a few minutes looking at AWS Systems Manager Change Calendar, Amazon Elastic File System (EFS) announcements, AWS Backup announcements, and a 50% price reduction for Amazon Elastic Kubernetes Service (EKS) clusters!


AWS Systems Manager Change Calendar
AWS Systems Manager launched in late 2017 and provides system administrators a cloud native way to view operational data from a variety of AWS services and automate operational tasks against those services, all from a unified user interface. Resources can be grouped logically by application, application layer, or by environment such as production or QA. API activity, resource configuration changes, related notifications, operational alerts, software inventory, and patch compliance status are all things that can be viewed and acted on via resource groups within AWS Systems Manager.


One thing that would make AWS Systems Manager more useful and help to drive adoption for enterprise users would be the ability to not only automate operational tasks, but to calendar those tasks with a date and time range for execution. Even better still, what if you could calendar a date and time range where no changes should be made to the system? This is exactly the functionality that AWS announced in January with the addition of AWS Systems Manager Change Calendar.

In AWS Systems Manager Change Calendar, when you create a Change Calendar entry, you are creating a Systems Manager document of the type ChangeCalendar. The iCalendar 2.0 formatted information for the event becomes part of this document. After the calendar event is created, you can view your calendars in the AWS console. AWS Systems Manager displays your calendar entry in the Change Calendar list. You can also get information about your calendar events programmatically using the GetCalendarState API or get-calendar-state AWS CLI command. You can retrieve present, past, and future state conditions of the calendar events. This is useful because resources that make operational changes to the environment, such as Lambda functions, can check the calendar programmatically before taking action, to make sure the window of opportunity is open for the relevant date/time range.
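For example – with the calendar document name below as a placeholder – an automation script can ask whether the calendar is currently OPEN or CLOSED before proceeding:

# Returns the calendar state (OPEN or CLOSED); --at-time checks a future point
aws ssm get-calendar-state \
  --calendar-names "MyChangeCalendar" \
  --at-time "2020-02-01T08:00:00Z"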

Amazon EFS Announcements
AWS IAM for NFS

Amazon EFS now supports AWS Identity and Access Management (IAM) for Network File System (NFS) clients. The ability to control NFS client permissions with IAM policies greatly simplifies the management of NFS access at scale. It also provides the ability to manage access using the same methods you use today for access management for other AWS resources. Anything that lightens the load of and fosters more confidence in access control processes should make the list of any enterprise user seeking to drive AWS adoption in their organization.

Amazon EFS Single File Restore
One notable announcement that increases the utility of Amazon EFS for administrators is the ability to restore a single file from Amazon EFS. Previously, you would have to wait for the entire file system to restore a single file contained within. From the Amazon EFS vault within AWS Backup, in addition to “Full Restore” you’ll see “Item-Level Restore” for single file restore capability. Choose your restore location and restore role as usual and you’re off! Pricing is a fixed fee based on the number of bytes you restore. Accidental deletion of a single file? No problem. A single file becomes corrupted? No sweat. With Amazon EFS Single File Restore, recovering these individual files just got easier.

AWS Backup Announcements
Cross-Region Backups

AWS Backup is a wonderful service announced last year that allows you to back up Amazon EBS volumes, Amazon RDS databases, Amazon DynamoDB tables, as well as Amazon EFS file systems. It is a fully managed, centralized backup service that is protecting petabytes of data in AWS. In January, AWS announced the ability to back up to a secondary region from within the AWS Backup service, allowing for a fully managed, cross-region backup service in AWS. Copy to a secondary region either on-demand when you need it or automatically, as part of a backup plan. This makes the service particularly useful to enterprises that have internal or compliance framework requirements that place a geographical separation requirement on a copy of backup data. Now you can meet those requirements natively within the AWS Backup service. Administrators can use the AWS console, CLI, or AWS SDKs to initiate the copy.
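As a sketch of the on-demand route – the ARNs, vault names, and IAM role below are placeholders – a cross-region copy can be started from the CLI:

# Copy a recovery point from us-east-1 into a vault in us-west-2
aws backup start-copy-job \
  --recovery-point-arn arn:aws:ec2:us-east-1::snapshot/snap-0123456789abcdef0 \
  --source-backup-vault-name Default \
  --destination-backup-vault-arn arn:aws:backup:us-west-2:111122223333:backup-vault:Default \
  --iam-role-arn arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole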

Full Amazon Elastic Compute Cloud (EC2) Backup
Prior to January’s launch of full Amazon EC2 backup for the AWS Backup service, it was only the Amazon EBS volumes that got handled via AWS Backup. This announcement now means that you can use the AWS Backup service to back up the entire Amazon EC2 instance, not just the volumes. All parameters get backed up from the instance excluding user data scripts and Amazon Elastic Inference accelerators. The Amazon EBS volumes are still protected, but they get attached to an AMI containing the instance level parameters such as instance type, VPC, security group, IAM role, etc. Restoration of the instance can be accomplished with the console, CLI, or API, with edits to the original available in the restore process. It just keeps getting easier!

Amazon EKS Price Reduction
Amazon announced a 50% price reduction for Amazon EKS, to $0.10 an hour for every Amazon EKS cluster that you run. With the adoption of Amazon EKS in the enterprise in recent years, this is welcome news to readers of this blog!

How do AWS developers manage Web apps?

When it comes to hosting and building a website on the cloud, Amazon Web Services (AWS) is one of the most preferred choices for developers. According to Canalys, AWS is dominating the global public cloud market, holding around one-third of the total market share.

AWS offers numerous services that can be used for compute power, content delivery, database storage, and more. Developers can use it to build a high-availability production website, whether it is a WordPress site, Node.js web app, LAMP stack web app, Drupal website, or a Python web app.


AWS developers need to set up, maintain and evolve the cloud infrastructure of web apps. Aside from this, they are also responsible for applying best practices related to security and scalability.

Having said that, let’s take a deep dive into how AWS developers manage a web application.

Deploying a website or web app with Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) offers developers a secure and scalable computing capacity in the cloud. For hosting a website or web app, the developers need to use virtual app servers called instances.

With Amazon EC2 instances, developers gain complete control over computing resources. They can scale the capacity on the basis of requirements and pay only for the resources they actually use. There are tools like AWS Lambda, Elastic Beanstalk and Lightsail that allow the isolation of web apps from common failure cases.

Amazon EC2 supports a number of main operating systems, including Amazon Linux, Windows Server 2012, CentOS 6.5, and Debian 7.4.

Here is how developers get themselves started with Amazon EC2 for deploying a website or web app.

1. Set up an AWS account and log into it.
2. Select “Launch Instance” from the Amazon EC2 Dashboard; this enables the creation of a VM.
3. Configure the instance by choosing an Amazon Machine Image (AMI), instance type and security group.
4. Click on Launch.
5. Choose ‘Create a new key pair’ and name it. A key pair file is downloaded automatically and needs to be saved, as it will be required for logging in to the instance.
6. Click on ‘Launch Instances’ to finish the set-up process.

Once the instance is ready, it can be used to build high-availability websites or web apps.
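The same launch can also be scripted with the AWS CLI – the AMI, key pair, and security group IDs below are placeholders for your own values:

# Launch a single instance from a chosen AMI
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t3.micro \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0 \
  --count 1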

Using Amazon S3 for cloud storage
Amazon Simple Storage Service (Amazon S3) is a secure and highly scalable cloud storage solution that makes web-scale computing seamless for developers. It is used to store the objects required to build a website, such as HTML pages, images, CSS files, videos and JavaScript.

S3 comes with a simple interface so that developers can fetch and store large amounts of data from anywhere on the internet, at any time. The storage infrastructure provided with Amazon S3 is known for scalability, reliability, and speed. Amazon itself uses this storage option to host its own websites.

Within S3, developers need to create buckets for data storage. Each bucket can store a large amount of data, allowing developers to upload a high number of objects into it. A single object can contain up to 5 TB of data. Objects are stored in and fetched from a bucket using a unique key.

A bucket serves several purposes. It can be used to organize the S3 namespace, identify the account responsible for storage and data transfer charges, and serve as the unit of aggregation for usage reporting.
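As a quick sketch with the AWS CLI – the bucket name is a placeholder and must be globally unique – creating a bucket, uploading an object, and fetching it back by its key looks like this:

aws s3api create-bucket --bucket my-site-assets-example --region us-east-1
aws s3 cp ./index.html s3://my-site-assets-example/index.html
aws s3 cp s3://my-site-assets-example/index.html ./index-copy.html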

Elastic load balancing
Load balancing is a critical part of a website or web app, distributing and balancing the traffic load across multiple targets. AWS provides Elastic Load Balancing, which allows developers to distribute traffic across a number of targets, such as Amazon EC2 instances, IP addresses, Lambda functions and containers.

With Elastic Load Balancing, developers can ensure that their projects run efficiently even when there is heavy traffic. There are three kinds of load balancers available: Application Load Balancer, Network Load Balancer and Classic Load Balancer.

Application Load Balancer is an ideal option for HTTP and HTTPS traffic, providing advanced request routing for the delivery of microservices and containers. For balancing Transmission Control Protocol (TCP), Transport Layer Security (TLS) and User Datagram Protocol (UDP) traffic, developers opt for Network Load Balancer. The Classic Load Balancer, meanwhile, is best suited for typical load distribution across EC2 instances, and works at both the request and connection level.
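For illustration – the subnet and security group IDs below are placeholders – an Application Load Balancer can be created with the CLI like this:

aws elbv2 create-load-balancer \
  --name my-web-alb \
  --type application \
  --subnets subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
  --security-groups sg-0123456789abcdef0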

Debugging and troubleshooting
A web app or website can include numerous features and components. Often, a few of them might face issues or not work as expected, because of coding errors or other bugs. In such cases, AWS developers follow a number of processes and techniques and check the useful resources that help them to debug a recipe or troubleshoot the issues.

  • See the service issue at Common Debugging and Troubleshooting Issues.
  • Check the Debugging Recipes for issues related to recipes.
  • Check the AWS OpsWorks Stack Forum. It is a forum where other developers discuss their issues. AWS team also monitors these issues and helps in finding the solutions.
  • Get in touch with AWS OpsWorks Stacks support team to solve the issue.

Traffic monitoring and analysis
Analysing and monitoring traffic and network logs helps in understanding the way websites and web apps perform on the internet.

AWS provides several tools for traffic monitoring, including Real-Time Web Analytics with Kinesis Data Analytics, Amazon Kinesis, Amazon Pinpoint and Amazon Athena.

For tracking of website metrics, the Real-Time Web Analytics with Kinesis Data Analytics is used by developers. This tool provides insights into visitor counts, page views, time spent by visitors, actions taken by visitors, channels driving the traffic and more.

Additionally, the tool comes with an optional dashboard which can be used for monitoring of web servers. Developers can see custom metrics of the servers to learn about server performance, average network packet processing, errors, etc.

Kubernetes on AWS and How Kubernetes Works

Kubernetes is open source software that allows you to deploy and manage containerized applications at scale. Kubernetes manages clusters of Amazon EC2 compute instances and runs containers on those instances with processes for deployment, maintenance, and scaling. Using Kubernetes, you can run any type of containerized applications using the same toolset on-premises and in the cloud.
AWS makes it easy to run Kubernetes in the cloud with scalable and highly-available virtual machine infrastructure, community-backed service integrations, and Amazon Elastic Kubernetes Service (EKS), a certified conformant, managed Kubernetes service.


HOW KUBERNETES WORKS
Kubernetes works by managing a cluster of compute instances and scheduling containers to run on the cluster based on the available compute resources and the resource requirements of each container. Containers are run in logical groupings called pods and you can run and scale one or many containers together as a pod.

To get In depth Knowledge On AWS you can enroll for free live demo AWS Online Training

Kubernetes control plane software decides when and where to run your pods, manages traffic routing, and scales your pods based on utilization or other metrics that you define. Kubernetes automatically starts pods on your cluster based on their resource requirements and automatically restarts pods if they or the instances they are running on fail. Each pod is given an IP address and a single DNS name, which Kubernetes uses to connect your services with each other and external traffic.
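As a minimal illustration – the names and images here are placeholders – the pod below groups an application container with a helper container; both share the pod’s IP address and can reach each other over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: app
    image: nginx:1.17
    ports:
    - containerPort: 80
  - name: helper
    image: busybox:1.31
    # Placeholder sidecar that simply stays alive
    command: ["sh", "-c", "while true; do sleep 3600; done"]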

WHY USE KUBERNETES
Because Kubernetes is an open source project, you can use it to run your containerized applications anywhere without needing to change your operational tooling. Kubernetes is maintained by a large community of volunteers and is always improving. Additionally, many other open source projects and vendors build and maintain Kubernetes-compatible software that you can use to improve and extend your application architecture.

Run Applications At Scale
Kubernetes lets you define complex containerized applications and run them at scale across a cluster of servers.

Seamlessly Move Applications
Using Kubernetes, containerized applications can be seamlessly moved from local development machines to production deployments on the cloud using the same operational tooling.
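
In day-to-day use this amounts to switching kubectl contexts while keeping the manifests identical (the context names here are hypothetical):

# See the clusters this machine can talk to
kubectl config get-contexts

# Apply the same manifest to a local cluster and then to a cluster on AWS
kubectl config use-context minikube && kubectl apply -f app.yaml
kubectl config use-context my-eks-cluster && kubectl apply -f app.yaml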

Run Anywhere
Run highly available and scalable Kubernetes clusters on AWS while maintaining full compatibility with your Kubernetes deployments running on-premises.

Add New Functionality
Because Kubernetes is an open source project, adding new functionality is straightforward. A large community of developers and companies builds extensions, integrations, and plugins that help Kubernetes users do more.

Amazon EKS Kubernetes Versions

The Kubernetes project is rapidly evolving with new features, design updates, and bug fixes. The community releases new Kubernetes minor versions, such as 1.14, as generally available approximately every three months, and each minor version is supported for approximately one year after it is first released.

Available Amazon EKS Kubernetes Versions
The following Kubernetes versions are currently available for new clusters in Amazon EKS:

  • 1.14.9
  • 1.13.12
  • 1.12.10
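
When creating a new cluster, the version is chosen explicitly. A sketch with the AWS CLI (the cluster name, role ARN and subnet IDs are placeholders):

aws eks create-cluster \
    --name my-cluster \
    --kubernetes-version 1.14 \
    --role-arn arn:aws:iam::111122223333:role/eks-service-role \
    --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb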

Important
Kubernetes version 1.11 is no longer supported on Amazon EKS. You can no longer create new 1.11 clusters, and all existing Amazon EKS clusters running Kubernetes version 1.11 will eventually be automatically updated to the latest available platform version of Kubernetes version 1.12. For more information, see Amazon EKS Version Deprecation.

Please update any 1.11 clusters to version 1.12 or higher in order to avoid service interruption. For more information, see Updating an Amazon EKS Cluster Kubernetes Version.

Unless your application requires a specific version of Kubernetes, we recommend that you choose the latest available Kubernetes version supported by Amazon EKS for your clusters. As new Kubernetes versions become available in Amazon EKS, we recommend that you proactively update your clusters to use the latest available version. For more information, see Updating an Amazon EKS Cluster Kubernetes Version.
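
Upgrading the control plane is a single API call; remember that worker nodes must be updated separately afterwards (the cluster name is a placeholder, and Amazon EKS upgrades one minor version at a time):

aws eks update-cluster-version --name my-cluster --kubernetes-version 1.14

# Poll the update until its status is Successful
aws eks describe-update --name my-cluster --update-id <update-id>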

Kubernetes 1.14
Kubernetes 1.14 is now available in Amazon EKS. For more information about Kubernetes 1.14, see the official release announcement.

Important
The --allow-privileged flag has been removed from the kubelet on Amazon EKS 1.14 worker nodes. If you have modified or restricted the Amazon EKS Default Pod Security Policy on your cluster, you should verify that your applications have the permissions they need on 1.14 worker nodes.
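
A quick way to review what is currently allowed (eks.privileged is the default policy name referenced later in this document):

# List the pod security policies on the cluster, then inspect the default one
kubectl get psp
kubectl describe psp eks.privileged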

The following features are now supported in Kubernetes 1.14 Amazon EKS clusters:

Container Storage Interface Topology is in beta for Kubernetes version 1.14 clusters. For more information, see CSI Topology Feature in the Kubernetes CSI Developer Documentation. The following CSI drivers provide a CSI interface for container orchestrators like Kubernetes to manage the lifecycle of Amazon EBS volumes, Amazon EFS file systems, and Amazon FSx for Lustre file systems:

  • Amazon Elastic Block Store (EBS) CSI driver
  • Amazon EFS CSI Driver
  • Amazon FSx for Lustre CSI Driver
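
As a sketch of how one of these drivers is installed (the manifest URL reflects the EBS CSI driver's documented stable overlay at the time of writing and may change upstream):

# Deploy the EBS CSI driver using the kustomize support built into kubectl 1.14+
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"

# Confirm the controller and node pods are running
kubectl get pods -n kube-system | grep ebs-csi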

Process ID (PID) limiting is in beta for Kubernetes version 1.14 clusters. This feature allows you to set quotas for how many processes a pod can create, which can prevent resource starvation for other applications on a cluster. For more information, see Process ID Limiting for Stability Improvements in Kubernetes 1.14.
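
Because this is a kubelet setting, on Amazon EKS it is typically switched on through the worker nodes' kubelet arguments. A sketch assuming the EKS-optimised AMI's bootstrap script (the cluster name and the limit of 100 are illustrative):

# In the worker node user data, enable the feature gate and cap PIDs per pod
/etc/eks/bootstrap.sh my-cluster \
    --kubelet-extra-args '--feature-gates=SupportPodPidsLimit=true --pod-max-pids=100'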

Persistent Local Volumes are now GA and make locally attached storage available as a persistent volume source. For more information, see Kubernetes 1.14: Local Persistent Volumes GA.
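
A minimal local persistent volume sketch (the name, disk path and node hostname are placeholders); note that a local volume must be pinned to the node that owns the disk:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1            # locally attached disk on the node
  nodeAffinity:                      # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ip-10-0-1-20.ec2.internal
EOF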

Pod Priority and Preemption is now GA and allows pods to be assigned a scheduling priority level. For more information, see Pod Priority and Preemption in the Kubernetes documentation.
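
A sketch of a priority class (the name, value and description are hypothetical); pods opt in by setting priorityClassName in their spec:

cat <<'EOF' | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000                       # higher values schedule, and preempt, first
globalDefault: false
description: "For latency-critical workloads"
EOF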

Windows worker node support is GA with Kubernetes 1.14.

Kubernetes 1.13
The following features are now supported in Kubernetes 1.13 Amazon EKS clusters:

The PodSecurityPolicy admission controller is now enabled. This admission controller allows fine-grained control over pod creation and updates. For more information, see Pod Security Policy. If you do not have any pod security policies defined in your cluster when you upgrade to 1.13, then Amazon EKS creates a default policy for you.

Important
If you have any pod security policies defined in your cluster, the default policy is not created when you upgrade to Kubernetes 1.13. If your cluster does not have the default Amazon EKS pod security policy, your pods may not be able to launch if your existing pod security policies are too restrictive. You can check for any existing pod security policies with the following command:

kubectl get psp
If your cluster has any pod security policies defined, you should also make sure that you have the default Amazon EKS pod security policy (eks.privileged) defined. If not, you can apply it by following the steps in To install or restore the default pod security policy.

Amazon ECR interface VPC endpoints (AWS PrivateLink) are supported. When you enable these endpoints in your VPC, all network traffic between your VPC and Amazon ECR is restricted to the Amazon network. For more information, see Amazon ECR Interface VPC Endpoints (AWS PrivateLink) in the Amazon Elastic Container Registry User Guide.
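
Enabling them is a matter of creating two interface endpoints, one for the ECR API and one for the Docker registry (the region and all IDs below are placeholders):

aws ec2 create-vpc-endpoint \
    --vpc-id <vpc-id> \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.ecr.api \
    --subnet-ids <subnet-id> --security-group-ids <sg-id>

# Repeat with service name com.amazonaws.us-east-1.ecr.dkr for image pulls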

The DryRun feature is in beta in Kubernetes 1.13 and is enabled by default for Amazon EKS clusters. For more information, see Dry run in the Kubernetes documentation.
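
With kubectl of this generation the feature surfaces as the --server-dry-run flag on apply (renamed to --dry-run=server in later releases):

# Validate a manifest against the live API server without persisting anything
kubectl apply -f app.yaml --server-dry-run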

The TaintBasedEvictions feature is in beta in Kubernetes 1.13 and is enabled by default for Amazon EKS clusters. For more information, see Taint based Evictions in the Kubernetes documentation.
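
Pods can control how long they survive such taints through tolerationSeconds; a fragment of a pod spec (the 120-second window is illustrative):

tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 120             # evict 120s after the node becomes unreachable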

Raw block volume support is in beta in Kubernetes 1.13 and is enabled by default for Amazon EKS clusters. This is accessible via the volumeDevices container field in pod specs, and the volumeMode field in persistent volume and persistent volume claim definitions. For more information, see Raw Block Volume Support in the Kubernetes documentation.
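
A sketch tying the two fields together (names are hypothetical): the claim requests a raw device, and the pod attaches it at a device path rather than a filesystem path:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block                  # ask for the device itself, not a filesystem
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: raw-block-pod
spec:
  containers:
  - name: app
    image: busybox:1.31
    command: ["sh", "-c", "sleep 3600"]
    volumeDevices:                   # device path, in place of volumeMounts
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: raw-block-pvc
EOF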

Node lease renewal is treated as the heartbeat signal from the node, in addition to its NodeStatus update. This reduces load on the control plane for large clusters.
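
If the feature is active on your cluster, the heartbeats are visible as Lease objects:

# One lease per node, renewed on each heartbeat
kubectl get leases -n kube-node-lease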

Amazon EKS Version Deprecation
In line with the Kubernetes community support for Kubernetes versions, Amazon EKS is committed to running at least three production-ready versions of Kubernetes at any given time, with a fourth version in deprecation.

We will announce the deprecation of a given Kubernetes minor version at least 60 days before the deprecation date. Because of the Amazon EKS qualification and release process for new Kubernetes versions, the deprecation of a Kubernetes version on Amazon EKS will be on or after the date the Kubernetes project stops supporting the version upstream.

On the deprecation date, Amazon EKS clusters running the version targeted for deprecation will begin to be updated to the next Amazon EKS-supported version of Kubernetes. This means that if the deprecated version is 1.11, clusters will eventually be automatically updated to version 1.12. If a cluster is automatically updated by Amazon EKS, you must also update the version of your worker nodes after the update is complete. For more information, see Worker Node Updates.

Kubernetes supports compatibility between masters and workers for at least 2 minor versions, so 1.11 workers will continue to operate when orchestrated by a 1.12 control plane. For more information, see Kubernetes Version and Version Skew Support Policy in the Kubernetes documentation.
