AWS’s Managed Kubernetes Service

Amazon EKS is a hosted Kubernetes solution that helps you run your container workloads in AWS without having to manage the Kubernetes control plane for your cluster. This is a great entry point for Kubernetes administrators who are looking to migrate to AWS services but want to continue using the tooling they are already familiar with. Often, users are choosing between Amazon EKS and Amazon ECS (which we recently covered, in addition to a full container services comparison), so in this article, we’ll take a look at some of the basics and features of EKS that make it a compelling option.

Amazon EKS 101

The main selling point of Amazon EKS is that the Kubernetes control plane is managed for you by AWS, so you don’t have to set up and run your own. When you set up a new cluster in EKS, you can specify whether it will be available only within the current VPC or accessible to outside IP addresses. This flexibility highlights the two main deployment options for EKS:

1. Fully within an AWS VPC, with complete integration with the other AWS services you run in your account while remaining completely isolated from the outside world.
2. Open and accessible, which enables hybrid-cloud, multi-cloud, or multi-account Kubernetes deployments.

Both options allow you the flexibility to use your own Kubernetes management tools, like the Kubernetes Dashboard and kubectl, since EKS gives you the API server endpoint once you provision the cluster. The control plane runs across multiple Availability Zones within the Region you choose for redundancy.
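
As a quick illustration (this snippet is not from the original article), you can fetch that endpoint programmatically with boto3, the AWS SDK for Python; the cluster name is a placeholder:

import boto3

eks = boto3.client("eks", region_name="us-east-1")

# "demo-cluster" is a placeholder; substitute the name of a cluster you have provisioned.
cluster = eks.describe_cluster(name="demo-cluster")["cluster"]

print("API server endpoint:", cluster["endpoint"])
print("Status:", cluster["status"])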

Managed Container Showdown: EKS vs. ECS

Amazon offers two main container service options in EKS and ECS. The biggest difference between the two lies in who manages the orchestration layer. With ECS, Amazon runs its own proprietary orchestrator for you, and you just decide which tasks to run and when. With EKS, AWS manages the Kubernetes control plane, while you manage the Kubernetes side of things, such as your pods and worker nodes.

One consideration when comparing EKS and ECS is networking and load balancing. Both services run EC2 servers behind the scenes, but the actual network connection is slightly different. ECS attaches network interfaces to individual tasks on each EC2 instance, while EKS attaches network interfaces that serve multiple pods on each EC2 instance. Similarly, for load balancing, ECS can use Application Load Balancers to send traffic to a task, while EKS must use an Elastic Load Balancer to send traffic to an EC2 host (which Kubernetes can then proxy to the right pod). Neither is necessarily better or worse, just a slight difference that may matter for your workload.

Sounds Great… How Much Does It Cost?

For each workload you run in Amazon EKS, there are two main charges that apply. First, there’s a charge of $0.20/hr (roughly $146/month) for each EKS control plane you run in your AWS account. Second, you’re charged for the underlying EC2 resources that are spun up by the Kubernetes controller. This second charge is very similar to how Amazon ECS charges you and is highly dependent on the size and number of resources you need.
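
For a rough back-of-the-envelope estimate, the math looks like this (the worker-node rate below is a made-up placeholder; only the control-plane rate comes from the figure above):

# Rough monthly cost estimate for a single EKS cluster (illustrative numbers only).
HOURS_PER_MONTH = 730

control_plane_rate = 0.20   # USD per hour, per the figure quoted above
node_rate = 0.0928          # hypothetical on-demand rate for one worker node
node_count = 3

control_plane = control_plane_rate * HOURS_PER_MONTH
workers = node_rate * node_count * HOURS_PER_MONTH

print("Control plane: $%.2f/month" % control_plane)  # roughly $146
print("Workers:       $%.2f/month" % workers)
print("Total:         $%.2f/month" % (control_plane + workers))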

Amazon EKS Best Practices

There’s no one-size-fits-all option for Kubernetes deployments, but Amazon EKS certainly has some good things going for it. If you’re already using Kubernetes, this can be a great way to seamlessly migrate to a cloud platform without changing your working processes. Also, if you’re going to be in a hybrid-cloud or multi-cloud deployment, this can make your life a little easier. That being said, for just simple Kubernetes clusters, the price of the control plane for each cluster may be too much to pay, which makes ECS a valid alternative.

Using Amazon SES in Python with Postman and Postfix

A recent addition to the Amazon Web Services (AWS) family is Amazon Simple Email Service (Amazon SES). This article discusses the application programming interface (API) calls to Amazon SES through boto, a Python library for AWS. It also walks you through a sample command-line tool called postman, which is designed for use in your Postfix configuration to send mail through Amazon SES, all while remaining transparent to your applications.

In addition, you will discover an alternative to postman for use with Django, django-ses, and get the pros and cons of both solutions. Finally, the article looks at the information you can pull from the quota and stats APIs, as well as how to avoid rejection and Internet service provider (ISP)-level spam filtering.

Getting Started
This article assumes that you have already signed up for AWS and added Amazon SES to your account.

Install Postman
Let’s start by installing postman, a command-line client for Amazon SES built on top of boto, the leading Python library for AWS. The library is designed to be fed raw email messages from Postfix through its send command, but it also has useful commands for interacting with the service from the command line. For now, install the library and configure your system to work with your AWS account:

$ pip install postman
If you do not have pip installed, you can install postman manually by downloading the tarball from http://pypi.python.org/pypi/postman and running:

$ tar zxf postman-0.5.tar.gz
$ cd postman-0.5
$ python setup.py install
Installing postman also installs boto, as the library is a dependency. (The next section walks through the various boto API calls.) This code is open source and can be found on Github.

Configuration
With the code installed, you must now configure boto to use your account. You do this by editing the /etc/boto.cfg file with the following content:

[Credentials]
aws_access_key_id=
aws_secret_access_key=
You can find these keys under the Access Credentials section. Copy and paste them into this file.
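
As a side note (not from the original article), boto can also take the credentials directly as keyword arguments, which is handy if you cannot edit /etc/boto.cfg; the key values below are placeholders:

import boto

# Placeholder credentials; never hard-code real keys in source control.
ses = boto.connect_ses(
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)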

Now you’re ready to have some fun!

Postman
I wrote postman for two reasons. First, I wanted an example for this article, a concrete example rather than abstract ideas, that would provide more value to the reader who wants to get something working. Second, it solves a real-world problem for me. I wanted to send mail from a Django-based project I had been hosting on Amazon Elastic Compute Cloud (Amazon EC2) without having to deal with proper email configuration. I wanted to use Postfix so that applications on my server could send email out of the box without special configuration.

I mentioned that you can find the code on Github, but I’m actually going to be reviewing the code found in __main__.py (see https://github.com/paltman/postman/blob/master/postman/__main__.py).

The first thing you’ll notice is that this is just a simple command-line utility that does some simple wrapping of the API that boto exposes. There’s nothing fancy or remarkable about the code, but it serves my dual purposes well.

The send Command
Let’s start with the send command:

def cmd_send(args):
    ses = boto.connect_ses()
    out("Sending mail to: %s" % ", ".join(args.destinations), args)
    msg = sys.stdin.read()
    r = ses.send_raw_email(args.f, msg, args.destinations)
    if r.get("SendRawEmailResponse", {}).get("SendRawEmailResult", {}).get("MessageId"):
        out("OK", args)
    else:
        out("ERROR: %s" % r, args)
In standard boto fashion, you get a connection object for the service. Next, call the send_raw_email method on the connection object with content from standard input. That’s all there is to sending an email message using Python and boto through Amazon SES. Some notable improvements here would be to catch quota/rate exceptions and try again after a sleep period or, better yet, return the appropriate return code so that Postfix could manage the retry.
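
A minimal sketch of the first of those improvements might look like the following (this is not part of postman; the exception handling and back-off interval are assumptions):

import sys
import time

import boto
from boto.exception import BotoServerError


def send_with_retry(source, raw_message, destinations, attempts=3, backoff=2):
    # Retry a raw send a few times before giving up (illustrative sketch only).
    ses = boto.connect_ses()
    for attempt in range(1, attempts + 1):
        try:
            return ses.send_raw_email(source, raw_message, destinations)
        except BotoServerError:
            # Possibly a throttling or quota error; wait a bit and try again.
            if attempt == attempts:
                raise
            time.sleep(backoff * attempt)


if __name__ == "__main__":
    send_with_retry("sender@example.com", sys.stdin.read(), ["recipient@example.com"])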

The verify Command
Now on to the other commands, starting with verify:

def cmd_verify(args):
    ses = boto.connect_ses()
    for email in args.email:
        ses.verify_email_address(email)
        out("Verification for %s sent." % email, args)
Again, this is a simple call to a single boto connection method, verify_email_address. You need to call this method for every email address from which you want to send a message. In fact, while in the Sandbox, you will also need to call this method for the email addresses you are going to send mail to. After calling this method, Amazon SES sends an email with a confirmation link. The recipient must click the link before the address is considered verified. Once verified, you can send mail as that address (or to that address during the Sandbox period).

The list_verified Command
To check which email addresses are verified on your account, you can run the list_verified command like so:

$ postman list_verified
The code for this command is slightly more involved, but it is just cleaning up the return data from the single boto method to provide cleaner output:

def cmd_list_verified(args):
    ses = boto.connect_ses()
    args.verbose = True

    addresses = ses.list_verified_email_addresses()
    addresses = addresses["ListVerifiedEmailAddressesResponse"]
    addresses = addresses["ListVerifiedEmailAddressesResult"]
    addresses = addresses["VerifiedEmailAddresses"]

    if not addresses:
        out("No addresses are verified on this account.", args)
        return

    for address in addresses:
        out(address, args)

This code sends output to standard out as a listing of each email address that has been verified.

The show_quota and show_stats Commands
Next, two commands, show_quota and show_stats, query the service for data about current limits as well as information on what you have sent:

def cmd_show_quota(args):
    ses = boto.connect_ses()
    args.verbose = True

    data = ses.get_send_quota()["GetSendQuotaResponse"]["GetSendQuotaResult"]
    out("Max 24 Hour Send: %s" % data["Max24HourSend"], args)
    out("Sent Last 24 Hours: %s" % data["SentLast24Hours"], args)
    out("Max Send Rate: %s" % data["MaxSendRate"], args)

def cmd_show_stats(args):
    ses = boto.connect_ses()
    args.verbose = True

    data = ses.get_send_statistics()
    data = data["GetSendStatisticsResponse"]["GetSendStatisticsResult"]
    for datum in data["SendDataPoints"]:
        out("Complaints: %s" % datum["Complaints"], args)
        out("Timestamp: %s" % datum["Timestamp"], args)
        out("DeliveryAttempts: %s" % datum["DeliveryAttempts"], args)
        out("Bounces: %s" % datum["Bounces"], args)
        out("Rejects: %s" % datum["Rejects"], args)
        out("", args)

Again, these are simple wrappers around two boto methods, get_send_quota and get_send_statistics, that parse the data structure boto returns to provide cleaner console output.

The delete_verified Command
The last command, delete_verified, provides a way to remove an email address from the verified emails that are allowed to be in the From header of an email message:

def cmd_delete_verified(args):
    ses = boto.connect_ses()
    for email in args.email:
        ses.delete_verified_email_address(email_address=email)
        out("Deleted %s" % email, args)
The rest of the __main__.py module consists of Python code that parses input arguments and calls the right command function. The module is missing one API call that boto does provide, however: a more structured email send that does not require a raw email message body. Adding this call would provide a clean method for sending email messages from the command line without having to structure and pipe in content with email headers. I will leave that addition as an exercise for later (or perhaps an ambitious reader).
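
For reference, a minimal sketch of what such a command could look like, built on boto's send_email method (this is not part of postman; the addresses are placeholders):

import boto


def cmd_send_simple(source, subject, body, destinations):
    # Send a structured (non-raw) message through Amazon SES (illustrative sketch only).
    ses = boto.connect_ses()
    return ses.send_email(source, subject, body, destinations)


if __name__ == "__main__":
    cmd_send_simple(
        "sender@example.com",
        "Hello from Amazon SES",
        "This message was sent without building a raw MIME body.",
        ["recipient@example.com"],
    )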

Postman and Postfix
Now that you have installed postman and are familiar with what it’s doing, you’re ready to hook it up as your default transport for Postfix. I am no Postfix expert, but after a bit of searching and finding instructions for hooking up a Perl script for Amazon SES as a Postfix transport, I thought I could do the same thing with postman send:

/etc/postfix/master.cf

postman unix - n n - - pipe
  flags=R user=ubuntu argv=/usr/local/bin/postman send -f ${sender} ${recipient}

/etc/postfix/main.cf

default_transport = postman
If, like me, you have a Django project being served on this machine and want it to be able to send email through this postman transport, you need to update two settings in your project’s settings.py file:

Django project’s settings.py

SERVER_EMAIL = "user@gmail.com"
DEFAULT_FROM_EMAIL = "user@gmail.com"
After saving this change and bouncing your server to reload the settings.py changes, you must verify the email addresses you are going to be sending from:

$ postman verify user@gmail.com
Then, reload Postfix to pick up the new changes to main.cf and master.cf:

$ sudo /etc/init.d/postfix reload
While you’re in Sandbox mode, remember that you must verify any emails you are going to send to, as well, so after doing that, try sending a few test messages from your Django project. You can tail the Postfix logs to aid in troubleshooting this setup, but you shouldn’t have any problems:

$ tail -f /var/log/mail.info
Quotas and Statistics
There are two limits on your Amazon SES account: a daily quota and a send rate. The daily quota is how many emails you are permitted to send within a 24-hour period. The send rate is how many emails your account can send per second. For example, you start out with a daily quota of 1,000 and a rate of one email per second, so if you had a batch of 1,000 emails to send, you would have to throttle the sending to one per second, or else you would get an exception. It would take approximately 17 minutes to send all 1,000 messages, but after doing so, you would need to wait another 23.75 hours until you could send any more. The postman show_quota command will assist you in monitoring these limits, which Amazon raises according to your usage over time.

$ postman show_quota

Max 24 Hour Send: 1000.0
Sent Last 24 Hours: 9.0
Max Send Rate: 1.0
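
To respect the one-message-per-second rate described above, a simple client-side throttle might look like the following sketch (not part of postman; the batch format is made up for illustration):

import time

import boto

MAX_SEND_RATE = 1.0  # messages per second, matching the quota shown above


def send_batch(messages):
    # messages is a list of (source, subject, body, destinations) tuples -- an
    # illustrative format, not something postman defines.
    ses = boto.connect_ses()
    interval = 1.0 / MAX_SEND_RATE
    for source, subject, body, destinations in messages:
        ses.send_email(source, subject, body, destinations)
        time.sleep(interval)  # crude throttle; stays under the allowed send rate
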
Amazon provides an API to fetch statistics on emails sent grouped in 15-minute intervals for a rolling previous two-week period. These statistics are useful for assisting in monitoring how your application is using and sending email. They provide counts on delivery attempts, complaints, bounces, and rejects. The postman show_stats command prints these figures to the console for you:

$ postman show_stats

Complaints: 0
Timestamp: 2011-04-10T18:48:00Z
DeliveryAttempts: 1
Bounces: 0
Rejects: 0

Complaints: 0
Timestamp: 2011-04-10T19:18:00Z
DeliveryAttempts: 1
Bounces: 0
Rejects: 0
You receive Bounce and Complaint notifications via email with the address that either bounced or complained. It is wise to take action and remove the affected email address from your application to avoid future, repeated sends to that address.

Avoiding SPAM Filters
To avoid spam filtering or outright rejections from ISPs, it’s a good idea to set Sender Policy Framework (SPF) and Sender ID records. These are Domain Name System (DNS) TXT records that have the following content:

SPF:
v=spf1 include:amazonses.com ?all
Sender ID:
spf2.0/pra include:amazonses.com ?all
If you already have either of these records, you MUST add these entries to your existing records or replace them; otherwise, ISPs will query the records, see an authorization that does not include Amazon SES, and reject your mail. If the records are simply missing, the mail might still be allowed through, but it is more likely to be marked as spam. Bottom line: add the records to the domains from which you are sending email to ensure high-quality delivery of your email.

django-ses
One alternative to the postman solution presented earlier is a project by boto core committer, Harry Marr, called django-ses, which you can find at Github. It is a Django mail back end. Obviously, this tool only works within the confines of a Django-based project, so it’s not exactly an apples-to-apples comparison to postman. However, in the context of a Django project, the tools solve the same problem.

The django-ses project includes user interface elements for graphing and displaying statistics, which may be more useful than pure console output. In addition, depending on where you were deploying your Django project, you may not have control over your Postfix configuration or have a good email solution in place at your host provider. So using postman would not be an option, whereas django-ses is 100 percent Python and is deployed as part of your site.

The downside to django-ses is that sending mail is a blocking call. Depending on what is triggering the email to send, the message may have to wait on the request to Amazon SES to finish before returning. This delay could become problematic in high-performance scenarios.

AWS Stacks SageMaker ML Onto Kubernetes Clusters

Amazon Web Services (AWS) launched its SageMaker Operators for Kubernetes, which use the Kubernetes Operators model to more tightly couple the SageMaker machine learning (ML) platform with users’ Kubernetes workflows.

The SageMaker Operators for Kubernetes product allows users to tap into data housed within SageMaker to populate a Kubernetes-controlled container or cluster of containers. This can be done at the scale needed to support ML-based services and does not require the data scientist or developer to re-write code.

The integration uses the Kubernetes Operators model that allows a user to natively invoke custom resources and automate associated workflows of pre-configured application-specific or domain-specific logic and components. SageMaker can be used as one of these custom resources.

The Operators model was originated by CoreOS as a controller that runs on Kubernetes to manage a particular application. It does this by using the Kubernetes API to handle the creation and management of application instances. The concept is targeted at distributed applications and allows for the scaling of instances as needed.

Initial SageMaker Operators include Train, which helps to cut training costs; Tune, which automates hyperparameter optimization; and Inference, which can handle the autoscaling of container clusters that are spread across multiple availability zones. The operators are initially available in the AWS US East (Ohio), US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions.

SageMaker Expansion
AWS unveiled SageMaker at its annual re:Invent show in 2017. It’s a fully managed, end-to-end ML service that helps developers build ML models at scale.

The cloud giant at that event also launched its Elastic Kubernetes Service (EKS), which (finally) offered a managed Kubernetes service running on top of AWS. This allows users to deploy and manage containerized applications while staying within the warm embrace of AWS.

AWS this week also linked its SageMaker product into a newly developed Adlink AI at the Edge platform that is targeted at industrial use cases that can benefit from artificial intelligence (AI) at the edge. It combines AWS’ SageMaker and Greengrass edge platform with Intel’s OpenVINO toolkit, which includes accelerators and streamlines deep learning workloads across Intel architecture, and edge company Adlink’s Edge software suite.

What is AWS Cloud Formation?

The sinews and muscles that make cloud computing function are just as important as the web and mobile applications that run on top of it. While many companies are focused on the features available in the apps, increasing user adoption of an app, or the revenue generated from a service that runs on the web, there is also the underlying infrastructure that makes those apps work reliably and at a high-performance level. For the most part, a cloud computing service provider like Amazon (with AWS, or Amazon Web Services) insulates developers, data scientists, and business owners from the complexity of the infrastructure.

Yet, there is also a great opportunity to tweak that cloud infrastructure in ways that help your company, the web and mobile apps you run, and your customers. The concept of “Infrastructure as Code” emerged a few years ago as a way to help companies manage all of the disparate services that run in the cloud. Previously, companies may have used scripts or other tools to manage their IT infrastructure, but those tools were often hard to use and complex. The problem is exacerbated further when your staff needs to manage provisioning, version control, and other variables.

While we like to think of cloud infrastructure as running independent of the apps and services we need to deploy, there are opportunities to provision services so that they all work together seamlessly, and to take advantage of new Amazon services. It means even more control over how the infrastructure runs and what you can do with your apps that run on top of it.

AWS Cloud Formation, as the name implies, is a way to “form the cloud”: it allows companies to manage and control the application stacks and resources needed for their web and mobile applications. It provides access to the infrastructure components at your disposal and allows you to manage them all from one central command-line interface.

For those who are new to cloud computing, AWS Cloud Formation uses templates to make the process easier; essentially, a template is a JSON (JavaScript Object Notation) file you can use to track and manage resources. With templates, you can define and track all of the AWS resources you need, which takes the guesswork out of the infrastructure-management part of cloud computing. Pre-defined templates make this even easier, providing access to the most commonly used resources in a way that is ready to deploy.

Once you have selected a template (either your own JSON file or a pre-defined template), you then upload that configuration file to Cloud Formation. The “infrastructure as code” concept comes into play here because you are using a piece of code (the JSON file) to manage and control all of the resources, including the application stack, storage, servers, memory, and any other variables you need for your applications.
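
As an illustration (not from the original article), the same upload-and-deploy step can be done programmatically with boto3; the stack name and template body here are hypothetical:

import json

import boto3

# A deliberately tiny, hypothetical template: a single S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoBucket": {"Type": "AWS::S3::Bucket"}
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Create the stack from the template; CloudFormation provisions the resources it describes.
cfn.create_stack(StackName="demo-stack", TemplateBody=json.dumps(template))

# Wait until the stack has finished creating before relying on its resources.
cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")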

Benefits of AWS Cloud Formation

As you can imagine, using AWS Cloud Formation means there is one primary method of controlling the infrastructure rather than a disparate set of parameters and controls. Once you configure the template and upload it, running the infrastructure the way you want it to run is a simple matter of “running that code” in the cloud. The single template or a series of templates you create becomes the one way you manage the AWS infrastructure.

Because of this one command center approach, it is also easier to replicate and deploy another infrastructure for additional application stacks using the same template. This also makes it easier to deploy an infrastructure for testing and development purposes. This provides more flexibility in how you develop and test business apps, and how you stress test and add additional services for your infrastructure without the confusion of having multiple points of configuration.

Because of this flexibility in how you control and manage the infrastructure, the CloudFormation templates have exactly the same benefits as a normal piece of software code. This includes version control for those templates, the ability to author the templates in a programming language just as you would any other app, and also to work together as a team to analyze the application stack, AWS resources, and performance variables as needed.

Another benefit to managing your infrastructure in this way is that you automate the entire process. Once your templates are all configured and ready to deploy, and your team has worked together to tweak all of the settings, deploying the template is incredibly easy — it is just a matter of uploading that template and deploying it within CloudFormation.

One additional benefit, as is usually the case with any cloud infrastructure process, is that you can scale up easily with increased demand or when you need to deploy more apps to a larger group of users. You can replicate the templates in Cloud Formation and launch an entirely new infrastructure with new applications without reinventing the wheel.

What are The AWS Ground Station Features

AWS Ground Station enables you to control and ingest data from orbiting satellites without having to buy or build satellite ground station infrastructure. AWS Ground Station does this by integrating ground station equipment such as antennas, digitizers, and modems into AWS Regions around the world. It’s as easy as onboarding your satellites and scheduling time to communicate with them. You have the option of conducting all of your satellite operations on the AWS Cloud, including storing and processing your satellite data and delivering your products using AWS services, or of using AWS Ground Station just to downlink your satellite data and transport it to your own processing center.

Schedule satellites and download data using AWS services

You can use the AWS Ground Station console to identify the satellites you need to communicate with and schedule “Contacts” with the satellite, where each Contact consists of a selected satellite, start and end time, and the ground location. After scheduling your Contacts, you can launch Amazon EC2 instances to run each portion of the Contact. You can launch a Command EC2 instance to receive operational telemetry from your satellite and transmit commands up to the satellite to schedule future activities. You can also launch a Downlink EC2 instance to receive bulk mission data from your satellite. These EC2 instances will communicate with AWS Ground Station’s antenna gateway over an elastic network interface (ENI) connection in Amazon VPC that exists for the duration of the contact.

Fully managed global ground station network integrated with AWS Global Infrastructure

AWS Ground Station antennas are located within fully managed AWS ground station locations, and are interconnected via Amazon’s low-latency, highly reliable, scalable and secure global network backbone. Data downlinked and stored in one AWS Region can be sent to other AWS Regions over the global network, so it can be further processed.

Graphical AWS Ground Station console

AWS Ground Station provides an easy to use graphical console that allows you to reserve contacts and antenna time for your satellite communications. You can review, cancel, and reschedule contact reservations up to 15 minutes prior to scheduled antenna times.

Direct access to AWS services

AWS Ground Station gives the satellite antennas direct access to AWS services for faster, simpler, and more cost-effective storage and processing of downloaded data. This allows you to reduce data processing and analysis times for use cases like weather prediction or natural disaster imagery from hours to minutes or seconds. It also enables you to quickly create business rules and workflows to organize, structure, and route the satellite data before it is analyzed and incorporated into key applications such as imaging analysis and weather forecasting. Key AWS services include Amazon EC2, Amazon S3, Amazon VPC, Amazon Rekognition, Amazon SageMaker, and Amazon Kinesis Data Streams.

Support for the most common satellites and communication frequencies

You can connect with any satellite in low Earth orbit (LEO) or medium Earth orbit (MEO) operating in X-band and S-band frequencies, including S-band uplink and downlink, and X-band narrowband and wideband downlink.

Pay-per-minute pricing

You can schedule access to AWS Ground Station antennas on a per-minute basis and pay only for the scheduled time. You can access any antenna in the ground station network, and there are no long-term commitments.

What is AWS AppSync?

Applications that rely on data in cloud storage do not need to stay current every minute of the day. Think of a social media app. There is “real-time” data such as a new post or a photo upload, but most of the data, such as account information, user profile, and the place you went to high school, does not need to update constantly. In a gaming app, there is a massive amount of real-time data, such as your current location on a map (which is ever-changing), but your credit card number will likely stay the same month to month. Constantly updating all data for a mobile or web app doesn’t make sense and only consumes unnecessary resources.

AWS AppSync is a way to synchronize the data used in a web or mobile app, allowing developers to choose which data should be synced in real-time.

AppSync relies on GraphQL, which was originally developed by Facebook, for the data syncing. It’s intended to help developers who might need to pull data from different sources in the cloud, then perform functions within the app quickly and efficiently. It’s also highly secure: even though an app is syncing from multiple data sources, and developers are choosing which portions of the app use real-time data, the data is still protected.

As mentioned, the application development service is intended for those who need to deal with massive amounts of real-time data and have that data sync to the application. Yet they also need the ability to decide which data does not need to sync in real-time. Developers can create complex queries that use a cloud database and aggregate the data or make complex decisions to analyze, process, or manipulate it from multiple sources.

The advantage here is that you can easily scale an application and use multiple Amazon services for your application, without being restricted by your IT infrastructure or where the data resides (and if you need to process all data in real-time).

Another advantage is that this can work with data that is offline for periods of time. In a gaming app, for example, the developer can sync real-time data but also coordinate what happens when the end user continues to play the game and rack up a high score while he or she is no longer connected to the Internet. AppSync can then sync the offline data once the user makes a connection again, without having to sync the entire data set. This reduces bandwidth requirements and speeds up data syncing for the web or mobile application.
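
As a toy illustration (not from the original article), a client can send a GraphQL query to an AppSync API over plain HTTPS; the endpoint, API key, and schema below are all hypothetical:

import json

import requests

# Hypothetical AppSync endpoint and API key; a real app would use its own auth mode.
APPSYNC_URL = "https://example1234.appsync-api.us-east-1.amazonaws.com/graphql"
API_KEY = "da2-EXAMPLEKEY"

# A hypothetical query against a hypothetical schema.
query = """
query ListSensors {
  listSensors {
    id
    lastReading
  }
}
"""

response = requests.post(
    APPSYNC_URL,
    headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
    data=json.dumps({"query": query}),
)
print(response.json())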

Examples of using AWS Appsync

One example of using AWS AppSync is with a Big Data project. Often, with a research project at a large university, for example, the data sources are widely distributed. For a project analyzing new road construction, there might be data available related to material research in Zurich and environmental data from a lab in Munich, but the app development team is based in Chicago.

In the past, syncing all of this data for an app, and also deciding which data is mission-critical and must be real-time in nature and which data can be stored long-term and not synced, was quite an undertaking. It often required a combination of multiple cloud services and a way to sync all of the data sources manually. Yet AWS AppSync provides one console so that developers can understand their API and what is happening with their data.

Another example of AWS AppSync in practical use is when developers are creating a smart home app, one that monitors home security and safety issues.

Sensors might be installed to detect water leaks, look for intruders, and monitor whether a window has opened suddenly in the middle of the night. The Internet of Things (or IoT) is a concept that has allowed developers to create rich applications that unify and unite these disparate sensors to present a clear picture of what is happening in the home.

As you can imagine, pulling and monitoring this sensor data is a Herculean task. There might be thousands or even millions of data requests from an app — e.g., every time someone opens a door or when a sensor detects a moving object. In a connected home app, some of the data can be at rest and won’t need to sync. With AWS AppSync, a developer can decide how to sync that data and what happens to it in real-time within the app, not only for the dozens of sensors that might be installed in a smart home but for hundreds or thousands of customers.

In the end, it’s the flexibility this provides that is key for developers creating rich applications that use multiple data sets from wildly varying sources from all over the globe.

What is AWS Cognito?

For a small company, keeping track of user accounts for a mobile or web app is not terribly difficult. It might be a simple database you deploy and manage manually. The problem arises when you suddenly start managing hundreds, thousands, or millions of user accounts. At that point, the task is much more complicated and can involve security and authentication, access control, social media identity providers like Facebook, and other factors. What started as a fairly simple process becomes a full-time job for someone on your staff.

AWS Cognito is a user account control service that runs in the cloud. It’s designed to relieve many of the headaches related to user account control for mobile and web apps. By using AWS Cognito, you can take full control of the account management and then scale accordingly using cloud services.

AWS Cognito consists of several features for managing users for sign-up (registration), sign-in, and account management. To understand how it all works, here are the main features.

For starters, the Cognito User Pools feature helps you manage user accounts. It’s a secure user directory that can scale up as your needs evolve. With other user account control systems, you have to run a server and manage the IT infrastructure, but you can start using Cognito User Pools without having to configure any of the back-end systems.
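
To make that concrete (this sketch is not from the original article), here is a minimal example of registering a user in a user pool with boto3; the app client ID and user details are placeholders:

import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

# Placeholder app client ID from a hypothetical user pool.
APP_CLIENT_ID = "1234567890abcdefghij"

# Register (sign up) a new user; depending on the pool's verification settings,
# Cognito typically sends a confirmation code to the email address.
cognito.sign_up(
    ClientId=APP_CLIENT_ID,
    Username="jane.doe@example.com",
    Password="CorrectHorse!42",
    UserAttributes=[{"Name": "email", "Value": "jane.doe@example.com"}],
)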

Cognito supports “account federation” in that you can use third-party identity management providers for the login. This includes social media platforms like Facebook, well-known providers like Google and Amazon, and also enterprise-class identity providers such as Microsoft Active Directory (using SAML, the Security Assertion Markup Language). The advantage here is simplicity for the user. In a mobile app, for example, they can click a Facebook icon to sign in quickly.

AWS Cognito uses well-known and well-established standards, including OAuth 2.0, SAML 2.0, and OpenID Connect, for standards-based authentication and access management. Cognito doesn’t rely on any proprietary security methods that would lock you into a particular authentication method.

Not only is Cognito geared for helping you scale and manage your user management, but it’s also a good match for companies that need to adhere to compliance regulations. The most common is HIPAA, which regulates the health and medical field, such as for storing electronic health records. Amazon Cognito is also compliant with PCI DSS, SOC, ISO/IEC 27001, ISO/IEC 27017, ISO/IEC 27018, and ISO 9001.

Benefits of AWS Cognito

There’s no question the biggest benefit to using this service is that you can scale up as your apps grow and expand to reach a wider audience. One of the “gotchas” of any new web or mobile app is when it really catches on with users. Companies often celebrate when they see users engaging with their app, but then there’s the eventual realization that it can be extremely hard to keep up with demand if your infrastructure for user management is not ready. Because Cognito scales automatically with user demand, you don’t have to worry about the infrastructure requirements or building out and maintaining servers.

Related to this is the cost structure. Your costs are related to the user accounts you need to manage, so the costs can scale as well. This takes some of the surprises out of a fast-growing app as well, not only in how you pay for Cognito as a management tool but also in the fact that you don’t have to build out more servers or expand your infrastructure.

What this means is that you pay only to manage the monthly active users (or MAUs). A user is counted as active if they change a password, refresh their account info, or sign up for the service within the month. This is a major advantage for companies that have legacy business apps that are still viable but have been around for many years; they may have a large portion of users who are no longer actively using the app on a regular basis.

Perhaps one unheralded feature that is beneficial to companies developing apps is that Cognito integrates into your application framework. When you are building the front-end of your app, you can use the same branding and logo that matches the rest of your user interface.

One last benefit is that the entire service is easy to use and deploy. Amazon makes a point to demonstrate how this works by providing sample code you insert into your app that is only a few dozen lines long but provides everything you need to get started. Because of the ease of deployment, companies can develop multiple apps, experiment with new ones, and manage existing apps without the typical complexities and management overhead.

In the end, AWS Cognito fits right into the development cycles of most companies, especially with how easy it is to use the code, the cost structure, and the easy integration.

What is AWS Snowball?

In the age of incredible technological advancements on the Internet, autonomous cars, drones that can deliver a package from the sky, and instant access to social media feeds on our phones, it’s almost refreshing to know there is a service that is a little old school.

AWS Snowball is a data transfer service that uses a physical device sent by Amazon. Even though the only thing that’s “old school” is that it uses a physical product, Snowball is quite advanced in terms of what it does, how it works, and how it benefits your company.

For firms doing petabyte-scale research projects, or those that have accumulated vast amounts of backup data, have a legacy tape backup system they need to move to cloud storage, or are closing an entire data center in favor of a cloud-based infrastructure, Snowball is a godsend.

In most cases, Snowball comes into play when there is a data migration project. There is a vast amount of data stored locally, and there’s a need to move that data to the cloud. However, because there may be petabytes of information, the Internet is not a viable option due to the speed concerns, security issues, and networking complexities.

With Snowball, there are several benefits. One is the ease of migration. It all starts with the AWS Console where you can initiate the migration with a few clicks. Once you do, Amazon determines whether you need one or more Snowball client devices. (For terabytes of data, you likely only need one client; for petabytes of data, you’ll need more than one.)

Then, the physical devices arrive at your site. You connect them to your network, run the Snowball client application, and select the data sources. After this, the migration runs at high speed and over a secure connection (not over the Internet). Amazon uses an E Ink label for Snowball shipping, so once the migration is complete and you are ready to send back the devices, the label changes and shows the correct address for the return shipment.
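
For teams that prefer automation over the console, a data transfer job can also be created through the AWS SDK. The following is a rough sketch with boto3; every identifier in it (bucket ARN, address ID, IAM role) is a placeholder, and the exact options you need will depend on your job:

import boto3

snowball = boto3.client("snowball", region_name="us-east-1")

# All of the identifiers below are placeholders for illustration only.
response = snowball.create_job(
    JobType="IMPORT",  # move on-premises data into S3
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-migration-bucket"}
        ]
    },
    Description="Tape archive migration (example)",
    AddressId="ADID00000000-0000-0000-0000-000000000000",
    RoleARN="arn:aws:iam::123456789012:role/example-snowball-role",
    SnowballCapacityPreference="T80",
    ShippingOption="SECOND_DAY",
)
print("Created Snowball job:", response["JobId"])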

As you might guess, the devices are not stashed away in a vault at this point. Amazon moves your data to Amazon S3 (Simple Storage Service) for easy access in the cloud. An important point to make here is that you can now take advantage of the enormous scale of cloud computing, adding additional storage archives to this one storage location. Once you migrate the data to the cloud using Snowball, the physical transfer is just one part of moving your company to a cloud infrastructure and benefiting from reliable, secure storage. You can decide to remove portions of your S3 archives at any time.

There are benefits to how this works from a cost standpoint as well. At a petabyte or terabyte-scale, the cost of secure data transfers over the Internet can run several thousand dollars. Amazon states that Snowball migrations tend to cost about one-fifth the normal migration costs when the Internet and high-speed networks are involved.

Examples of how Snowball works
A good example of a project where Snowball comes into play is a large company that has existed for many decades and has decided to move all of their tape backup systems to the cloud. As you can imagine, this migration would normally be fraught with complexity — how to move the data securely, dealing with terabytes or even petabytes of storage that dates back many years, and making sure the migration is successful.

With Snowball, the company can configure the migration using the AWS Console, which removes the complexity of deciding which data to move and where it will end up. Since the migration is highly automated using the Snowball client devices and client software, the company moving its tape backup systems doesn’t have to build a new IT infrastructure or purchase software to help with the legacy transfers.

The benefits are, as mentioned above, lower costs and the ease of migrating, but there are also clear advantages related to what happens after you are done migrating. The data becomes available in S3 and becomes part of your cloud infrastructure. This means you can retrieve that data from the cloud just as easily and efficiently as you migrated it to the cloud. It opens up new possibilities for using the data for business intelligence, additional research projects, data discovery, enterprise-grade apps, analytics and other projects that were not possible before.

In the end, companies benefit from how Amazon has architected the entire migration system. Snowball does involve “old school” client devices shipped to your facility, but that’s where any similarity to older data transfer migration practices ends. By using physical devices, it speeds up the entire process, makes it secure, and is designed for easy, fast migrations.

AWS Spotlights IoT and AI Integrations

At this week’s Consumer Electronics Show (CES) in Las Vegas, Amazon Web Services (AWS) and its partners showcased the ways in which the cloud provider’s various IoT and AI offerings are creating new opportunities for software makers, particularly in the automotive space.

For instance, AWS is integrating its IoT cloud platform with the BlackBerry QNX operating system for embedded devices to develop a new “connected vehicle software platform for in-vehicle applications.” The idea is that BlackBerry’s QNX would provide the backbone for the “smart” in-vehicle operating systems that are now running in many modern cars, while the AWS cloud will provide the real-time data collection and application development/deployment capabilities that automotive software makers need.

“QNX software allows automotive OEMs to develop and run a common software platform across in-vehicle systems such as gateways, TCUs, engine controllers, digital cockpits and emerging domain controllers, while AWS capabilities enable automotive software developers to securely and easily access data from vehicle sensors, build software applications and machine learning (ML) models using vehicle data, and deploy them inside the vehicle to enable in-vehicle inference and actions,” the two companies said in a press release Monday.

The combined offering opens new possibilities for applications aimed specifically at connected cars, from real-time systems checks and battery life monitoring, to improved accessibility features, to maintenance and warranty cost management.

“The platform will integrate the BlackBerry QNX operating system and over-the-air software update services, with AWS IoT cloud services for secure connectivity and telematics, Amazon SageMaker for developing ML models, and AWS IoT edge services for in-vehicle ML inference,” the companies said.

The combined platform is already being used by electric vehicle manufacturer Karma to monitor the battery health of its cars. A demo of the BlackBerry/AWS platform running in a Karma vehicle is on view at CES this week.

Also on view at CES are new AWS-based technologies that advance vehicle connectivity. One demo integrates Alexa, AWS machine learning and Amazon Rekognition capabilities to power an in-vehicle digital assistant. Another shows interactions between smart home systems and cars powered by Alexa. Other demos show how AWS cloud technologies can help automakers manage their autonomous fleets.

What You Need To Know About VPC Security Groups

I recently wrote a column in which I explained the process for creating a virtual private cloud (VPC) in Amazon Web Services (AWS). In that article, I briefly mentioned the concept of VPC security groups, but I didn’t really get a chance to explore the topic.

Therefore, I want to use this column to explain what a VPC security group is, what it does, and some of the key considerations that you will need to keep in mind when creating and working with them.

Simply put, a VPC security group is really just a software firewall. Of course, if things were that simple, then this would be a very short column. As you probably expected, there are some important things that you need to know about VPC security groups.

If you look at a security group in the AWS console, you will see that each security group contains a collection of inbound rules and outbound rules.

The first thing that you need to know about these rules is that although they exist within the VPC, the rules actually apply to individual virtual network adapters. Think of it as applying firewall settings to individual instances (or rather, virtual NICs within an instance).

Another thing that you need to know about VPC security groups is that you can apply multiple security groups to a single network adapter. Doing so results in the rules from the various security groups being combined and collectively applied to the adapter.

Security group rules are designed to grant permission for a particular type of traffic. There is no such thing as a denial rule, because traffic is denied unless there is a rule allowing it. The exception to this is that response traffic is allowed. If, for example, an instance sends a request, then the response to that request will be allowed to enter the instance, even if the security rules would have otherwise blocked that type of communications.

One more thing to keep in mind is that the instances within a VPC are not allowed to communicate with one another unless you explicitly allow them to do so. The default security group does allow communications between instances, but if you choose not to use the default security group, then you will have to create rules enabling any desired communications between instances.
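
To make the rule model concrete (this sketch is not from the original column), here is how you might create a security group in a VPC with boto3 and allow inbound SSH plus traffic between instances that share the group; the VPC ID and CIDR are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder VPC ID; substitute the ID of your own VPC.
VPC_ID = "vpc-0abc1234def567890"

# Create a custom security group within the VPC.
sg = ec2.create_security_group(
    GroupName="app-servers",
    Description="Example group for application servers",
    VpcId=VPC_ID,
)
sg_id = sg["GroupId"]

# Allow inbound SSH from a single admin network (everything else stays denied by default).
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "admin network"}],
        },
        # Allow all traffic between instances that carry this same security group.
        {
            "IpProtocol": "-1",
            "UserIdGroupPairs": [{"GroupId": sg_id}],
        },
    ],
)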

You are, of course, free to use the default security group, but most administrators choose to create some custom VPC security groups to either replace or augment the default group. Before you create any security groups, however, it is important to understand some applicable limits.

Security groups are applied at an instance’s network interface. By default, AWS will let you apply up to five security groups to a virtual network interface, but it is possible to use even more in extreme situations (the upper limit is 16). Doing so requires you to contact AWS support.

Another limit that you will need to be aware of is that of the number of rules you can have per security group. Each security group can have up to 50 inbound IPv4 rules, 50 inbound IPv6 rules, 50 outbound IPv4 rules and 50 outbound IPv6 rules. Keep in mind that although there are ways of getting around the default limits, you cannot do so without contacting AWS support. If, for example, you choose not to create any IPv6 rules, that alone does not give you the ability to create additional IPv4 rules.

As previously explained, AWS sets some default limits on the number of security groups that you can assign to a network interface, and the number of rules that can exist within a security group. I also mentioned that it is possible to bend the rules by contacting AWS support and asking them to raise your limits. However, there is one limit that you will not be able to get around: The total number of rules that apply to a network interface cannot exceed 250 (AWS support cannot change this limit).

Hence, increasing the number of security groups that can be applied to a virtual network interface has the potential to reduce the number of rules that you can create within a security group. If you add together the number of rules that exist within each of the security groups that apply to a network interface, that number cannot exceed 250.

There is one last limit that you need to be aware of. By default, AWS sets a limit of 500 security groups per VPC. You can get around this limit by contacting AWS support.
