CDW AWS Managed Services for Hybrid Cloud Environments

CDW continues to embrace our partnership with Amazon Web Services (AWS) while expanding our best-in-class on-premises managed services. This is important, as many of our customers leverage the hybrid infrastructure model: a combination of traditional on-premises hardware, Infrastructure as a Service (IaaS) and Software as a Service (SaaS), rather than a purely private or public cloud. The hybrid cloud model allows our customers to leverage their existing on-premises investments while taking advantage of the flexibility and scalability of AWS.


CDW has more than 20 years of experience as a Managed Services Provider (MSP) and more than 50,000 devices under management. Our dedicated MSP engineers handle over 30,000 incidents per month. With this expertise, CDW has the knowledge and skill set to support our customers’ platform workloads in the environment that best meets each organization’s needs.


The entire IT world is quickly realizing how elastic AWS can be: cloud infrastructure can be provisioned in a matter of minutes, compared to weeks or months for on-premises hardware. Cloud infrastructure also translates to a robust, scalable pay-as-you-go model with no upfront fees or commitments.

Cloud computing has multiple advantages over traditional on-premises data center infrastructure. Provisioning and maintaining on-prem infrastructure requires considerable time and resources, both financial and technical. In comparison, many of these provisioning processes can be handled instantaneously using AWS, enabling technical resources to focus on other areas of greater value.

Another advantage that AWS cloud services bring is the speed and agility of DevOps in the cloud. Traditional on-premises DevOps are typically siloed organizations (i.e., distinct application development and infrastructure operations teams). By comparison, AWS allows DevOps teams to be tightly coupled, which brings efficiency. At CDW, our operations staff, professional services consulting team and development engineers are engaged with each other throughout the entire cloud services lifecycle, from design to migration to production support.

A major part of an MSP’s role for traditional on-site services revolves around monitoring, patching and security. In traditional on-premises environments, these tasks can be extremely time-consuming, which limits an IT staff’s bandwidth for more strategic activities. They may also require scheduled downtime to perform. In the cloud, many of these tasks can be automated using robust tools, with patching performed on the fly and security monitored in real time.

At CDW, we have embraced cloud automation to provision AWS infrastructure as part of our enrollment procedures. Leveraging tools such as AWS CloudFormation, AWS Step Functions, AWS Lambda and Python scripts, CDW can establish new AWS services for a customer in minutes. This includes provisioning infrastructure and the creation of automated billing and cost management with just a few clicks.
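
As an illustration of that kind of automation, here is a minimal sketch, using boto3, of launching a CloudFormation stack and waiting for it to finish. The stack name, template URL, and parameter are hypothetical placeholders rather than CDW's actual tooling.

```python
# Minimal provisioning sketch with boto3 and CloudFormation.
# Stack name, template URL, and parameters are hypothetical placeholders.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

stack = cfn.create_stack(
    StackName="customer-onboarding-demo",  # placeholder
    TemplateURL="https://example-bucket.s3.amazonaws.com/onboarding.yaml",  # placeholder
    Parameters=[{"ParameterKey": "CustomerName", "ParameterValue": "ExampleCo"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM resources
)

# Block until the stack has finished creating, then report its ID.
cfn.get_waiter("stack_create_complete").wait(StackName="customer-onboarding-demo")
print("Provisioned stack:", stack["StackId"])
```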

CDW believes in next-generation MSP principles by doing more than just managing resources. We are involved in the full lifecycle of services, which consists of design, build and/or migrate, run, manage and continual optimization.

CDW offers AWS services ranging from backup planning and integration to “Jumpstart” onboarding, cloud migration and assessments of existing on-premises environments. CDW has the resources ready to support your organization. Through our cloud migration planning tool and expert cloud engineers, CDW can provide the expertise to evaluate your on-premises environment and make recommendations on the best way to adopt AWS solutions for your enterprise workload needs.

Contact your CDW account manager, or call 800.800.4239 to learn more about CDW’s AWS managed services and how we can help your organization leverage AWS to reduce development time and take advantage of the many benefits of a hybrid infrastructure.

Overview Of AWS Media Services

In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses as web services—now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.


Today, AWS provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries around the world.

Media Services

Amazon Elastic Transcoder
Amazon Elastic Transcoder is media transcoding in the cloud. It is designed to be a highly scalable, easy-to-use, and cost-effective way for developers and businesses to convert (or transcode) media files from their source format into versions that will play back on devices like smartphones, tablets, and PCs.
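
As a rough sketch of what that looks like in practice, the boto3 call below submits a transcoding job to an existing pipeline; the pipeline ID, S3 keys, and preset ID are placeholders you would look up in your own account.

```python
# Hedged sketch: submit an Elastic Transcoder job with boto3.
# PipelineId, object keys, and PresetId are placeholders.
import boto3

et = boto3.client("elastictranscoder", region_name="us-east-1")

job = et.create_job(
    PipelineId="1111111111111-abcde1",          # placeholder pipeline
    Input={"Key": "uploads/source-video.mp4"},  # object in the pipeline's input bucket
    Outputs=[{
        "Key": "renditions/source-video-720p.mp4",
        "PresetId": "1351620000001-000010",     # placeholder: use a system or custom preset ID
    }],
)
print("Transcoding job started:", job["Job"]["Id"])
```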


AWS Elemental MediaConnect
AWS Elemental MediaConnect is a high-quality transport service for live video. Today, broadcasters and content owners rely on satellite networks or fiber connections to send their high-value content into the cloud or to transmit it to partners for distribution. Both satellite and fiber approaches are expensive, require long lead times to set up, and lack the flexibility to adapt to changing requirements. To be more nimble, some customers have tried to use solutions that transmit live video over IP infrastructure, but have struggled with reliability and security.

Now you can get the reliability and security of satellite and fiber combined with the flexibility, agility, and economics of IP-based networks using AWS Elemental MediaConnect. MediaConnect enables you to build mission-critical live video workflows in a fraction of the time and cost of satellite or fiber services. You can use MediaConnect to ingest live video from a remote event site (like a stadium), share video with a partner (like a cable TV distributor), or replicate a video stream for processing (like an over-the-top service). MediaConnect combines reliable video transport, highly secure stream sharing, and real-time network traffic and video monitoring, allowing you to focus on your content, not your transport infrastructure.

AWS Elemental MediaConvert
AWS Elemental MediaConvert is a file-based video transcoding service with broadcast-grade features. It allows you to easily create video-on-demand (VOD) content for broadcast and multiscreen delivery at scale. The service combines advanced video and audio capabilities with a simple web services interface and pay-as-you-go pricing. With AWS Elemental MediaConvert, you can focus on delivering compelling media experiences without having to worry about the complexity of building and operating your own video processing infrastructure.

AWS Elemental MediaLive
AWS Elemental MediaLive is a broadcast-grade live video processing service. It lets you create high-quality video streams for delivery to broadcast televisions and internet-connected multiscreen devices, like connected TVs, tablets, smartphones, and set-top boxes. The service works by encoding your live video streams in real time, taking a larger-sized live video source and compressing it into smaller versions for distribution to your viewers. With AWS Elemental MediaLive, you can easily set up streams for both live events and 24×7 channels with advanced broadcasting features, high availability, and pay-as-you-go pricing. AWS Elemental MediaLive lets you focus on creating compelling live video experiences for your viewers without the complexity of building and operating broadcast-grade video processing infrastructure.

AWS Elemental MediaPackage
AWS Elemental MediaPackage reliably prepares and protects your video for delivery over the Internet. From a single video input, AWS Elemental MediaPackage creates video streams formatted to play on connected TVs, mobile phones, computers, tablets, and game consoles. It makes it easy to implement popular video features for viewers (start-over, pause, rewind, etc.), like those commonly found on DVRs. AWS Elemental MediaPackage can also protect your content using Digital Rights Management (DRM). AWS Elemental MediaPackage scales automatically in response to load, so your viewers will always get a great experience without you having to accurately predict in advance the capacity you’ll need.

AWS Elemental MediaStore
AWS Elemental MediaStore is an AWS storage service optimized for media. It gives you the performance, consistency, and low latency required to deliver live streaming video content. AWS Elemental MediaStore acts as the origin store in your video workflow. Its high-performance capabilities meet the needs of the most demanding media delivery workloads, combined with long-term, cost-effective storage.

AWS Elemental MediaTailor
AWS Elemental MediaTailor lets video providers insert individually targeted advertising into their video streams without sacrificing broadcast-level quality of service. With AWS Elemental MediaTailor, viewers of your live or on-demand video each receive a stream that combines your content with ads personalized to them. Unlike other personalized ad solutions, with AWS Elemental MediaTailor your entire stream, video and ads alike, is delivered with broadcast-grade video quality to improve the experience for your viewers. AWS Elemental MediaTailor delivers automated reporting based on both client-side and server-side ad delivery metrics, making it easy to accurately measure ad impressions and viewer behavior. You can easily monetize unexpected high-demand viewing events with no upfront costs using AWS Elemental MediaTailor. It also improves ad delivery rates, helping you make more money from every video, and it works with a wide variety of content delivery networks, ad decision servers, and client devices.

Amazon Personalize Now Available to All AWS Users

Amazon recently unveiled a new fully managed service under the AWS umbrella. Amazon Personalize is a machine learning tool that lets users create private, customized recommendation models for their applications. The tool is now available to all AWS users, after being available in preview since 2018. If you’re interested in making the most of this service for your organization, here’s what you should know.


About Amazon Personalize
Amazon Personalize makes recommendations to users based on the historical data you have stored in Amazon S3 and the streaming data that are sent in from your applications. It uses machine learning models that provide personalized buying experiences for users. Essentially, it takes the basic idea of the personalized recommendations Amazon already uses on its own platforms and makes it available in a simple form to developers. Amazon says that users don’t need to deal with complex infrastructures or complicated machine learning models to use the service.


Benefits
The ability to create these personalized experiences for customers gives organizations an edge. You can use your own data to send out timely video recommendations within apps or personalized notification emails. These experiences are designed to be more relevant to each individual, thus more likely to lead to a positive outcome for the business or organization behind it.

Additionally, building a personalization engine from scratch can be quite a challenging and time-consuming process. Amazon Personalize gives you a head start on the tools you need to create these recommendations.

How it works
Amazon Personalize users provide unique signals in activity data and demographic information to let the platform create those personalized recommendations. So you can have it look at page views, sign-ups, purchases, customer age, or location. Then you provide the inventory of items that you want to recommend, which can be anything from articles to products. Amazon Personalize then processes the data to identify what is meaningful and create algorithms to optimize a personalization model that’s customized to your customers and inventory. That model is also accessible via an API.
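
Once a campaign has been trained and deployed, fetching recommendations is a single runtime call. The sketch below assumes a campaign already exists; the campaign ARN and user ID are placeholders.

```python
# Hedged sketch: query a trained Amazon Personalize campaign with boto3.
import boto3

runtime = boto3.client("personalize-runtime", region_name="us-east-1")

response = runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/demo-campaign",  # placeholder
    userId="user-42",   # the user to personalize for
    numResults=10,
)

for item in response["itemList"]:
    print(item["itemId"], item.get("score"))
```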

What is Machine Learning on AWS and How it Works?

Machine learning on AWS is among the fastest-growing technologies today, and ML skills are among the most sought-after attributes in today’s job market.


This blog will give you an understanding of AWS ML and SageMaker.

What is Machine Learning?
Machine Learning is the study of various algorithms and models that a computer system uses to execute certain tasks without explicit instructions.


Machine Learning Methods:

Supervised ML Algorithm
In the supervised method, both input and output variables are given. The model learns from the input and output data to produce the desired output for new inputs.

Unsupervised ML Algorithm
In the unsupervised method, only input data is given. The model uses the input data alone to learn and produce the output.
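
A tiny, AWS-agnostic illustration of the difference, using scikit-learn: the supervised model fits inputs to known labels, while the unsupervised model finds structure in the inputs alone.

```python
# Supervised vs. unsupervised learning in a few lines of scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])            # labels are available: supervised

supervised = LinearRegression().fit(X, y)      # learns the mapping X -> y
print(supervised.predict([[5.0]]))             # roughly 10.0

unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)  # no labels: groups X by similarity
print(unsupervised.labels_)
```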

What is AWS SageMaker?
Amazon SageMaker is a machine learning service that helps developers and data scientists build and train machine learning models and then deploy them directly into a production environment. AWS SageMaker provides an integrated Jupyter authoring notebook instance for access to your data sources for exploration and analysis. AWS SageMaker also provides optimized algorithms that run efficiently against large data sets in a distributed environment.

AWS SageMaker provides the following features:

  1. Amazon SageMaker Studio
    Amazon SageMaker Studio is an environment to build, train, analyze and deploy models in a single application.
  2. Amazon SageMaker Ground Truth
    It is used to create high-quality training datasets.
  3. Amazon SageMaker Autopilot
    It is helpful to build classification and regression models quickly.
  4. Amazon SageMaker Model Monitor
    It continuously monitors the quality of machine learning models in production, such as detecting data drift.
  5. Amazon SageMaker Notebooks
    Notebooks with SSO integration, fast startup and single-click sharing.
  6. Amazon SageMaker Experiments
    It automatically tracks the inputs, parameters, configuration and results so you can easily manage your Machine Learning Experiments.
  7. Amazon SageMaker Neo
    It enables developers to train a model once and run it anywhere in the cloud or at the edge.
  8. AWS Marketplace
    It is the platform where customers can find, buy, deploy and manage third-party software, data and services.
  9. Amazon SageMaker Debugger
    It automatically detects errors during training and alerts you when they occur.
  10. Amazon Augmented AI
    It is used to implement Human review for Machine Learning predictions.
  11. Automatic Model Tuning
    It helps to find the best version of a model.


How does AWS SageMaker work?

  • Generate Data
    To design a solution for any business problem, we need data; the type of data depends on the problem. To preprocess the data, we need to do the following:
    1. Fetch the Data (Pull datasets into a single repository)
    2. Clean the Data (Inspect the data and clean it if needed)
    3. Prepare / Transform the Data (Combine attributes into new attributes to improve performance)
    In AWS SageMaker, you can preprocess the Data in Jupyter notebook instance.
  • Train a Model
    • Training the Model
      To train a model, you need to use an algorithm. You can use algorithms provided by Amazon SageMaker, or you can bring your own algorithm to train a model.
    • Evaluating the Model
      You evaluate the model to determine whether the accuracy of its inferences is acceptable. You can use the AWS SDK for Python (Boto3) or the high-level Python library provided by AWS SageMaker to send inference requests to the model. In AWS SageMaker, you can use a Jupyter notebook instance to train and evaluate the model.
  • Deploy the Model
    In AWS SageMaker, you can deploy your model using SageMaker Hosting Services.
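
A hedged end-to-end sketch of the train/evaluate/deploy flow above, using the SageMaker Python SDK with the built-in XGBoost algorithm; the IAM role, bucket names, and data paths are placeholders.

```python
# Hedged sketch: train and deploy a built-in XGBoost model with the SageMaker Python SDK.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.image_uris import retrieve
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"   # placeholder role ARN

estimator = Estimator(
    image_uri=retrieve("xgboost", session.boto_region_name, version="1.5-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/model-artifacts/",          # placeholder bucket
    sagemaker_session=session,
)

# Train on CSV data already staged in S3 (placeholder path).
estimator.fit({"train": TrainingInput("s3://example-bucket/train/", content_type="text/csv")})

# Deploy the trained model behind a real-time HTTPS endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print("Endpoint:", predictor.endpoint_name)
```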

What Is AWS IAM Access Analyzer?

Introduction to IAM

Identity and Access Management (IAM) is a service that helps you control access to AWS resources. IAM handles authentication and authorization for your AWS services. Using IAM, you can create multiple users and groups and grant or deny permission to access services in AWS.


What is IAM Access Analyzer?

IAM Access Analyzer is used to analyse resource policies and identify resources that can be accessed by an external principal from outside your account. External principals can be another AWS account, a root user, an IAM user, an IAM role, a federated user, an AWS service, an anonymous user, or other entities.


The account and the resources covered by an Access Analyzer make up its zone of trust. The Analyzer generates findings when a resource can be accessed from outside that zone.

Access Analyzer analyses and updates policies only within the region in which it is enabled. If you want to analyse policies in all regions, you must create an Access Analyzer in each region.

Why Access Analyzer?

IAM Access Analyzer helps you control access to your AWS services and resources. It gives you complete visibility into the resources you are sharing with external principals. This functionality is achieved by using logic-based reasoning to analyse resource-based policies in your AWS environment.

You can create an Access Analyzer for your account by enabling the service. Once the Analyzer is enabled, your account is its zone of trust, and the Analyzer can monitor all the resources and services within that zone.

Resources accessed from within the zone of trust are considered trusted resources. Once Access Analyzer is enabled, it analyses the policies applied to all supported resources in your account. After it finishes analysing the policies for the first time, it re-analyses them every 24 hours. If policies are changed or new policies are added, Access Analyzer analyses the updated policies within about 30 minutes.

While analysing policies, if Access Analyzer identifies an external principal that is not within the zone of trust, it automatically generates a finding that includes the resource and the permissions granted, so that the IAM user can take immediate action. Occasionally, Access Analyzer is not notified at the moment policies are added or updated; in that case, it analyses the changed policies during the next periodic scan.
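
A minimal boto3 sketch of the workflow just described: enable an account-level analyzer, then list its active findings. The analyzer name is a placeholder.

```python
# Hedged sketch: create an IAM Access Analyzer and list active findings with boto3.
import boto3

aa = boto3.client("accessanalyzer", region_name="us-east-1")

analyzer = aa.create_analyzer(
    analyzerName="account-analyzer",   # placeholder name
    type="ACCOUNT",                    # the zone of trust is this account
)
analyzer_arn = analyzer["arn"]

# Active findings = resources shared with principals outside the zone of trust.
findings = aa.list_findings(
    analyzerArn=analyzer_arn,
    filter={"status": {"eq": ["ACTIVE"]}},
)
for finding in findings["findings"]:
    print(finding["resourceType"], finding.get("resource"), finding.get("principal"))
```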

The benefit of using Access Analyzer:

  • Access Analyzer saves time spent analysing resource policies for cross-account and public accessibility
  • Access Analyzer gives you complete visibility into the resources you are sharing with external principals
  • All the resources within the zone of trust can be easily monitored
  • Access Analyzer generates findings if resources are accessible from outside the zone of trust
  • The Analyzer re-analyses the policies every 24 hours

How Access Analyzer works:

Access Analyzer generates findings based on resource policies that grant access to resources from outside the zone of trust. Operations within the zone of trust are considered safe and secure, so the Analyzer does not generate findings for them.

For example, if you grant another AWS account permission to an S3 bucket in your account, the Analyzer will generate a finding; if you grant permission on that bucket to an IAM role within your own account, it will not.

Access Analyzer supported resource types:

The following are the resources types that are supported by the IAM Access Analyzer:

Amazon Simple Storage Service buckets: While analysing an S3 bucket, Access Analyzer generates a finding when a bucket policy or ACL rule applied to the bucket grants access to an external principal, that is, to an entity that is not within the zone of trust.

Access Analyzer analyses the block public access settings at the bucket level whenever the policies are changed or updated, but it evaluates the account-level block public access settings only once every 6 hours.

AWS Identity and Access Management roles: Access Analyzer analyses role trust policies. A trust policy is the resource-based policy attached to an IAM role that defines which principals can assume the role. The Analyzer generates findings for roles whose trust policies grant access to principals outside the zone of trust, and it generates those findings only in the regions where it is enabled.

AWS Key Management Service keys: For AWS KMS, Access Analyzer analyses the key policies and grants applied to a key. The Analyzer generates a finding if an external entity is able to access the key. To do this, the Analyzer reads the key metadata and lists the grants on the key. If the key policy denies the Analyzer permission to read the key metadata, an Access Denied error finding is generated.

AWS Lambda Functions and Layers: Access Analyzer analyses the policies, along with any condition statements in the policy, that grant access to the function or layer to external entities.

Amazon Simple Queue Service Queues: Access Analyzer analyses the policies, along with any condition statements in the policy, that grant external access to the queue.

Stay tuned for our next blog to learn more about how to enable IAM Access Analyzer.

AWS Reinvent Reinforces the Growth of Cloud Computing

AWS held its annual user conference last week, and — as always — it brought home just how transformational cloud computing is. AWS announced a number of new services at the event; these made clear how far beyond the traditional IPS (Infrastructure/Platform/Software-as-a-Service) model AWS has moved.


Simply stated, if you are an enterprise IT organization and wish to participate in the future of IT, you have to adopt public cloud computing. It is where the most advanced technologies are introduced, and only it has the scale to provide the most functional versions of those technologies.


I’d like to look at a few of the new services AWS announced at Reinvent and discuss what they mean for IT organizations:

Amazon Athena
I call S3 the filing cabinet of the internet. It holds vast amounts of data from organizations large and small. Unfortunately, a lot of it is a hodge-podge of unstructured objects: spreadsheets, documents, logfiles, etc. What they have in common is no metadata schema. Nonetheless, AWS announced Athena, which provides an SQL interface to S3 data. I come from the relational database world, and I can’t imagine how AWS has figured out how to map unstructured data objects to the neat world of SQL. What I do know is that this is a tremendous capability and will provide ways for organizations to pore over objects and find critical information. Athena is also likely to be a convenient staging mechanism for data analytics — extract the data via Athena into other products for analysis.
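
As a quick sketch of what querying S3 through Athena looks like with boto3 (assuming a table has already been defined over the data; the database, table, and result bucket names below are placeholders):

```python
# Hedged sketch: run an Athena query against data in S3 and print the results.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",   # placeholder table
    QueryExecutionContext={"Database": "weblogs"},                            # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},   # placeholder bucket
)
qid = query["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```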

Amazon Lex, Polly, and Rekognition
Amazon’s Echo makes voice recognition a powerful tool available in a compact package. The explosion of what Amazon terms “skills” transforms Echo from a useful device into the hub of the connected home. As I wrote here, voice recognition will be the next application UI.

Unlike most companies that would treat a runaway hit like Echo as a crown jewel to be protected at all costs, Amazon released new services at Reinvent that make it possible for anyone to create an Echo-like device. Lex provides speech recognition and natural language understanding for speech or text input, and is designed to enable bot development. Polly is a text-to-speech service that allows voice-enabled application development; by the way, it supports 24 languages and provides 47 different voices. Rekognition is an image analysis service that allows sophisticated image parsing to identify faces, products, objects, and so on.
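
Both services are a couple of boto3 calls away. The sketch below synthesizes speech with Polly and detects labels in an S3-hosted image with Rekognition; the bucket and object key are placeholders.

```python
# Hedged sketch: call Polly (text-to-speech) and Rekognition (image analysis) with boto3.
import boto3

# Polly: synthesize speech from text and save it as an MP3 file.
polly = boto3.client("polly", region_name="us-east-1")
speech = polly.synthesize_speech(Text="Hello from AWS", OutputFormat="mp3", VoiceId="Joanna")
with open("hello.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())

# Rekognition: detect labels (objects, scenes) in an image stored in S3 (placeholder bucket/key).
rekognition = boto3.client("rekognition", region_name="us-east-1")
labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "photos/storefront.jpg"}},
    MaxLabels=5,
)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```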

The key fact about these machine learning services is that they improve with more data; as they are used by more people, they get more capable and accurate. AWS’s huge user base is likely to make these services leaders. More to the point, because of their focus on scale, they are beyond the scope of what any individual enterprise could implement on its own; they only really make sense as a cloud service.

Amazon CodeBuild
One of the hallmarks of DevOps is frequent builds. Most organizations want to do a build and initial test every time a developer checks in code. However, as development teams adopt this practice, and especially in large organizations, constant build processes can overwhelm the resources available for building applications. CodeBuild allows any organization unlimited build capacity. With the existing CodePipeline service, AWS removes barriers for any IT organization seeking to move to a streamlined application lifecycle.

Lambda@Edge and Greengrass
AWS created an entirely new computing paradigm when it announced Lambda. It provides the ability for users to upload code functions, which AWS loads and executes in response to events. These events can be external (e.g., a Lex-enabled program that triggers a function in response to an event from an IoT device) or internal (e.g., in response to insertion of an S3 object in a particular bucket).
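
For the second kind of trigger, the function itself can be as small as the sketch below: a handler that reads the S3 event payload for each newly inserted object (the bucket notification wiring is configured separately).

```python
# Minimal sketch of a Lambda handler invoked by an S3 object-created event.
import json

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object: s3://{bucket}/{key}")   # replace with real processing
    return {"statusCode": 200, "body": json.dumps("processed")}
```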


In a world in which those external devices can be spread across the globe, it is possible that significant latency may occur between the device and the Lambda function inside AWS. Lambda@Edge reduces that latency by providing Lambda endpoints at every AWS edge location. Greengrass removes the latency altogether by placing Lambda in IoT devices themselves. AWS has worked with processor manufacturers to embed Greengrass support right on their chips, allowing disconnected function processing at the device itself. Moreover, the service offers mesh networking, allowing communication among a collection of Greengrass devices located near one another. Naturally, Greengrass can also communicate with AWS when network connectivity is available, allowing data storage in the cloud.

With Lex, Polly, and Greengrass, AWS offers a path to a whole new world of smart, distributed applications and devices. I can’t wait to see what people build with these services.

These are only a few of the services that AWS announced at Reinvent. What they reinforce for me is how far cloud computing has come. For a long time IT organizations looked at the cloud as outsourced infrastructure and debated whether it was as good as or as cost-effective as what IT could operate for itself.

That debate is over. Even if an IT organization could run infrastructure better than Amazon, there’s no way it could hope to match the kinds of services I’ve examined in this piece. Amazon and its scale cloud brethren make it clear: to build tomorrow’s applications, IT organizations need to embrace public cloud computing.

AWS launches SageMaker Studio, a web-based IDE for Machine Learning & Data Science

At the re:Invent conference, AWS CEO Andy Jassy announced the release of SageMaker Studio, an integrated development environment for machine learning.

SageMaker Studio is a web-based IDE for building and training machine learning workflows. It brings code editing, training, job tracking, tuning, and debugging all into a single web-based interface.


SageMaker Studio includes everything a data scientist needs to get started, including ways to organize and manage notebooks, data sets, code, and model development and training.

SageMaker Studio attempts to solve important pain points for data scientists and machine learning developers and engineers by streamlining model training and maintenance workloads.


SageMaker Studio offers a number of collaboration features, such as the ability to share projects and folders with others working on the same project and to discuss notebooks and results.

SageMaker Studio is tightly integrated with AWS’s SageMaker machine learning service, so you can train your models directly and scale automatically based on your needs.

In addition to Studio, AWS also announced a number of other updates to SageMaker that are integrated into Studio, including Amazon SageMaker Notebooks, Amazon SageMaker Experiments, Amazon SageMaker Autopilot, Amazon SageMaker Debugger, and Amazon SageMaker Model Monitor.

One of my personal favorite features is SageMaker Notebooks, which lets you quickly spin up a Jupyter notebook for ML projects. The underlying compute for SageMaker Notebooks is also managed by AWS.

At its core, SageMaker Studio is based on JupyterLab, the next-generation interface from Project Jupyter, which is the most common environment used by data scientists for exploring data and ML algorithms.

SageMaker has long supported notebook instances, which require a user to log on to AWS and provision a virtual machine.

The new offering promises to launch notebooks “in seconds” and supports sharing with multiple users by integrating with AWS’s single-sign-on (SSO) services, allowing users to access notebooks hosted in AWS without requiring AWS-specific credentials.

It provides Jupyter notebooks running R and Python kernels on a compute instance that you can choose on demand to match your data engineering requirements.

With that, SageMaker Notebooks attempts to solve the biggest barrier for people learning data science: getting a Python or R environment working and figuring out how to use a notebook.

The studio delivers single-click Notebooks for the SageMaker environment, competing directly against Google Colab or Microsoft Azure Notebooks in the Notebook-as-a-Service category.

The SageMaker Studio includes an integration with the new SageMaker Experiments service.

It is designed to help ML practitioners manage large numbers of related training jobs, a problem that arises when searching for hyperparameters that lead to the best-performing model.

AWS introduced hyperparameter-tuning jobs in 2018; SageMaker Experiments adds an abstraction layer by introducing two core concepts: a trial, which is a training job with a certain configuration and set of hyperparameters, and an experiment, which is a group of related trials.

Another feature that grabbed my attention was SageMaker Autopilot, which automates the creation of machine learning models, automatically choosing algorithms and tuning models.

SageMaker Autopilot can automatically generate & run experiments given only a file containing a dataset.

Autopilot runs data pre-processing and feature-engineering jobs to infer the best model architecture before running hyperparameter-tuning jobs to find the best fit of that model.
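
A hedged sketch of kicking off that kind of job with boto3, consistent with the CSV workflow described in the quotes below; the job name, S3 paths, target column, and role ARN are placeholders.

```python
# Hedged sketch: launch a SageMaker Autopilot (AutoML) job with boto3.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot-demo",                                   # placeholder
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://example-bucket/churn/train.csv",                 # placeholder CSV
        }},
        "TargetAttributeName": "churned",                                   # column to predict
    }],
    OutputDataConfig={"S3OutputPath": "s3://example-bucket/autopilot-output/"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",        # placeholder
)

# Once the job completes, the candidate models (the "leaderboard") can be listed.
candidates = sm.list_candidates_for_auto_ml_job(AutoMLJobName="churn-autopilot-demo")
```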

“With AutoML, here’s what happens: You send us your CSV file with the data that you want a model for where you can just point to the S3 location and Autopilot does all the transformation of the model to put in a format so we can do machine learning; it selects the right algorithm.”

“Then it trains 50 unique models with a little bit different configurations of the various variables because you don’t know which ones are going to lead to the highest accuracy,” CEO Andy Jassy said onstage at re:Invent.

“Then what we do is we give you in SageMaker Studio a model leaderboard where you can see all 50 models ranked in order of accuracy. And we give you a notebook underneath every single one of these models so that when you open the notebook, it has all the recipe of that particular model.”


SageMaker Experiments is for training and tuning models automatically and capturing parameters when testing models. Older experiments can be searched by name, data set used, or parameters, making it easier to share and find models.

SageMaker Debugger is designed to improve the accuracy of machine learning models, while SageMaker Model Monitor is a way to detect concept drift.

“With concept drift, what we do is we create a set of baseline statistics on the data in which you train the model and then we actually analyze all the predictions, compare it to the data used to create the model, and then we give you a way to visualize where there appears to be concept drift, which you can see in SageMaker Studio,” Jassy said.

AWS SageMaker is a great tool for most data scientists who want to build a truly end-to-end ML solution. It’s a one-stop shop for all the machine learning tools and results you need to get started.

It abstracts away a ton of the software development work necessary to accomplish the task while remaining highly effective, flexible, and cost-effective.

Most importantly, it helps you focus on the core ML experiments and supplements the remaining necessary skills with easy, abstracted tools that fit your existing workflow.

What is AWS ECS? Running Docker in Production

Running Docker in production has quickly become the norm. Cloud hosting providers like AWS, GCE and Azure realized that this is what organizations need. Services like EKS and ECS from Amazon offer a completely managed environment for your Docker containers to run on. In this article, we’ll take a closer look at one of them, Amazon ECS, the Amazon Elastic Container Service. We are going to describe what AWS ECS is, its functions, and its importance in the current market.


If you don’t know what any of this means, then the rest of the article is going to help you with that. Suffice it to say, “fully managed” implies you don’t have to pay any third-party software vendor to run your containerized application. “Scalable” means you don’t have to worry, ahead of time, about resource utilization. AWS Cloud will make resources like CPU, memory and storage available to you on demand.


But why should you care about it? The reason is two-fold.

The first is the flexibility and scalability of the microservice-based architecture that most applications are now adopting. As it turns out, Docker containers are really good for deploying microservices. So, it stands to reason that your application may get shipped as a Docker image. If that’s not the case, you may not want ECS, since this service is exclusively for those who intend to run Docker containers.

The second reason is cost-effectiveness. This is true especially if you use AWS Fargate. It’s a pay-as-you-go model that bills you by the second (with a one-minute minimum) and can result in significant price reductions. You can also launch your own ECS tasks on top of EC2 instances.

What is AWS ECS?
Of the many services that AWS offers, like S3 for storage and VPC for networking, ECS falls into the category of compute services. This places it in the same category as Lambda functions and EC2 instances. Containers, just in case you don’t know, are like lightweight VMs that offer a secure environment for running an application isolated from all the other applications running on the same infrastructure.

So, what is AWS ECS?

Well, Amazon ECS runs and manages your containerized apps in the cloud, helping you save valuable time. Typically, running containers in the cloud involves spinning up compute resources (like EC2 virtual machines), installing Docker inside them, connecting it to your container image registry, securely, and then launching your containers on top of it.

The application itself is made up of multiple containers, each with its own specific nuances and attributes. So operators use tools like docker-compose to launch multiple containers. These tools themselves are under constant improvement and one update or another might render the entire stack unusable.

Amazon ECS is here to give you a unified approach to launch and manage Docker containers at scale in a production-ready environment.

ECS is designed to be a complete container solution from top to bottom. Docker images can be hosted in Amazon ECR (ECS’s sister project) where you can host your private image repository, create a complete CI/CD workflow and have fine-grained access control using IAM or ACL, etc.

Is It like Kubernetes or Docker Swarm?
Why not use Elastic Kubernetes Service from Amazon or any other container orchestration service like DC/OS, OpenShift, Kubernetes, or Docker Swarm? These technologies are all free and open source and can run on any cloud service. For example, if your Ops team is familiar with Kubernetes, they can set up Kubernetes and run applications on any cloud, not just on AWS.

Why bother with closed-source, externally-managed ECS, when others have better alternatives?

First and foremost, because of Kubernetes’ complexity. Kubernetes is a complex body of software with many moving parts. To get the most out of it, you either need to buy expensive support from your hosting provider, or your team will have to climb the steep learning curve themselves.

Secondly, the return on investment, especially for startups, is very small. Running your own container orchestration incurs an additional charge, which can be more than what running your application itself costs!

With ECS, there’s no additional charge for the orchestration itself. You only pay for the resources your applications consume.

Even when you are using Amazon EKS as your Kubernetes provider, you have to pay $0.20 per hour for the EKS control plane alone in the US. This doesn’t even include the EC2 instances that you will have to allocate as worker nodes. Keep in mind that there will be quite a few of these EC2 instances since the application is supposed to be scalable. Here is a beginner’s guide to creating Amazon EC2 instances.

Features and Benefits of Using AWS ECS
Amazon ECS comes packed with every feature you may already know and love about AWS and Docker. For example, developers running Docker on their personal devices are familiar with Docker Networking modes like Docker NAT and Bridge Networking. The same simple technology running on your laptop is what you see when you launch ECS containers across an EC2 cluster.

The same services that we use for logging, monitoring, and troubleshooting EC2 instances can be used to monitor running containers as well. Not only that, you can automate the action that needs to be taken when a certain event is seen in your monitoring system. For example, AWS CloudWatch alarms can be used for auto-scaling purposes. So when the load increases, more containers are spawned to pick up the slack, but once things are back to normal, then extra containers are killed. This reduces human intervention and optimizes your AWS bills quite a bit as well.
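
As a sketch of how that autoscaling is wired up (under hypothetical cluster and service names), Application Auto Scaling can register the service's desired count as a scalable target and attach a target-tracking policy; the CloudWatch alarms are then created and managed for you.

```python
# Hedged sketch: CPU-based target tracking for an ECS service via Application Auto Scaling.
import boto3

aas = boto3.client("application-autoscaling", region_name="us-east-1")

# Register the service's desired task count as a scalable target (placeholder cluster/service).
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/web",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Keep average CPU around 70%; extra tasks are added under load and removed afterwards.
aas.put_scaling_policy(
    PolicyName="web-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/web",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ECSServiceAverageCPUUtilization"},
    },
)
```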

Just as modern desktop programs are stored on disk as executable files, containers are stored as container images. A single application is made up of multiple containers, and each one of these has a corresponding image.

As your app evolves, new versions of these images are introduced, and various branches for development, testing, and production are created. To manage all of this, ECS comes with another service, ECR, which acts as a private repository where you can manage all the images and securely deploy them when needed.

You Will Need Docker
Docker is one of the key technologies that underlies all the container orchestration systems like Kubernetes, DC/OS, and Amazon ECS. Docker is what enables containers to run on a single operating system. This could be a desktop or an EC2 instance. Docker runs and manages various containers on the OS and ensures that they are all secure and isolated from one another as well as the rest of the system.

Technologies like ECS take this model of running multiple containers on a single OS and scale it up so that containers can run across entire data centers. Given the importance of Docker and its concepts, keep in mind that they are an essential prerequisite before you adopt ECS. Applications are broken down into microservices, and each of these microservices is packaged into a Docker container.

If Docker is not a core part of your development workflow, you should probably try to incorporate it. The free and open-source Community edition is available for most desktop operating systems including Windows, Mac OS, and most Linux distros.

You can start by choosing the base image upon which to build your application. If a microservice is written in Python, there are Python base images available to get started with. If you need MongoDB for data storage, there’s an image available for that as well. Start from these building blocks and gradually grow your application as new features are designed and added.

Containers are the fundamental unit of deployment for ECS. You will have a hard time migrating to ECS if your app is not already packaged as a Docker image. Conversely, if you have a “Dockerized” application, you do not have to overcomplicate the task with Docker Compose or Docker Swarm. Everything else, including how the containers will talk to one another, as well as networking, load balancing, and more, can be managed on the ECS platform itself.

In terms of your Docker skillset, you only need to be aware of the basic networking, volumes, and Dockerfiles.

Tools You Won’t Need
If you are already accustomed to using Docker, there are a plethora of services that can help you deploy Docker containers. Services like Docker Compose help you deploy applications that are made up of multiple containers. You can define storage volumes, networking parameters and expose ports using Docker Compose.

However, most of these tools are limited to a single VM or to Docker Swarm exclusively. Services like Docker Swarm are incompatible with AWS ECS. Creating a Docker Swarm setup typically involves launching a cluster of EC2 instances, installing Docker on all of them, running Docker Swarm, and creating a swarm out of the EC2 instances. Then you have to install Docker Compose, write docker-compose.yaml files for each application and then deploy them.

Furthermore, you will need to maintain and update all the underlying software like Docker, Docker Compose, Docker Swarm, and then ensure that the docker-compose files are compatible with the new versions.

AWS ECS, on the other hand, takes all of that away from you. You don’t need to allocate EC2 instances for Docker Swarm master nodes. You won’t have to worry about updating any of the container management software. Amazon does that for you. You can deploy multi-container applications using a single Task definition. Task definitions replace your docker-compose.yml files and can be supplied either using the Web Console or as a JSON payload.
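
A hedged sketch of what such a task definition might look like when registered with boto3 instead of the web console; the family name, image, and sizes are placeholders.

```python
# Hedged sketch: register a small Fargate-compatible task definition with boto3.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="web-frontend",                    # placeholder family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",                                # 0.25 vCPU at the task level
    memory="512",                             # 512 MiB at the task level
    containerDefinitions=[{
        "name": "nginx",
        "image": "nginx:1.25",                # any registry or ECR image
        "essential": True,
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
    }],
)
```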

ECS, when used with services like AWS Fargate, can take even the EC2 instances away. You still have to pay for compute and memory, but because you no longer pay for whole EC2 instances whose capacity your containers never fully consume, the result is cost savings.

Pricing for AWS ECS
The pricing model for ECS depends on one important question: where are your containers running? They can run either on AWS Fargate or on your own EC2 cluster.

If you choose the traditional way of running containers on EC2 instances, then you simply pay standard EC2 prices. Every EC2 pricing policy works: you can use Spot Instances for non-critical workloads, On-Demand Instances, or Reserved Instances, whichever makes economic sense for your applications.

If you are using AWS Fargate to run your containers on, then the pricing consists of two independent factors:

CPU requested: Here, you typically pay up to $0.06 per hour per vCPU

Memory requested: This is priced at $0.015 per hour per GB of memory

When you define your services, you will set the values for vCPU and memory for each different kind of container you will be launching. At the end of the month, your Amazon Fargate bill would include memory utilization charges plus the CPU utilization charges.

You are billed by the second, with a minimum of one minute of usage any time you run an ECS task on Fargate. Overall, you are billed from the instant you start your task to the moment that task terminates. The pricing differs from one region to another, and you can visit this page for more details. The CPU values start from 0.25 vCPU all the way up to 4 vCPUs and each CPU value has a minimum and maximum memory that can be associated with it.

For example, a single vCPU needs at least 2GB of memory and can’t have more than 8GB of memory.
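
A back-of-the-envelope illustration of those two dimensions, using the example rates quoted above (actual rates vary by region):

```python
# Rough monthly cost for one always-on Fargate task at the rates quoted in this article.
vcpu_rate, mem_rate = 0.06, 0.015   # USD per vCPU-hour and per GB-hour (example rates)
vcpu, mem_gb = 1, 2                 # 1 vCPU with its minimum of 2 GB memory
hours = 730                         # roughly one month of continuous running

monthly = hours * (vcpu * vcpu_rate + mem_gb * mem_rate)
print(f"~${monthly:.2f} per month per always-on task")   # about $65.70
```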

The bill you incur depends upon the way your application scales. Suppose you are running a single container and suddenly the workload spikes up. Then the application will autoscale and spawn, say, n more containers. This would result in n times normal resource utilization. Consequently, in times of peak load, you will be charged more.

AWS Fargate can save you a lot of money, if you want to run containers for batch processes like data processing and analytics. For services, like web servers, which are supposed to be active all the time, your billing would not differ all that much from EC2 prices. However, you may still want to leverage ECS for running containers over EC2, because containers come with a whole different set of advantages.

Amazon ECS Architecture
Tasks and Task Definition

An application consists of many microservices, and each of these services can be shipped as a Docker image (a container image). You define an ECS task definition, within which you select the Docker image and the CPU and memory allocated per container. IAM roles can be associated with the task definition for granular privilege control, and various other Docker-specific parameters, like networking mode and volumes, can be specified here as well.

You can have multiple containers inside a single task definition, but rarely should you ever run your entire application on it. For example, if you are running a web app, a task definition can have the front-end web server image. Similarly, you can have a different task associated with your backend database.

Later, you may realize that your app can perform better if the front-end has a caching mechanism. So you can update the task definition to include a Redis container to go along with your front-end container.

To summarize, you can have multiple closely-related containers in a task. A task is run according to its task definition. The task definition can be updated to update a part of your application. Notice, you don’t touch the backend software when you update the front-end task definition.

If you are familiar with Kubernetes, then tasks are similar to pods in a Kubernetes cluster.

Service
Remember that we still have to ensure that our application is scalable. Services are what allow us to do that. It’s the next level of abstraction on top of tasks. You can run multiple instances created from the same task definition across your entire cluster (multiple EC2 instances, for example).

Services help you autoscale your application based on CloudWatch alarms; they can have load balancers to distribute the incoming traffic to individual containers and are the interface via which one part of your application talks to another. Going back to our previous example, a web server doesn’t directly talk to the database but instead talks to the database service, which in turn, talks to the underlying containers running your database server. This is the service in a microservice-based architecture. Kubernetes has a similar concept with the same name.
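
A hedged sketch of creating such a service from a previously registered task definition, running two copies on Fargate; the cluster name, subnet, and security group IDs are placeholders.

```python
# Hedged sketch: create an ECS service that keeps two copies of a task running on Fargate.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.create_service(
    cluster="demo-cluster",                      # placeholder cluster
    serviceName="web",
    taskDefinition="web-frontend",               # family registered earlier
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],       # placeholder subnet ID
            "securityGroups": ["sg-0123456789abcdef0"],    # placeholder security group ID
            "assignPublicIp": "ENABLED",
        }
    },
)
```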

Cluster, VPC and Networking
Lastly, you may want to logically separate one set of services from another. Say you have multiple applications. You can create a different ECS Cluster for each one of them. Inside each Cluster would reside the services that make up the application and inside those services the tasks run.

Moreover, from a security standpoint, it is better to run each ECS cluster on its own VPC (Virtual Private Cloud). This provides you with a range of private IP addresses, which you can further split into subnets if you so desire. Sensitive information can reside in a separate subnet with only one gateway; that way, if a service has a vulnerability and gets compromised, the attacker may not be able to reach the sensitive data.

The ECS console creates a VPC for you if you don’t have one.

AWS Fargate
We have talked a little about Fargate before and how it is different in terms of pricing from the regular EC2 clusters and how management is simpler with it. Let’s take a closer look at it.

A given ECS cluster can pool compute resources from both EC2 and AWS Fargate and schedule containers across them as and when needed. However, when you are writing task definitions, you need to specify whether the task will run on AWS Fargate or on EC2.

Besides the ease of management and a highly-scalable model that AWS Fargate offers, it also offers the right environment to practice running containers in production. You don’t get the option of tweaking the underlying VM or restarting your container from the Docker host. This is important if we are ever going to run containers on bare metal servers.

The ultimate goal for cloud providers is to run containers from multiple users on the same server, instead of virtualizing the hardware and then running containers on top of it. We as application developers should no longer desire to “restart our containers” from the VM. Worse still is having an implicit assumption that your container will run in an isolated VM instead of a multi-tenant environment.

AWS Fargate doesn’t let you get away with those assumptions. Instead, it encourages cloud-native logging and monitoring solutions and fine-grained access policies, and it allows you to build apps that are ultimately scalable without you having to spin up more VMs or EC2 instances. Some things are still region-specific, but it is certainly a step in the right direction.

To Summarize
Amazon ECS, from a business perspective, is easy to use as a means of learning to manage and deploy apps. It lets you run Dockerized apps across multiple EC2 instances or on Amazon Fargate without paying for control nodes or setting up Kubernetes or any other distributed system on your own.

Yes, there is always a fear that this will lead to vendor lock-ins, but Docker containers are fairly portable to begin with, so if you wish to migrate away from AWS you won’t have to rewrite your code. You can also save a significant amount of money in terms of your AWS bills if you use AWS Fargate and/or set up auto-scaling to leverage the pay-as-you-go model of AWS.

Finally, running Docker containers in production is the way forward. Adopting technologies like ECS will also make your application and your team well prepared for the multi-tenant cloud computing environment.

Auto Remediation with VMware Cloud on AWS

One of the benefits of running your workloads in VMware Cloud on AWS is that VMware manages the platform, including all of the infrastructure and management components. VMware also performs regular updates across the SDDC fleet to deliver new features, bug fixes, and software upgrades.


Operationalizing common tasks for these components is crucial. The Autoscaler Service within the platform helps with this. Autoscaler consists of three primary functions:


  • Auto Remediation: Replace problematic infrastructure based on virtual infrastructure events.
  • Planned Maintenance: Replace Amazon EC2 instances and vSAN witness virtual machines (VMs) that are scheduled for retirement.
  • Dynamic Scalability: Scale the SDDC up or down dynamically based on resource usage. You can read more about this in my Elastic DRS blog post.
The goal is to ensure your SDDC is truly elastic and self-healing without impacting the hosted workloads.

Continuous monitoring and validation
The three primary functions above can be carried out because we monitor the health of various SDDC components and services all the time. When an event occurs, it is forwarded to the Autoscaler, which reacts very quickly to validate and execute a remediation plan based on the type of event.


Prior to executing the remediation plan, the service will validate the condition. This is useful in the event of a transient error – for example, a minor network glitch may fire a false positive. If an event is thrown stating that a host is disconnected from vCenter, when in reality it is connected and healthy, further validation would ignore the event instead of attempting to remediate.

If the event is validated and identified as a real failure, we can now execute our remediation plan.

Remediation in the event of a failure
Let’s look at a host failure example. Whether on-premises or in the cloud, components within a host can and will fail.

Sometimes, it’s a minor issue – such that the host is running, but in a degraded state. This could be a redundant component like a fan or power supply, or even a single memory module. Other times, the component failure could be catastrophic – such as a processor or system board. In this case, Autoscaler receives the event, validates it, and then springs into action.

A key advantage of VMware Cloud on AWS is that we always have access to a fleet of hardware. This allows us to provision and add a host immediately to the cluster to ensure there is enough compute and storage capacity to perform VM migrations or an HA reboot if necessary. If a non-transient event occurs, a host is provisioned and added to the cluster before remediation action continues.

It’s important to note that you would never be charged for the addition of a host during auto remediation processes. The only time you would be charged for a host is in the event of an Elastic DRS (EDRS) scale-up due to storage or compute restraints from customer workloads.

Sample remediation plans
Let’s look at some high-level examples of what remediation plans might include:

  • IF host experienced PSOD, THEN collect EBS snapshot and reboot host
  • IF host is still not healthy, THEN remove and re-sync vSAN data to new host
  • IF vSAN is not healthy, THEN soft reboot host and trigger vSAN repair
  • IF host has history of multiple failures, THEN remove and re-sync vSAN data to new host

Of course, these are high-level examples, and the workflows can range from very simple to complex in an effort to maintain SDDC availability.

As mentioned above, remediation steps only occur after an additional host has been successfully added to the cluster. Once remediation has been performed – and if the failure in the original host was able to be resolved and health checks are passed – the newly added host will be placed in maintenance mode and removed from inventory.

However, if the failed host could not be recovered, then it will be removed and the newly added host will now remain in the cluster. Once a failed host is removed from the cluster, it is returned to the fleet for AWS to repair.

Building a truly resilient SDDC
While the above example referenced a host hardware/component failure, Autoscaler will also address software failures such as PSODs, vCenter, vSAN, FDM, and so on.

It’s all about giving you access to the services and workflows that enable your SDDC to be truly resilient and highly available.

What Is Amazon Web Services and Why Is It so Successful?

Amazon Web Services (AWS) was a little known, infrequently thought about part of Amazon.com Inc (AMZN) until this year. This year was the first time in the division’s nine-year history that Amazon revealed its revenue figures, and were the numbers ever shocking. In the first quarter of 2015, AWS brought in over $1.5 billion of revenue, a figure which grew to $1.8 billion the following quarter and to $2 billion in the third quarter. More recently, AWS generated nearly $7.3 billion in operating income in 2018, more than half of Amazon’s total. What is AWS, and why is it so lucrative and successful for Amazon?


KEY TAKEAWAYS

  • Amazon is one of the world’s most valuable companies, but it does not actually make a majority of its income from selling books and other items.
  • Amazon’s main profit driver is Amazon Web Services, or AWS – the company’s cloud computing and web hosting business.
  • Amazon controlled more than a third of the cloud market in 2018, more than twice its next closest competitor.


What Is AWS Exactly?
AWS is made up of many different cloud computing products and services. The highly profitable Amazon division provides servers, storage, networking, remote computing, email, mobile development and security. AWS can be broken into two main products: EC2, Amazon’s virtual machine service, and S3, Amazon’s storage system. AWS is so large and present in the computing world that it’s now at least 10 times the size of its nearest competitor and hosts popular websites like Netflix Inc (NFLX) and Instagram.

AWS is divided into 12 global regions, each of which has multiple availability zones in which its servers are located. These serviced regions are divided in order to allow users to set geographical limits on their services (if they so choose), but also to provide security by diversifying the physical locations in which data is held.

Cost Savings
Jeff Bezos has likened AWS to the utility companies of the early 1900s. One hundred years ago, a factory needing electricity would build its own power plant but, once the factories were able to buy electricity from a public utility, the need for pricey private electric plants subsided. AWS is trying to move companies away from physical computing technology and onto the cloud.

Traditionally, companies looking for large amounts of storage would need to physically build a storage space and maintain it. Storing on a cloud could mean signing a pricey contract for a large amount of storage space that the company could “grow into”. Building or buying too little storage could be disastrous if business took off, while building or buying too much could prove expensive if it didn’t.

The same applies to computing power. Companies that experience surges in traffic would traditionally end up buying loads of computing power to sustain their business during peak times. During off-peak times (May for tax accountants, for example), that computing power lies unused but still costs the firm money.

With AWS, companies pay for what they use. There’s no upfront cost to build a storage system and no need to estimate usage. AWS customers use what they need and their costs are scaled automatically and accordingly.

Scalable and Adaptable
Since AWS’s cost is modified based on the customers’ usage, start-ups and small businesses can see the obvious benefits of using Amazon for their computing needs. In fact, AWS is great for building a business from the ground up, as it provides all the tools necessary for companies to start up with the cloud. For existing companies, Amazon provides low-cost migration services so that your existing infrastructure can be seamlessly moved over to AWS.

As a company grows, AWS provides resources to aid in expansion and as the business model allows for flexible usage, customers will never need to spend time thinking about whether or not they need to reexamine their computing usage. In fact, aside from budgetary reasons, companies could realistically “set and forget” all their computing needs.

Security and Reliability
Arguably, AWS is much more secure than a company hosting its own website or storage. AWS currently has dozens of data centers across the globe which are continuously monitored and strictly maintained. The diversification of the data centers ensures that a disaster striking one region doesn’t cause a permanent data loss worldwide. Imagine if Netflix were to have all of their personnel files, their content and their backed-up data centralized on-site on the eve of a hurricane. It would be madness.

In fact, even absent a natural disaster, localizing data in an easily identifiable location where hundreds of people can realistically obtain access is unwise. AWS has tried to keep their data centers as hidden as possible, locating them in out-of-the-way locations and allowing access only on an essential basis. The data centers and all the data contained therein are safe from intrusions and, with Amazon’s experience in cloud services, outages and potential attacks can be quickly identified and easily remedied, 24 hours a day. The same can’t be said for a small company whose computing is handled by a single IT guy working out of a large office.

The Bottom Line
AWS is a cash cow for Amazon. The services are shaking up the computing world in the same way that Amazon is changing America’s retail space. By pricing its cloud products extremely cheaply, Amazon can provide affordable and scalable services to everyone from the newest start-up to a Fortune 500 company.

