Saturday, 27 December 2014

Amazon Web Services (Cloud Concepts, Design, Implementation, Practice)

Activities @ Lynda (Personal Notes) - 

DONE Amazon Web Services Essential Training with Jeff Winesett

Cloud Concepts

Cloud Services Categories (Hosted)
  • IaaS (i.e. AWS, Rackspace)
  • PaaS (i.e. AWS Elastic Beanstalk, Google App Engine, Heroku)
  • SaaS (i.e. Google Apps, Basecamp, Mint)
Cloud (Elastic-nature) Business Benefits
  • JIT infrastructure to match and outpace fluctuating demand with capacity of supply:
    • Scalable Infrastructure is enabled by the cloud's Elasticity property (expansion and contraction), which allows adaption and accommodation of changes, including growth or a diminishing workload or demand, rapidly (i.e. scale out extra web server tiers within minutes)
    • Capacity planning and estimation becomes ongoing (after initial predictions), monitoring usage and demand rather than committing to hardware procurement, avoiding over-allocation (wastage, overpaying with large upfront investments) and under-allocation (down-time and denial of access, overcome by rapidly scaling out to satisfy customers when CPU utilisation reaches a threshold indicating full capacity, and scaling in again when servers are no longer needed)
    • First requires designing and building a Scalable Application Architecture that incorporates the Elasticity Concept of the cloud to take advantage of this scalability
  • Efficient on-demand resources (avoids planning for hardware procurement and idle hardware)
  • Pay-for-usage (allowed to test different configurations with low infrastructure costs upfront)
Technical Implementation & Benefits
  • Automation (scriptable leveraging AWS APIs to achieve repeatable build and deployment systems with less boilerplate emphasis and increased reliability and sustainability) 
  • Elasticity / Auto-Scaling Types (automatically and reactively scale cloud computing system to match event demand). 
    • Elasticity Dfn: Ability to Scale Cloud Resources (Up, Down, In, Out)
    • Scaling types:
      • Horizontal Scaling
        • Scaling Out { Increase Capacity to system by adding Components / Nodes i.e. add web servers to handle traffic }
        • Scaling In { Remove capacity from system by reducing Components / Nodes i.e. scale in web server tier by reducing them }
      • Vertical Scaling
        • Scaling Up { Increase Resources to the systems' Single Component / Node (to Increase Capacity to handle load) i.e. increase quantity of CPUs or memory of database server to scale up capacity of server }
        • Scaling Down { Reduce Resources of the system's Single Component / Node }
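The four scaling operations above can be illustrated with a toy server fleet (class names and sizes are made up for illustration, not AWS APIs):

```python
# Illustrative sketch of horizontal vs vertical scaling on a toy fleet.
from dataclasses import dataclass, field

@dataclass
class Server:
    cpus: int = 2
    ram_gb: int = 4

@dataclass
class Fleet:
    servers: list = field(default_factory=lambda: [Server()])

    def scale_out(self, n):          # Horizontal: add Components / Nodes
        self.servers += [Server() for _ in range(n)]

    def scale_in(self, n):           # Horizontal: remove Components / Nodes
        del self.servers[-n:]

    def scale_up(self, extra_cpus):  # Vertical: grow a Single Component
        self.servers[0].cpus += extra_cpus

fleet = Fleet()
fleet.scale_out(2)    # 3 servers now share the traffic
fleet.scale_up(2)     # first server now has 4 CPUs
fleet.scale_in(1)     # back down to 2 servers
```

Horizontal scaling changes the number of nodes; vertical scaling changes the resources of one node.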
  • Auto-Scaling Initial Setup 
    • Automate Deployment Process (leveraging AWS APIs for automation capabilities, to reduce errors, and ensure efficient scaling methods adopted)
    • Streamline Initial System Configuration / Application Build Process to accommodate Scaling (scripting and checking in AWS Console UI)
      • Bootstrapping Instances (creates Startup Process that runs on its own in AWS context on an EC2 Instance)
  • Auto-Scaling Implementation Methods
    • Proactive Scaling (Schedule-based) (automatically scale architected system Components on regular fixed basis i.e. daily, weekly, quarterly) (i.e. highly predictable traffic patterns and demand known)
    • Proactive Scaling (Event-based) (automatically scale architected system Components in anticipation of planned event demands, i.e. Scale Out or Up due to a large marketing campaign where a large increase in demand is forecast)
    • Proactive Scaling (Metrics-based) (setup monitoring of metrics to automatically scale architected system Components upon certain metric thresholds being breached, i.e. Scale Out or In depending on whether a metric (i.e. CPU utilisation, network I/O) is above or below the threshold)
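A minimal sketch of the metrics-based approach, averaging a metric over a monitoring window before comparing against thresholds (threshold values are illustrative):

```python
# Sketch of a metrics-based scaling decision over a monitoring window.
def scaling_action(cpu_samples, high=70.0, low=20.0):
    """Return 'scale_out', 'scale_in', or 'hold' for a window of CPU readings."""
    avg = sum(cpu_samples) / len(cpu_samples)   # average over the window
    if avg > high:
        return "scale_out"   # sustained high utilisation: add instances
    if avg < low:
        return "scale_in"    # sustained low utilisation: remove instances
    return "hold"

assert scaling_action([82, 91, 88]) == "scale_out"
assert scaling_action([10, 12, 8]) == "scale_in"
assert scaling_action([45, 55, 50]) == "hold"
```

Averaging over a window (rather than reacting to a single sample) avoids flapping on brief spikes.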
  • Cloud System Architecture
    • Risk Assessment Workshops
      • Dfn: Risk Assessment Workshops upfront at Design-time to consider potential causes of failure for each Component across the System Architecture, and propose recovery strategies to achieve stable, robust, and improved end-product 
      • Example #1 System Architecture with Fail-Over Secondary to avoid Single Points of Failure (SPF) 
        • Risk Assessed #1: Achieve Design Stability and avoid overall System Failure from Single Points of Failure (SPF) (i.e. web application with a single DB tier instance serving an entire system of multiple web servers, such that the entire system is offline if the DB is offline)
        • Mitigation Strategy #1: Upfront Architecting and building the overall System with minimal Dependency between its Components (Loose Coupling), and building it to expect individual Component-Level Failure (i.e. Architect in AWS to leverage the RDS service, which synchronises a Primary DB instance (still serving the multiple web servers) with an additional standby Fail-Over Secondary DB server instance, set up to avoid a SPF by switching via a Fail-Over Process (leveraging various software, hardware, and networks) to become the active DB instance in case of a Component failure)

    • Loose Coupling (Decouple Components)

      • Dfn: Software Architecture Design Principle of minimising Dependencies between Components (Loosely-Coupled Components) to improve Scalability of overall System 
      • Example #1 Loose Coupling VS Tight Coupling (i.e. in a Web Application Architecture of multiple web servers connected via a Load Balancer (a Component that facilitates Distribution of web server connections to any number of app servers) to multiple application servers, which are in turn connected to DB server instances, each web server and application server does not know much about the others (if we architected the system so they needed to know exactly what to connect to, we'd have Tight Coupling between Components). Loose Coupling removes the Dependencies between web server and app server Components, allowing ease of Adding / Removing Components to match demand without impacting other Components, and ensuring that in case of Component Failure, other Components still work)
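The load-balancer indirection described above can be sketched in a few lines: web tier requests go to "the app tier" rather than a specific server, so servers can be registered or removed without the callers changing (all names here are illustrative):

```python
# Sketch: loose coupling via load-balancer indirection.
class LoadBalancer:
    """Callers talk to the balancer, never to a specific app server."""
    def __init__(self):
        self.app_servers = []
        self._next = 0

    def register(self, server):      # add a Component without impacting callers
        self.app_servers.append(server)

    def deregister(self, server):    # remove a Component without impacting callers
        self.app_servers.remove(server)

    def handle(self, request):       # round-robin distribution
        server = self.app_servers[self._next % len(self.app_servers)]
        self._next += 1
        return server(request)

lb = LoadBalancer()
lb.register(lambda req: "app1:" + req)
lb.register(lambda req: "app2:" + req)
assert lb.handle("x") == "app1:x"
# An app server can fail or be removed; the web tier's code is unchanged:
lb.deregister(lb.app_servers[0])
assert lb.handle("y") == "app2:y"
```

The tightly coupled alternative would have each web server hold a hard-coded reference to one app server, so removing that app server breaks its callers.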
    • Data Security Responsibilities (for Implementation and Maintenance in Cloud Computing)
      • Cloud Provider (i.e. AWS): Underlying Physical-level Infrastructure Security and Services (buildings, infrastructure, equipment, inter-customer security separation)
      • Cloud User: Network-level & Application-level Security (Operating Systems, Platforms, and Data)
        • Protect confidential data transmissions (data in motion) using SSL Certificates, managed either at the Load Balancer-level or directly across multiple AWS EC2 Instances serving similar HTTPS requests, particularly for API requests sent over the Public Internet. For applications using multiple AWS services requiring the AWS Secret Access Key, either build the app to pass in the AWS Secret Access Key as an argument during launch (encrypting it before sending), or launch the instance with an AWS Identity & Access Management (IAM) role { manages access control user roles, groups (with access granted only to the services required, rather than distributing Root Account Info), and permissions } so it has access to the associated credentials
        • Protect data at rest by Encrypting Data before storage on any Device Storage location, and Encrypt File Systems Storage of both the Elastic Block Storage type (persists beyond the lifetime of the underlying instance) and Local Storage type (does not survive termination of the instance where it resides). Protect and rotate (in case compromised) Security Credentials { AWS Access Keys (Access Key ID and Secret Access Key, used in API call requests for authentication) and X.509 Certificates }, and apply Application Security using AWS Security Groups (firewalls restricting access to associated instances)
        • Updating software packages and apply security patches (pre-Cloud security practices)
  • Efficient Software Development Lifecycle and Improved Testability (leveraging flexibility of cloud without hardware limitations i.e. clone production for use as development and test env.)
  • Disaster Recovery (allows ease of Replicating Configuration to deploy app to cloud-based hosts in different geographical locations)
Switching to AWS Cloud
  • Resource Requirements determination
  • Trial Mapping a replica of existing rigid fixed systems (i.e. hardware) to scalable elastic architectures in the cloud (exact specifications of existing on-premises infrastructure are no longer a constraint), changing resources to adapt to a scalable and distributed structure:
    • Re-think the Application Architecture if any Constraints are encountered in the near-infinitely scalable cloud infrastructure (where combinations of resources and services should allow constraints to be overcome), i.e. distribute a server's traditional need for centralised hardware RAM by leveraging the cloud's elastic nature with a distributed in-memory cache cloud service (i.e. AWS ElastiCache)
    • i.e. if the DB requires more IOPS (Input / Output operations per second), alternatives depend on the data's end use-cases and may include Scaling Out the DB layer by distributing it across a Cluster (single layer) to accommodate needs, or leveraging DB Sharding techniques to route data / requests to appropriate targets
  • Build Cloud-based System leveraging cloud building blocks combined with the cloud On-Demand Provisioning Model
Cloud Solutions Developer
  • Dfn: Understands and integrates overlapping roles when building technology solutions using flexible and on-demand cloud services whilst sharing knowledge
  • Systems & Network Administration
    • Old Role: Server provisioning, software installation, network wiring
    • New Role: API calls, automation of abstract programmable virtual cloud resources and applications of underlying infrastructure (with an understanding of the applications to be supported), business decisions
  • Database Administration (DBA)
    • Old Role: Relational Database Management Systems (RDBMS) without customisation
    • New Role: Web-based console using scripts to alter DB capacity, using Virtual Machine (VM) Images for deployment, and provide high availability and reliability of data tiers by embracing Geographic Redundancy and Async Replication, test alternate storage options, rethink Data Architecture (leveraging Sharding Techniques like Horizontal Partitioning)
  • Application Developers
    • New Role: Understanding infrastructure and systems supporting applications

Cloud Design

EC2 (Virtual Server Instances) 
  • Dfn: Elastic-natured Servers launched in Cloud Compute services at AWS Regions (geographically independent collections of AWS resources, with no data replication between them, located in the US, EU, South America, and Asia Pacific, each comprising multiple Availability Zones (AZ))
  • Example #1 Redundancy and Encryption in Multiple AZs and Regions (i.e. given System Requirements of strict Disaster Recovery and High Availability with redundant and isolated systems in geographically isolated locations (to mitigate against unforeseen local physical disasters in one location affecting others), try deploying the architecture to different Regions, using Encryption methods to protect sensitive data when communicating between Regions across the public internet; or alternatively rely on one Region and its multiple associated AZs (logical data centers), which are distinct locations (Data Centers) isolated from the failure of one another, but with low-latency, inexpensive, fast fibre network interconnections within the same Region)
Elastic IP Addresses (in the Cloud)
  • Dfn: Static IP Addresses controlled and associated with AWS Account until released that allow easy remapping of the public Static IP Address to any Instance of the AWS Account (designed for dynamic cloud computing)
  • Example #1 Elastic IP abstracted to AWS Account Remaps after Failover (i.e. given a web app domain name associated with a specific IP address by your DNS server, when using AWS EC2 with an Elastic IP (designed for failure), disaster recovery is handled by the user performing failover, simply remapping the association between the Elastic IP address (tied to the AWS Account) and a new replacement set of servers, without any DNS changes)
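The failover idea above reduces to re-pointing an account-level IP-to-instance association (addresses and instance IDs below are illustrative placeholders):

```python
# Sketch: an Elastic IP is an account-level association that can be
# remapped to a replacement instance on failover, with no DNS change.
elastic_ips = {"203.0.113.10": "i-primary"}    # EIP -> instance association

def failover(eip, replacement_instance):
    # DNS still points at the EIP; only the association changes.
    elastic_ips[eip] = replacement_instance

failover("203.0.113.10", "i-replacement")
assert elastic_ips["203.0.113.10"] == "i-replacement"
```

Because DNS records reference the stable Elastic IP, no DNS propagation delay is involved in the recovery.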
AMI (Amazon Machine Image) (aka Units of Deployment)
  • Dfn: Packaged environment with a Software Stack that is specified when ready to setup and launch one or multiple EC2 Instances to each correspond to different Components of the System Architecture (i.e. Web Servers, App Servers, DB Servers). Maintain AMI to easily restore and clone environments across multiple AZs
  • AMI's Composition includes:
    • Root Volume Template (for EC2 Instance) comprising:
      • Operating System / Platform (i.e. Debian Linux, Ubuntu, GNU Linux, Cent OS, Red Hat, OpenSUSE Linux, Amazon Linux, Windows Server, Fedora, Gentoo)
      • App Server & Applications
    • Permissions (AMI Accounts that can use AMI to launch EC2 Instances)
    • Block Device Mapping (specifies Volumes to attach to the EC2 Instances)
  • AMI (EC2 Service) Tools (to create AMIs)
    • AWS Management Console
      • List Available AMIs (EC2 > Images > AMIs)
      • Filter by Public Images, AWS Images (My AMIs), AWS Marketplace Images (catalog of open source and commercial vendor software), Community Images (contributions by individuals and dev teams to share dev projects), Platforms
      • Create New AMI of an Instance (performed after: EC2 Instance setup, running, and configured for app) (EC2 > Instances > Instances > Launch Instance > Choose AMI > Quick Start (Server Instance from Existing AWS Package, including OS and supported languages and DBs)
      • Re-Create Instance replica multiple times (Right-Click Individual AMI > Launch Instance) across multiple AZs using the created AMI (programmatically using API or with Auto-Scaling Service) to handle heavy app load or replace failing component
ELB (Elastic Load Balancing)
  • Dfn: Cloud Component built to balance traffic load across multiple EC2 Instances and multiple AZs (geographically separated locations within a Region) to improve Fault Tolerance in apps and keep the system available; it also Elastically Scales its own request-handling capacity automatically to cater for the app's traffic demands (built-in elasticity and redundancy)
  • Characteristics
    • Routes & Load Balances HTTP, HTTPS, and TCP traffic to EC2 Instances and AZs
    • Health Check of EC2 Instances being routed to and used (setup, configuration, and monitoring)
    • Demand Pattern Responsive by automatically and dynamically Scaling (Growing and Shrinking)
    • ELBs c/w a single constant CNAME (for DNS config, pointing to web app domains even during ELB Scaling). As traffic increases, each additional unique IP Address is inserted as an ELB DNS Entry in each AZ, and requests against the CNAME are resolved Round-Robin across the multiplying ELBs; when traffic decreases, IP Addresses are removed from the ELB DNS Entries (reducing the quantity of ELB Components in the System)
Cloudwatch Monitoring (free)
  • Dfn: Monitor Visibility of Resource Utilisation, Performance, Demand Patterns, present Metric Graphs (CPU Utilisation, Disk I/O, Network Traffic Load, Throughput, Latency), and take action by an Alert or triggering Processes to Handle issues (Add or Remove Resources) when Thresholds breached, indicative of Potential or Actual System Failure. Create a PUT API Request to define, setup, and Monitor Custom App-generated Metrics
  • Services Supported
    • EC2, ELB, EBS, RDS, SQS, SNS, DynamoDB, ElastiCache, EMR, Redshift, Route 53, OpsWorks, AutoScaling, Billing
EBS (Elastic Block Storage)
  • Dfn: Volume Storage for running Apps that store Data on EC2 Instances
  • Preparation: Design to ensure Retain Data in event of EC2 Instance Failure with consideration for Storage Options
  • Volume Storage Types
    • EBS { EBS Volume Storage Resources created separate from EC2 Instances that are attached to EC2 Instances for use (i.e. format with a file system, load with apps and run DBs). EBS's Persists after EC2 Instance terminated }. EBS Types include (select based on budget, performance requirements, and Use Case):
      • Standard Volumes
      • Provisioned IOPS EBS Volumes (Input / Output operations per Second) { allow users to Control and specify Consistent I/O Performance Parameters when creating multiple Volumes (up to 1 terabyte each) that may all be attached to EC2 Instances. Take point-in-time Snapshots of EBS Volumes (programmatically or through the API) that capture the present Data and persist them to S3 as incremental storage (only blocks changed since the last Snapshot are saved, and the user is only billed for changes). Use Snapshots to instantiate New Volumes (transferring the data stored in S3 to them) or Change Attributes (re-Size the Volume if supported, move Volumes across AZs, share EBS Volume Data (i.e. share with coworkers who create their own EBS Volumes based on the EBS Snapshots), and copy across Regions). To ensure a consistent and complete Snapshot while the Volume is in use, pause writes to the EBS Volume long enough to take the Snapshot, or unmount the EBS Volume from the EC2 Instance first. Snapshot info is sufficient to Restore the EBS Volume to the instant the Snapshot was taken, which occurs using a Lazy Loading Approach (any data the user attempts to access is prioritised in the transfer for immediate use) }
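The incremental "changed blocks only" behaviour of snapshots can be modelled simply (block contents are toy strings, not real volume data):

```python
# Toy model of incremental snapshots: only blocks that differ from the
# previous snapshot are stored, mirroring how EBS bills for changed blocks.
def incremental_snapshot(volume_blocks, last_snapshot):
    """Return {block_index: data} for blocks that changed since last_snapshot."""
    return {i: data for i, data in enumerate(volume_blocks)
            if last_snapshot.get(i) != data}

snap1 = incremental_snapshot(["a", "b", "c"], {})   # first snapshot: all blocks
volume = ["a", "B", "c"]                            # one block is modified
snap2 = incremental_snapshot(volume, snap1)         # only the change is stored
assert snap1 == {0: "a", 1: "b", 2: "c"}
assert snap2 == {1: "B"}
```

The first snapshot captures every block; every subsequent one stores only the delta, which is why repeated snapshots stay cheap.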
    • Ephemeral Volumes (Local Storage) (expires upon termination)
RDS (Relational Database Service)
  • Dfn: AWS Handles Boilerplate DBA. Simply Setup a DB Instance (using Web Console or API and specify Engine Type & Size to Performance Requirements), Operate and Scale a Full Featured Relational DB for High Availability and High Redundancy, and Recover (Safe and Quick) in the Cloud, with DBA (Database Administration) tasks automatically covered (Backups for Retention Periods and Point-In-Time Recovery, Security Patches), including when deploying to multiple AZs and when creating DB Instance replicas. Upon DB Instance Availability, CloudWatch monitors its health
  • Supported Engine Types { MySQL, PostgreSQL, SQL Server, Oracle }

Cloud Implementation

Bootstrapping Phase
  • Dfn: Implementation of an automated, self-sustaining startup process for the App running on an elastic EC2 Instance, so that when it comes online it self-determines from its Configuration the Actions required for the Production Environment
  • Example #1 Bootstrapping with Configuration and Deployment Tools { use Tools compatible with the Application Environment, i.e. Configuration Tools (Chef, Puppet) and Deployment / CI Tools (Travis CI), to, say: run Custom Scripts to set Configuration Settings, mount required Drives, start required Services, update Code Files, pull down more Scripts, pull the latest supporting Software Version and apply required Patches, and Register with the Load Balancer to receive traffic }. During Bootstrapping the Instance determines:
    • Role { App Server, DB Server ? }
    • Resources Required { Code Files, Configuration, Utility Scripts }
    • Registration as a System Component (the EC2 Instance)
  • Example #2 Access to EC2 Instance Metadata (i.e. SSH into an EC2 Instance that is based on an AMI, then Access the List of all EC2 Instance Metadata options from the same URL End-Point over HTTP using curl (i.e. curl http://169.254.169.254/latest/meta-data/ to list the options, or curl http://169.254.169.254/latest/meta-data/hostname for a specific value), or alternatively use GET commands or Custom Scripts that run on the EC2 Instances, to return { ami-id, ami-launch-index, ami-manifest-path, block-device-mapping, hostname, iam, instance-action, instance-id, instance-type, kernel-id, local-hostname, local-ipv4, mac, metrics, network, placement, profile, public-hostname, public-ipv4, public-keys, reservation-id, security-groups, services })
Opsworks (free)
  • Dfn: Application Management Service (Full Configuration Management) integrated into AWS to manage App Lifecycle (built upon Chef Framework)
Cloud-Init (Linux Servers) and EC2Config (Windows Servers)
  • Dfn: Allow users to specify and implement their Bootstrap Phase needs by automatically triggering any Custom Scripts and Shell Commands to run during startup Life Cycle of EC2 Instance
Auto-Scaling
  • Dfn: Service where Conditions are defined that determine when EC2 Instances should be Scaled Out or In, implementing Elasticity by Adding or Removing EC2 Instances to match Capacity with Load Demand
  • Example #1 Verify Load Patterns with CloudWatch to define Conditions for when EC2 Instances should Auto-Scale (i.e. given a web app architected and distributed across two AZs as part of design for failure with high fault tolerance, we may find two web servers work fine during the light load periods at night, but high traffic during the day may be of concern, so we verify the CPU level of each web server using CloudWatch over a couple of days to confirm the pattern, then use Auto-Scaling and define a Condition with a threshold where if average CPU utilisation rises above x% for y duration, it should Add z more EC2 Instances, with instance z1 in AZ1 and z2 in AZ2, to spread the traffic demand across the EC2 Instances and reduce the CPU utilisation burden on individual servers, keeping customers happy using only minimum resources and only paying for resources used)
  • Auto-Scaling Components (to achieve fault tolerance)
    • Launch Configuration { defines what to Scale, including EC2 Instance Size to use, AMI to use, Security Group, Data Storage required, etc }
    • Auto-Scaling Group { defines where (Locations) to launch EC2 Instances, and Limits on the Quantity of Instances (desired target Capacity) for the group to launch and maintain in response to scaling events }
    • Scaling-In /-Out Policy { defines how to launch EC2 Instances and leverage Elasticity (Scale Out to handle increased load, and Scale In to reduce costs and waste), where CloudWatch alerts defined based on breaches of thresholds given to Metrics }
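A sketch of how the Auto-Scaling Group's limits constrain whatever the policy requests (min/max values are illustrative):

```python
# Sketch: an Auto-Scaling Group clamps policy-driven changes to its
# configured minimum and maximum instance counts.
def apply_policy(current, delta, min_size=2, max_size=10):
    """Apply a scale-out (+delta) or scale-in (-delta) request, clamped."""
    return max(min_size, min(max_size, current + delta))

assert apply_policy(9, +3) == 10   # capped at the group's max size
assert apply_policy(3, -5) == 2    # never drops below the min size
assert apply_policy(4, +2) == 6    # normal scale-out within limits
```

The min size guarantees fault-tolerant baseline capacity; the max size guards against runaway cost from a misfiring alarm.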
Scalable Storage Solutions
  • Storage Options (Scalable)
    • Block Storage { EBS }
    • Object Storage { S3, Glacier, CloudFront }
      • Object Store Dfn: Objects Stored in 'Buckets' and retrieved by Keys (Strings) allows for Scalability and fast object retrieval (but limited features when compared to Traditional File Systems with directory hierarchy with centrally housed repo of file metadata and search features) (i.e. use for data that does not change much such as backups, archives, log files, videos, images, audio files, etc)
      • ECM (Eventual Consistency Model) - ECM-based Object Stores may be unsuitable for Data that changes frequently. 
        • Dfn: Data Consistency is only eventually achieved, as it takes time for changes to propagate to all replicas before the latest version is returned in response to requests on an Object (i.e. if an Object is deleted, the delete request takes time to propagate to all replicas, so the Object may still be returned for a brief period after the delete request has been made)
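The propagation delay can be modelled with a toy replicated store, where a write lands on one replica first and only reaches the others when replication runs:

```python
# Toy model of an eventually consistent object store: a read from a
# replica may briefly return stale data until replication catches up.
class EventuallyConsistentStore:
    def __init__(self, replicas=3):
        self.replicas = [{} for _ in range(replicas)]
        self.pending = []                  # changes not yet propagated

    def put(self, key, value):
        self.replicas[0][key] = value      # write hits one replica first
        self.pending.append((key, value))

    def get(self, key, replica=1):
        return self.replicas[replica].get(key)   # may be stale

    def propagate(self):                   # replication eventually runs
        for key, value in self.pending:
            for r in self.replicas:
                r[key] = value
        self.pending.clear()

store = EventuallyConsistentStore()
store.put("photo.jpg", "v1")
assert store.get("photo.jpg") is None      # stale read before propagation
store.propagate()
assert store.get("photo.jpg") == "v1"      # consistent eventually
```

This is why ECM-based stores suit data that rarely changes: the stale window matters little for backups or media files, but a lot for frequently updated records.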
    • Storage Gateway { Connect and Synchronise Local On-Premise Storage Volumes with Cloud-based AWS Storage Volumes }
    • Relational Databases { RDS }
    • NoSQL Databases { DynamoDB }
    • In-Memory Cache (Distributed) { ElastiCache (similar to 'memcached'), which may be used alongside RDS }
    • Content Cache { CloudFront }
  • S3 (Simple Storage Service)
    • Dfn: Object Store based on ECM. S3 is 99.999999999% Highly Durable (High Redundancy by replicating and Storing Objects on multiple devices across multiple AZs in an S3 Region for High Availability). Stores EBS Snapshots. Limit of 5 TB size of each Object. Unlimited Objects. Access Objects using REST or SOAP API Protocols
    • RRS (Reduced Redundancy Storage): Optional to reduce storage costs for easily reproducible objects in event of failure (i.e. generation of thumbnails or different image sizes or different video encodings from original source)
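The bucket-and-key addressing model described above is simple enough to sketch directly (bucket and key names are made up):

```python
# Toy object store: objects live in buckets and are addressed by string
# keys, with no directory hierarchy (unlike a traditional file system).
buckets = {}

def put_object(bucket, key, data):
    buckets.setdefault(bucket, {})[key] = data

def get_object(bucket, key):
    return buckets[bucket][key]

# Keys may contain '/' but it is just part of the string, not a directory:
put_object("my-backups", "2014/12/27/db.dump", b"backup bytes")
assert get_object("my-backups", "2014/12/27/db.dump") == b"backup bytes"
```

The flat key space is what makes object stores scale: lookup is a single key match, with no hierarchical metadata to traverse.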
  • Glacier (S3 Extension for Archiving)
    • Dfn: Object Store. 99.999999999% Highly Durable (same as S3). Used to Freeze Data. Extension of S3 for Infrequently Accessed Data where retrieval time of 3-5 hours is acceptable. Storage Costs are lower (pay per Archive and Restore request). Manage Objects through S3 (i.e. upload to S3, then transition S3 Objects to Glacier for Archiving)
  • CloudFront (CDN Service)
    • Dfn: Object Store. Content Delivery Network (CDN) Services for Assets (works seamlessly with S3) that uses a Globally Distributed Network of Edge Locations to minimise Latency in delivering Content to End-Users. Similar to other CDNs (i.e. Akamai or Windows Azure)
    • Example #1 Domain Name routes to High Performance Edge Locations for fast Content Delivery using CloudFront { i.e. store original versions of web application asset files on multiple Origin Servers (where the definitive version of Objects stored) either on AWS S3 bucket, EC2 Instance, or ELB, or Custom origin server, then create a New Distribution (with an associated Domain Name) to Register the Origin Servers with CloudFront using the AWS Management Console, then end-users request access to stored Objects using the Domain Name, which routes them to the nearest Edge Location for fast (high performance) Content Delivery, only paying for data transfer and requests used }
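The "nearest Edge Location" routing in the example reduces to picking the lowest-latency edge for each user (edge names and latency figures are invented for illustration):

```python
# Sketch: a CDN routes each user's request to the edge location with the
# lowest latency, rather than to the distant origin server.
def nearest_edge(latencies_ms):
    """Pick the edge location with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Latencies as measured from one hypothetical user in Europe:
user_latencies = {"us-east": 120, "eu-west": 15, "ap-south": 240}
assert nearest_edge(user_latencies) == "eu-west"
```

Real CDNs use DNS and network topology rather than live per-user measurements, but the selection principle is the same.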
  • Elastic Beanstalk (PaaS Fully Managed Self-Scaling Cloud Application Container with High Availability and Performance options)
    • Dfn: Platform-as-a-Service (PaaS) that allows uploading App to AWS and automatically handles Capacity Provisioning, Load Balancing, Auto-Scaling, CloudWatch and all necessary Resources to run it in the Cloud. It automatically pushes Server Logs to S3. Users Retain Full Control over created AWS Resources and decisions that are made that power their App (some competitors reduce programming and configuration required to launch an app, but at the cost of control) and only pay for Resources actually used to store and run app. 
    • Default Settings of Elastic Beanstalk
      • Single EC2 Micro Instance (App Server)
      • AMI configured dependent on the Development Environment (i.e. Amazon Linux or Windows Server) and containing software to act as a Web Server or App Server { i.e. deploying a PHP App, the AMI contains Linux, Apache, MySQL, and PHP (LAMP Stack) }
      • ELB is run to allow for Scaling Up and Down and distributes incoming traffic across multiple EC2 Instances depending on load, automatically routing only to CloudWatch determined healthy and reliable instances
      • EC2 configured for Auto-Scaling and High Availability (auto-adjusting quantity of EC2 Instances to peaks and troughs in traffic workload, setting minimal EC2 Instances, and setting to use multiple AZs)
      • Custom URL for opening running App in web browser 
    • Example #1: Deployment with Elastic Beanstalk Workflow { create the app in an editor or IDE, create an Elastic Beanstalk PaaS App Container (in the desired Language Environment) using the AWS Management Console or API calls, await Elastic Beanstalk handling the provisioning of ELB and EC2 Resources, meanwhile install and configure Git, then commit and push changes so Elastic Beanstalk uses Git to deploy the App files in the Elastic Beanstalk PaaS App Container to one or more EC2 Instances, then after a brief period access the App at the Custom URL }
    • Supported Containers { Ruby, Node, Java, PHP, Python, .NET }
  • CloudFormation (Manage and Version Control AWS Resource Collections and Define Entire App Stacks for AWS)
    • Dfn: Manage a Collection of Related AWS Resources and Solutions that Architect the System. Define Entire App Stacks as either a Single or a Set of text-based descriptive Template Files (JSON format describing the specification) that integrate and condense the Entire App Stack so it may easily be rebuilt with all associated Dependencies for deployment using the AWS Management Console or API { i.e. rebuild the Entire App Stack for a Development Environment or a QA-Testing Env, quickly move to a new AWS Region, or share it with others }. Git Version Control may be used to manage different versions of the Full Cloud Environment { i.e. define an App Stack in a text-based Template File which uses two AZs and two web server tiers (one in each AZ), each load balanced by a common ELB, where each web server has been deployed on Elastic Beanstalk (in the corresponding AZ), using ElastiCache (in-memory distributed cache storage service) in multi-AZ mode, and where the data tier has Clusters of read-replicas and Master-Slave configurations across the multiple AZs for redundancy }
    • CloudFormation Template File Example:
  "Description"       : "Create EC2 Instance",
  // "Parameters" Object 
  "Parameters"        : {
    "EC2KeyPair"      : {
      "Description"   : "EC2 Key Pair for SSH Access",
      "Type"          : "String"
  // Consider inserting allowed values list to restrict user inputs
  // "Resources" Property defines Components & Services to Create
  "Resources"         : { 
    // Resource Name
    "Ec2Instance      : { 
      // "Type" Property defines the type of AWS Resource
      "Type"          : "AWS::EC2::Instance",
      // "Properties" Properties that are required to launch the new Instance of a specific Resource being defined (is dependent on the Type of Resource)
      "Properties"    : {
        "KeyName"     : "my-key-pair", // Access Key { must be defined to create Resources but should not be hard-coded like this when shared, instead leverage the "Parameters" Object of CloudFormation Template Files to reference declared parameters as placeholder values for each Property (populated with actual specific values passed in from each user when they create a New Resource) }
        "ImageId"     : "ami-0123abcd", // AMI Identifier
        "InstanceType : "m1.medium" // Instance Size
  • CloudFormer Tool
    • Dfn: Tool for creating a CloudFormation Template from an existing App Stack, handling the syntax and structural complexities of stacks with multiple Components
    • Setup Guide: 
      • Create App Stack and configure and launch Resources using AWS Management Console or API
      • Create CloudFormer Stack and launch it (which is itself a CloudFormation Stack)
      • Run the CloudFormer Tool on a single EC2 Micro-Instance (after launching the stack the tool needs)
      • Create CloudFormation Template File from existing App Stack using the CloudFormer Tool
      • Shutdown the CloudFormer Stack (payment savings)
      • Launch Newly Replicated App Stack in a New Environment with the CloudFormation Template File, setup Git Version Control, and share with others if desired
Loosely Coupled (Decoupling) System Components
  • Dfn: Reduce Tight Dependencies to improve System Scalability, so that failure of a Single Component in the System does not disturb other Components
  • SQS (Simple Queue Service)
    • Dfn: Implements Message Queuing Software in the Cloud (Reliable, Scalable, Distributed System) that passes Messages to communicate between the Computer System (Notification Center) and Independent App Components, to Bind them and maintain Loose Coupling. The Command Design Pattern may be used to handle messages that invoke specific workload tasks. SQS requires the App to constantly Poll for New Messages (referred to as a Pull Approach)
    • Example #1: Tightly Coupled System where each Component controller calls a method in the next Component directly and Synchronously such that a failure in any will cascade to others since they call and wait upon each other, and may prevent processing of further System inputs into their black box. 
                    A calls B                   
Controller A       ►      Controller B      
    • Example #1 (Cont'd): Loosely Coupled System is what the System should be re-Architected to be, using SQS to manage queues, so instead of direct communication between controllers, the Components use Queue Buffers intermediaries to achieve Asynchronous Communication (improving overall System and supporting Concurrency, remaining Highly Available, and improving handling of Load Spikes), so other Components continue functioning and use a Queue Buffer upon failure of a different Component, and when the broken Component returns online again then Concurrent / Parallel Processing by multiple Controller Components occurs to cater for a build-up of messages
Controller B Fails: Queue C drains while Queue B buffers the build-up of messages from Controller A (still working) until Controller B recovers, then multiple Controller B Components may run in Parallel to process the waiting backlog

Controller A   ►   Queue B   ►   Controller B   ►   Queue C   ►   ...
                                 Controller B (parallel worker)

    • Example #2: Watermarking App Use Case using Message Queues { given an app Workflow that adds a watermark to an image with workflow including uploading image to server, downsizing image to a smaller image, watermark added to smaller image, user notified watermarked image is ready for use via email. Re-Architect for AWS using SQS (so if anything breaks in watermarking routine, the resizing and notifications routines continue working) by Independently running the following, and leveraging Message Queues to achieve necessary Communication between each (Resizer Process, Watermarking Process, Notification Process). The New Workflow involves the following Decoupling Approach, where efficiencies gained include that any individual Component failure does not stop other Components, if any bottlenecks occur then Additional Individual Components may be added to run Concurrently in Parallel to improve performance, and different team members may separately work on each Component to improve process and development throughput:
      1. Web Server Process Component where user uploads image to server that handles the upload, and server sends message notifying the system a new image was uploaded causing the process to commence which includes
      2. Resizing Process Components of the image being notified to work, and when complete another message notifying the system the image is ready for watermarking is sent, so
      3. Watermarking Process Components start work and add the watermark and when finished and then send a message notifying the system of its completion and to perform the
      4. Notification Process Component to inform the user. 
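The decoupled hand-off above can be sketched with the AWS CLI. A minimal sketch, assuming configured AWS credentials; the queue name, image key, and message format are hypothetical:

```bash
#!/bin/bash
# Sketch of the decoupled watermark hand-off via SQS (AWS CLI).
# Queue name, image key, and message shape are hypothetical choices.
set -euo pipefail

# Build the message body announcing work for the next Component.
make_task_message() {   # make_task_message <image-key> <next-step>
  printf '{"image":"%s","next":"%s"}' "$1" "$2"
}

run_demo() {
  local url receipt
  url=$(aws sqs create-queue --queue-name watermark-queue \
          --query QueueUrl --output text)

  # Resizer finishes and announces the image is ready for watermarking.
  aws sqs send-message --queue-url "$url" \
      --message-body "$(make_task_message photo-123.jpg watermark)"

  # Watermarking worker Polls for New Messages (the Pull Approach),
  # processes the task, then deletes the message from the queue.
  receipt=$(aws sqs receive-message --queue-url "$url" \
              --wait-time-seconds 10 \
              --query 'Messages[0].ReceiptHandle' --output text)
  aws sqs delete-message --queue-url "$url" --receipt-handle "$receipt"
}

# Only touch AWS when explicitly asked.
if [ "${RUN_AWS:-0}" = "1" ]; then run_demo; fi
```

If the watermarking worker is down, messages simply accumulate in the queue until it recovers, which is the buffering behaviour described above.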
  • SWF (Simple Workflow Service)
    • Dfn: 
      • Decouples System Components whilst helping in Definition, Management, and Coordination of Complex Business Workflow tasks
      • SWF coordinates Task Management workflow communication between different SWF Tasks (that may be implemented in different programming languages using different tools) that are assigned to and performed by Activity Workers, and where a Decider schedules their workloads
      • SWF handles the overall State of the System
      • SWF uses user predefined Configuration of Workflow Decision Making by using Deciders, that are Event-Driven and assigned to SWF, that respond to the results of tasks completed by Activity Workers associated with these Events, and then Decides how to proceed based on Business Rules 
      • SWF provides the Boilerplate and Complex code for Dependency Management of Activities defined in a Business Process, allowing users to focus on the specifics of their Business Rules. 
      • SWF supports Distributed Processes allowing different tasks performed by Activity Workers to be Decoupled and processed at different locations
    • Example #1: Ecommerce Order Processing Workflow using SWF { a sequential workflow (Sequence Important): given a customer placing an order, it ensures the order is valid (Order Verifiers) before charging the customer, it then processes the payment (Credit Card Processors), and if successful, it ships the order (Warehouse Employees), and after shipping it saves the order details (Database Recorders) }
    • Example #2: Watermarker with SWF
      • SWF provides another option to implement the workflow previously defined using SQS, whereby a single Decider is used with 3 Activity Tasks (Resize Photo, Watermark Photo, and Send Notification). 
      • Workflow Starter 
        • The Lifecycle of the Workflow using SWF involves user uploading a photo, triggering workflow execution to start
      • SWF Service 
        • Receives workflow execution notification and Schedules the first defined Decision Task
      • Decider
        • The Decider sets itself up to Poll for Decision Tasks from SWF before it receives the Decision Task
      • Cycle #1 
        • The Polling Decider then Decides to Schedule the Resize Photo Activity Task with an Activity Worker (with any info it needs) and returns its Decision to SWF, which actually Schedules the Resize Photo Activity Task, then a capable Activity Worker that is Polling the Resize Photo Activity Task assigns itself the Task from the Schedule and resizes the photo, subsequently returning the Results to SWF when complete
        • Once complete the Results are returned to SWF, which Logs them in Workflow History, and Schedules the next Decision Task. 
      • Cycle #2
        • The Polling Decider receives the next Decision Task from SWF, reviews the Workflow History, and Decides to Schedule the Watermark Photo Activity Task with an Activity Worker (with any info it needs) and returns its Decision to SWF, which actually Schedules the Watermark Photo Activity Task, then a capable Activity Worker that is Polling the Watermark Photo Activity Task assigns itself the Task from the Schedule and creates the Watermark, subsequently returning the Results to SWF when complete. 
        • Once complete the Results are returned to SWF, which Logs them in Workflow History, and Schedules the next Decision Task
      • Cycle #3
        • The Polling Decider receives the next Decision Task from SWF, reviews the Workflow History, and Decides to Schedule the Send Notification Activity Task with an Activity Worker (with any info it needs) and returns its Decision to SWF, which actually Schedules the Send Notification Activity Task, then a capable Activity Worker that is Polling for the Send Notification Activity Task assigns itself the Task from the Schedule and performs the Notification, subsequently returning the Results to SWF when complete. 
        • Once complete the Results are returned to SWF, which Logs them in Workflow History, and Schedules the next Decision Task
      • Workflow Shutdown
        • The Polling Decider observes from SWF that the Final Task has completed, Decides to close the workflow execution, and returns its Decision to SWF, which performs workflow execution closedown, Archiving the Workflow History
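The Decider's poll / decide / respond cycle above can be sketched with the AWS CLI. A minimal sketch, assuming a registered SWF domain; the domain, task-list, and activity names are hypothetical:

```bash
#!/bin/bash
# Sketch of one Decider cycle via the SWF CLI.
# Domain "watermarker", task list "main-decisions", and activity
# "ResizePhoto" are hypothetical names for this example.
set -euo pipefail

# Decision payload telling SWF to Schedule the next Activity Task.
schedule_decision() {   # schedule_decision <activity-name> <activity-id>
  printf '[{"decisionType":"ScheduleActivityTask","scheduleActivityTaskDecisionAttributes":{"activityType":{"name":"%s","version":"1"},"activityId":"%s"}}]' "$1" "$2"
}

poll_once() {
  local token
  # The Decider Polls for Decision Tasks from SWF.
  token=$(aws swf poll-for-decision-task --domain watermarker \
            --task-list name=main-decisions \
            --query taskToken --output text)
  # The Decider reviews Workflow History, then returns its Decision to SWF.
  aws swf respond-decision-task-completed --task-token "$token" \
      --decisions "$(schedule_decision ResizePhoto resize-1)"
}

if [ "${RUN_AWS:-0}" = "1" ]; then poll_once; fi
```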
  • SNS (Simple Notification Service)
    • Dfn: Message Notifications may be sent from within an App or direct from the AWS Management Console, and are Published to a Centralised Topic which delivers them over various Protocols (the Publisher Subscriber Design Pattern). Subscribers to the Topic receive the Message Notification. Use Case: a Single Message sent to Many Subscribers.
    • Supported Protocols & Endpoints
      • HTTP (URL http)
      • HTTPS (URL https)
      • Email (Email Addr via SMTP)
      • Email-JSON (Email Addr delivered JSON-encoded via SMTP)
      • SMS (Phone No)
      • SQS (SQS Queue receives JSON-encoded messages)
      • Application (App endpoint receives JSON-encoded messages)
    • Push-Notification System 
      • The Application Protocol with Application Endpoint allows Developers to set-up a Push-Notification System with SNS to allow App Components to communicate and keep Components Independent and Decoupled, since Posting Message Notifications to an SNS Topic causes it to be immediately sent to all Subscribers. Note: This differs from SQS's Pull Approach
    • Message Queues as Endpoints
      • Message Queues themselves can be Endpoints such that SNS (Push Approach) may be used to trigger Message Notifications to multiple SQS Queues (that use a Pull Approach) to help further Decouple Components and build out Concurrency with Parallel Workers
    • Example #1: Notify DevOps of Production System Issues { use CloudWatch to monitor System Components, set an Event-Driven Alert when a CPU utilisation exceeds a threshold, Configure the Result of the Alarm to Publish the Message Notification to an SNS Topic, set up Subscribers of the Topic and choose the Protocol to use, either use a Simple String Message Notification or use a JSON structure for the Message Notification, when the Alert is triggered the Message Notification is sent via a POST Request to multiple Subscriber sources (i.e. email a team, SMS person on-call)
      • Sub-Example: String Message Notification Request 
      • Sub-Example: JSON Message Notification 
"default" : "my_message",
"email"   : "email_message",
"email-json" : "email_json_message",
"http" : "http_message",
"http" : "http_message",
"https" : "https_message",
"sqs" : "sqs_message"

      • Sub-Example: Full POST Request String Message Notification 
POST http://sns.<AWS_Region_AZ>.amazonaws.com/ HTTP/1.1

    • Example #2: Watermarker Revisited with SNS { SNS used to spread out the Workload in Parallel across Decoupled Components for Efficient and Scalable Systems, using the same as previous requirement for SQS and SWF but with additional requirement during Resizing Photo Activity Task to create a new photo in multiple formats (i.e. JPG & PNG). 
      • Workflow Lifecycle Image uploaded causes 
        • Topic Process to Start
        • Subscription by 2 SQS Queues to the Topic
          • PNG Queue #1
          • JPG Queue #2
        • The same Message Notification is sent into both Queue #1 & #2 and triggers Separate Processes with Activity Workers in each to Start resizing to PNG and JPG respectively, and upon completion, a Notification is sent to the Watermarking Queue and the Process Continues
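The fan-out above can be sketched with the AWS CLI: one Topic, two Queue Subscribers, one publish. A minimal sketch with hypothetical topic/queue names; note the SQS queues also need a queue policy permitting SNS to send to them (omitted here for brevity):

```bash
#!/bin/bash
# Sketch of SNS (Push) fanning out into two SQS Queues (Pull).
# Names and the message body are hypothetical.
set -euo pipefail

queue_arn() {   # queue_arn <queue-url>
  aws sqs get-queue-attributes --queue-url "$1" \
      --attribute-names QueueArn \
      --query Attributes.QueueArn --output text
}

fan_out() {
  local topic png jpg
  topic=$(aws sns create-topic --name image-uploaded \
            --query TopicArn --output text)
  png=$(aws sqs create-queue --queue-name resize-png --query QueueUrl --output text)
  jpg=$(aws sqs create-queue --queue-name resize-jpg --query QueueUrl --output text)

  # Both Queues Subscribe to the same Topic (push into pull).
  aws sns subscribe --topic-arn "$topic" --protocol sqs \
      --notification-endpoint "$(queue_arn "$png")"
  aws sns subscribe --topic-arn "$topic" --protocol sqs \
      --notification-endpoint "$(queue_arn "$jpg")"

  # One publish; separate resize Workers drain each Queue in Parallel.
  aws sns publish --topic-arn "$topic" --message '{"image":"photo-123"}'
}

if [ "${RUN_AWS:-0}" = "1" ]; then fan_out; fi
```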
  • Scalable NoSQL Data Store (DynamoDB)
    • Dfn: Decouple Components using a NoSQL, non-Relational, Schemaless DB Service built for Low-Latency, High Performance, High Throughput (Scaling Automatically to handle Throughput requirements), using SSDs (Solid-State Drives) for Data Storage (instead of Traditional Hard Drives) for fast Data Access, with Data Auto-Replicated across multiple AZs (built-in Durability)
    • Stateless Systems 
      • Dfn: DynamoDB is used to store Session State outside of the Application, to help keep Applications as Stateless as possible
    • Example #1: User Session Management 
      • Problem
        • Storing multiple App Session States directly on a Web Server Tier presents a Single Point of Failure (SPF) 
        • Attempting to eliminate the SPF by changing the System Architecture to Add Additional Web Servers (EC2 Instances) and a Load Balancer (ELB)
        • But since Session State is being Locally Stored, and the ELB is balancing Round-Robin, the logged-in user will eventually visit one of the Web Server EC2 Instances that does not have their valid Session and they will be logged-out. This may be fixed by setting the ELB to remember users and always send them back to the same Web Server EC2 Instance (Sticky Session Management on the ELB), but we again have an SPF, because if that EC2 Instance fails the user is redirected to another EC2 Instance (the same problem as the original approach)
      • Solution
        • Store Application Session State outside of the Web Server (EC2 Instances) in a common Data Store in a Shared Location so all System Components have Access. DynamoDB is optimised for Session Management interactions (more than a Traditional RDBMS) and maintains Decoupling of Components
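The external session store can be sketched with the DynamoDB CLI. A minimal sketch; the table name, key schema, and item attributes are hypothetical choices for this example:

```bash
#!/bin/bash
# Sketch of an external Session State store in DynamoDB so the web tier
# stays Stateless. Table/attribute names are hypothetical.
set -euo pipefail

# A session item keyed by session_id; any Web Server can read or write it.
session_item() {   # session_item <session-id> <user-id>
  printf '{"session_id":{"S":"%s"},"user_id":{"S":"%s"}}' "$1" "$2"
}

setup_and_use() {
  aws dynamodb create-table --table-name sessions \
      --attribute-definitions AttributeName=session_id,AttributeType=S \
      --key-schema AttributeName=session_id,KeyType=HASH \
      --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

  # One instance writes the session...
  aws dynamodb put-item --table-name sessions \
      --item "$(session_item abc123 user-42)"

  # ...and any other instance behind the ELB retrieves the same session.
  aws dynamodb get-item --table-name sessions \
      --key '{"session_id":{"S":"abc123"}}'
}

if [ "${RUN_AWS:-0}" = "1" ]; then setup_and_use; fi
```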
  • Cloud Information Security 
    • SSM (Shared Security Model)
      • Dfn: Priority is to keep mission-critical information safe from accidental or deliberate theft, leakage, integrity compromise, and deletion, at every Layer of an App to minimise Risk exposure. Understand the AWS SSM to appropriately Architect Systems. 
      • SSM is shared security responsibility between AWS Services Consumer, and the Provider (AWS) toward Security Objectives. Data Security Responsibilities (see Cloud Concepts Section for Overview) are shared
    • Categories of SSM (SSM Varies by AWS Service) 
      • Infrastructure Services (IaaS) Offerings (EC2, EBS, Auto-Scaling, VPC, Direct Connect) that allow Architecting and building Cloud Infrastructure (using technologies similar to Traditional Rack Server Hosting Solutions)
        • Cloud Provider (AWS) - 
          • Foundation Services (Compute, Network, Storage, DB)
          • Global Infrastructure (Regions, AZs, Edge Locations, Facilities, Virtualisation Infrastructure, Network Infrastructure, Hardware Infrastructure)
        • Cloud Consumer - Securing Data (at rest), AMIs, Applications, Supporting Software Stack, Operating System, Network & Firewall Config, Client & Server-Side Data Encryption, Network Traffic Protection, Account Credentials & Management, IAM User Management
      • Container Services (PaaS) Offerings (RDS, CloudFormation, Elastic Beanstalk, EMR, OpsWorks) are usually AWS Managed Services running Application Containers on EC2 Instances without the OS or Platform Layer being managed
        • Cloud Provider (AWS) - Application, Platform & Software Stack, Operating System, Network & Firewall Config, 
          • Foundation Services (as above)
          • Global Infrastructure (as above)
        • Cloud Consumer - Securing Data (at rest and in transit), Client & Server-Side Data Encryption, Network Traffic Protection, Account Credentials & Management, IAM User Management
      • Abstracted Services (SaaS) Offerings (S3, SQS, DynamoDB, SNS, SES, CloudSearch) that Abstract the Platform or Management Layer where Cloud Apps are built and operated, and where Endpoints of these Abstracted Services are Accessed using APIs, whereby AWS manages the underlying Components or Operating System where they reside
        • Cloud Provider (AWS) - Server-Side Data Encryption, Network Traffic Protection, OS, Network, Firewall, Application, and Platform Config
          • Foundation Services (as above)
          • Global Infrastructure (as above)
        • Cloud Consumer - Data (at rest and in transit), Client-Side Data Encryption, Account Credentials & Management, IAM User Management
    • IAM (Identity and Access Management)
      • Dfn: Control Access & Permissions to AWS Resources & Services. Manage Access Control Entities (Users, Groups, Roles, Permissions) and apply to Services and API Calls. Restrict User Access to Control Services, Perform Actions, and Visible Resources within Services 
      • Master Account
        • Dfn: Master User (Root User) with Full Access. Creates Users (no privileges by Default Least Privilege Principle) and specify Permissions to fulfill their specific tasks
        • Credential Types { Console Login, Access Key / Secret Key for API }
        • Groups { Groups have Privileges with Collections of Users }
        • Roles { Roles have Permissions allowing Users and Services to act on behalf. Prevent Long-Term Credential Sharing and Embedded in Applications. Simplifies Application Access Permissions to use EC2 }
        • Application Access { Create a Role, Launch the EC2 Instance in the Role; temporary Access Keys are made available via Instance Metadata, which Applications on the Instance can read to retrieve the required Access }
      • Access Control Structures (Resources and Services)
        • Dfn: Setup using IAM using Multi-Factor Authentication (recommended) using credentials that differ from the Master Account
    • Security Groups
      • Dfn: Virtual Firewalls in the Cloud to Control Inbound Traffic (i.e. EC2, RDS, VPC, Elasticache) to Resources with Inbound Rules { Traffic Type (Primary open Traffic Protocol i.e. SSH, RDP, HTTP), Underlying Protocol (i.e. TCP), Port Range, Source (IP Address specified using Classless Inter-Domain Routing (CIDR) Notation, or another Security Group) } and determine who may Communicate with Servers belonging to the Group. Flexibility since Traffic Rules for Business Security Management only need to be set-up once and applied to Multiple Resources, or Resources may be applied to Multiple Groups. Security Groups within Virtual Private Cloud Networks (VPCs) Control Outbound Traffic too. A Default Security Group allowing "no access" is applied if no Security Group is specified.
      • CIDR (Classless Inter-Domain Routing) Notation
        • Dfn: Specific IP Address and Routing Prefix appended with a Bit count that determines Matching Restrictiveness
        • Example:
          • (no rule) - No Access
          • <IP>/32 - Single IP Address Access Only 
          • <IP>/24 - Octets 1-3 Must Match
          • <IP>/16 - Octets 1-2 Must Match
          • <IP>/8 - Octet 1 Must Match
          • 0.0.0.0/0 - Open Access
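The prefix-length matching above can be sketched in pure Bash to show how the /N suffix decides whether a source IP matches; the IP addresses here are illustrative:

```bash
#!/bin/bash
# Pure-shell sketch of CIDR matching: keep the first N bits of both the
# source IP and the rule's network address and compare them.
set -euo pipefail

ip_to_int() {   # dotted quad -> 32-bit integer
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

cidr_match() {   # cidr_match <ip> <cidr>  e.g. cidr_match 10.0.1.9 10.0.0.0/16
  local ip=$1 net=${2%/*} bits=${2#*/} mask
  # /0 keeps no bits (Open Access); /32 keeps all 32 (Single IP Address).
  if [ "$bits" -eq 0 ]; then
    mask=0
  else
    mask=$(( 0xFFFFFFFF << (32 - bits) & 0xFFFFFFFF ))
  fi
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

cidr_match 10.0.1.9 10.0.0.0/16 && echo "matches /16"    # Octets 1-2 match
cidr_match 10.1.0.1 10.0.0.0/16 || echo "outside /16"    # Octet 2 differs
```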
      • Example #1: Security Group to achieve Security Requirements
        • Situation - Given Web App with Layers 
          • Web Server (Port 80)
          • App Server (Port 6081)
          • DB Layer (Port 3306)
        • Security Requirements
          • Limit Inbound Traffic as much as possible to each Resource
          • Allow Admin from specific IP Address external Access to Resources for maintenance purposes
        • Solution
          • SSH Protocol Port 22 - Single IP Address Access Only (only via a Bastion Server, a Hardened Server used solely to Restrict Access to the Web, App, and DB Server Resources)
          • HTTP Port 80 - Open Access (only to Web Server)
          • All other traffic denied and ports blocked by outer Firewall and blocked by Security Group 
          • Admin must SSH to the Bastion Server, and from the Bastion Server SSH to Resources
          • Each Resource may Communicate together but only via Port 6081 on App Server, and Port 3306 on DB Server
          • Security Group to Achieve Inbound Traffic Rules 
Layer   |   Type / Port            |  Source
Bastion |   SSH / 22               |  <Admin_IP>/32
Web     |   HTTP / 80, SSH / 22    |  Anywhere (80); Bastion SG (22)
App     |   TCP / 6081, SSH / 22   |  Web SG (6081); Bastion SG (22)
DB      |   TCP / 3306, SSH / 22   |  App SG (3306); Bastion SG (22)
    • VPC (Virtual Private Cloud)
      • Dfn: Offers More Security Options than the EC2 Classic Platform (Less Secure), so use it instead. EC2-VPC Instances offer:
        • Run in a Virtual Private Cloud (VPC) (logically isolated to only one AWS Account) where EC2 Nodes in VPC are not automatically addressable via Public Internet
        • Users Control both Public and Private Network Interfaces. 
        • Users may specify Custom Private IP Range and Architect the range into a combination of Public / Private Subnets
        • Users are able to Control both Inbound and Outbound Traffic
        • Users may assign Multiple IP Addresses, Elastic IP Addresses (EIPs), and Elastic Network Interfaces (ENIs) to EC2-VPC Instances
        • Users may Securely (with Encrypted VPN connection) connect VPC to on-site Systems (non-AWS)
      • Traditional EC2 Classic Platform (Less Secure & Deprecated)
        • Dfn: Original Release of EC2 before VPC introduced. EC2 Instances were run on a Single Flat Shared Network with other AWS Customers. All EC2 Instances were associated with Public IP Address (plus Private one) whether Users liked it or not, including for EC2 DB Instances, and there was no distinction between Public & Private Network Interfaces so disabling traffic on public host name was not an option 
      • Example #1: VPC Setup
        • VPC Region selection (AZ A, B, C)
        • VPC Block of Internal IP Addresses (consider needs and ensure a sufficiently large range to support growth)
        • VPC Subnets created, each confined within an individual AZ (i.e. Subnet 1, Subnet 2, Subnet 3). Note that each Subnet has a Block of IP Addresses defined using CIDR Notation that is a Subset of the Block defined for the entire VPC
        • Gateway Interfaces Attached to allow Access to Network (i.e. between Subnet 2 and Public Internet, causing it to become a Public Subnet)
        • VPN Gateway to allow Subnets to Route to Corporate IT Data Center over secure Encrypted VPN Connection
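The VPC set-up above can be sketched with the AWS CLI. A minimal sketch; the CIDR blocks and AZ are illustrative, and the route-table entry that finishes making the subnet public is omitted:

```bash
#!/bin/bash
# Sketch of Example #1 via the EC2 CLI: VPC, one subnet, and an attached
# Internet Gateway. CIDR blocks and the AZ are illustrative values.
set -euo pipefail

build_vpc() {
  local vpc subnet igw
  vpc=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
          --query Vpc.VpcId --output text)

  # One Subnet per AZ, each a Subset of the VPC Block.
  subnet=$(aws ec2 create-subnet --vpc-id "$vpc" \
             --cidr-block 10.0.2.0/24 --availability-zone us-east-1b \
             --query Subnet.SubnetId --output text)

  # Attaching an Internet Gateway is what lets Subnet 2 become a Public
  # Subnet (a route table entry to the IGW is still needed, not shown).
  igw=$(aws ec2 create-internet-gateway \
          --query InternetGateway.InternetGatewayId --output text)
  aws ec2 attach-internet-gateway --internet-gateway-id "$igw" --vpc-id "$vpc"
}

if [ "${RUN_AWS:-0}" = "1" ]; then build_vpc; fi
```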

Cloud Practice

Problem Definition
  • System & Web Application Architecture Overview

     Region (US East)
        AZ a   WS (ASG)
             /          \
  PI ___  ELB            \
             \            \
        AZ b   WS (ASG)___RDS

    • Region - Launch in US East
    • System Components - 
      • 1x ELB (Common) 
      • 2x Web Servers (WS) (Tier) 
        • Separate AZs
        • Auto-Scaling Group (ASG) (always min. 2x WS running, 1x in AZ a, 1x in AZ b) (High Availability, Auto-Elasticity leveraged)
      • 1x RDS (DB Instance)
    • Traffic Specification & Security Requirements
      • Access from Public Internet (PI) on Port 80 to ELB
      • Restrict Access to Web Server Tiers on Port 80 to only ELB and Admin Port 22 
      • Restrict Access to RDS MySQL DB to Web Server Tiers
  • API Connection using AWS SDK (allow Interfacing and building Components within App and Scripts)
Solution Steps
  • Step 1: Sign Up 
    • Multi-Factor Authentication (MFA)
  • Step 2: AWS Management Console (Dashboard)
    • IAM 
      • Master Account (already set-up, only use for Payment Management)
      • New User (Account) for each User Profile instead of accessing Account and all Resources as Account Owner (with Access to Billing)
      • New User (Console & Services) set-up with Permissions & Privileges to Access specific Resources
    • IAM > IAM Resources > Users > Create New User
    • Click User > Permissions > Attach User Policy > Set Permissions
    • Policy Document Settings (JSON Format)
      • Explicitly Define the Effect when User requests Access (i.e. Allow or Deny)
      • Explicitly Define Allowable Actions for each AWS Service (All Denied by Default) (i.e. action to use S3 List Bucket Action to return info about bucket items)
      • Explicitly Define Allowable Resources the Actions may be taken upon (i.e. specific S3 Buckets we allow User to perform List Bucket Action upon)
    • Policy Example:
{ // Template Policy Document created using JSON (the // annotations are for these notes only; real JSON does not allow comments)
  "Version"    : "2012-.....",
  "Statement"  : [ // Policy contains Statement(s) that each describe a Set of Permissions
    {
      "Effect"   : "Allow",
      "Action"   : "*", // Allow all Actions on all Resources
      "Resource" : "*"
    }
  ]
}
    • Click User > Security Credentials > Sign-In Credentials > Manage Password
    • IAM User Sign-In URL 
      • Dashboard > IAM User Sign-In URL > Create Account Alias (easier URL to remember)
    • Create Key Pair 
      • EC2 uses Public Key Cryptography to Encrypt / Decrypt Login Information (i.e. Password). The Recipient uses the Private Key to Decrypt. Together, the Public & Private Keys are known as a Key Pair. 
      • Create Key Pair before Login to EC2 Instance (since a Key Pair is assigned to the EC2 Instance and Controls SSH Access to it) 
      • Specify Key Pair to Launch Instance
      • Provide Private Key when Connect to Instance
      • Specify Key Pair to Login using SSH to Unix Instance
      • Specify Key Pair to obtain Admin Password and Login using RDP to Windows Instances
      • Dashboard > EC2 > Key Pairs > Create Key Pair (Specify a convention that indicates when created i.e. dev-2014)
        • Downloads the Private Key .pem File (of Public / Private Key Combination) that we need to use to Access any Server Launched with the Key Pair
        • Set Permissions on .pem File so can Login to EC2 Instances via SSH (prevents SSH Client complaining)
        • Open Terminal and use Change Mode to change file to Read-Only for the Owner 
chmod 0400 ~/Downloads/dev-2014.pem   # change the Private Key file to Read-Only for the Owner
ls -al ~/Downloads/dev-2014.pem       # check Read-Only is set

    • Create Security Groups { Control Inbound Traffic Access, Firewall Security Rules }
      • Determine Security Group Requirements
        • Allow Public Traffic into ELB
        • Restrict to only allow Traffic from ELB to Web Server Instances
        • Restrict to only allow Traffic from Web Server Instances to RDS
      • Specification of Security Groups Requirements (to handle traffic needs)
        • ELB Security Group { Allow public HTTP Traffic on Port 80 }
        • Web Server Tier Security Group { Allow HTTP Traffic on Port 80 only from the ELB }
        • RDS Security Group { Allow MySQL/TCP Traffic on Port 3306 only from Web Server Instances }
      • Create Security Groups Resource
        • Dashboard > EC2 > Security Groups > Create Security Group (Default Security Groups with No Permissions exist if none explicitly chosen yet)
        • Default Security Group (Retained)
        • ELB Security Group
          1. Name: load-balancer
          2. Description: allow http 80 in from public
          3. VPC: <insert_vpc_to_launch_in> (use Default VPC if Custom VPC not yet created)
          4. Dashboard > EC2 > Security Groups > Create Security Group > Security Group Rules > Inbound > Add Rule
        • Web Server Tier Security Group
          1. Note: Grab the "Group ID" of the ELB Security Group, as we will restrict traffic on HTTP Port 80 to only be from the ELB, by applying the "Group ID" to the "Source"
          2. Name: web-tier
          3. Description: allow http 80 in from elb
          4. VPC: <insert_vpc_to_launch_in> (use Default VPC if Custom VPC not yet created)
          5. Dashboard > EC2 > Security Groups > Create Security Group > Security Group Rules > Inbound > Add Rule
            • Source > Custom IP > <insert_ELB_Security_Group_ID>
            • Note: After saving, grab this Web Server Tier Group's own ID, since the RDS Security Group will use it as the Source for its MySQL Port 3306 Rule
ELB Security Group
Type    Protocol  Port Range   Source
HTTP    TCP       80           Anywhere

Web-Server Tier Security Group
Type    Protocol  Port Range   Source
HTTP    TCP       80           Custom IP <insert_ELB_Security_Group_ID>

RDS Security Group
Type    Protocol  Port Range   Source
MYSQL   TCP       3306         Custom IP <insert_Web_Server_Tier_Security_Group_ID> 
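The same groups and rules can be created from the CLI, where the returned Group IDs fill the <insert_..._ID> placeholders. A minimal sketch; the VPC ID is a hypothetical placeholder:

```bash
#!/bin/bash
# Sketch: create the load-balancer and web-tier Security Groups and their
# Inbound Rules via the EC2 CLI. vpc-12345678 is a placeholder.
set -euo pipefail

create_groups() {
  local vpc=$1 elb_sg web_sg
  elb_sg=$(aws ec2 create-security-group --group-name load-balancer \
             --description "allow http 80 in from public" --vpc-id "$vpc" \
             --query GroupId --output text)
  aws ec2 authorize-security-group-ingress --group-id "$elb_sg" \
      --protocol tcp --port 80 --cidr 0.0.0.0/0

  web_sg=$(aws ec2 create-security-group --group-name web-tier \
             --description "allow http 80 in from elb" --vpc-id "$vpc" \
             --query GroupId --output text)
  # Restrict Port 80 on the web tier to traffic from the ELB's Group.
  aws ec2 authorize-security-group-ingress --group-id "$web_sg" \
      --protocol tcp --port 80 --source-group "$elb_sg"
}

if [ "${RUN_AWS:-0}" = "1" ]; then create_groups vpc-12345678; fi
```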

    • Create ELB { Assigning the Security Group to be used }
      • ELB Purpose { ELB balances traffic across multiple Web Servers to achieve High Availability in a Design to account for Failure }
      • Create ELB
        • Dashboard > EC2 > Load Balancers > Create Load Balancer
        • Name: web-tier-elb
        • Create LB Inside: <insert_vpc_to_launch_in> (use Default VPC if Custom VPC not yet created)
        • Create an "Internal" Load Balancer? No  { in case of VPC it may be made Public (given DNS Name using Public IP Address) or made Private ("Internal" LB where only a Private IP Address used) } (we require the ELB to be Public to receive Public Traffic for the specification)
        • Listener Configuration: <use_Default> (i.e. HTTP to ELB on Port 80 and ELB Forwarding Traffic over HTTP to Web Server Instances on Port 80)  (Note: Change to HTTPS secure protocol on Port 443 for e-commerce sites and let SSL terminate at ELB-level, which allows for SSL Certificate Management on the ELB instead of on each individual Web Server EC2 Instance so Scaling Out each EC2 Instance Tier is easier)
        • Configure Health Check: <insert_Script_on_Instances_specifying_what_Healthy_means> (i.e. heartbeat.php) (allows ELB to use CloudWatch Monitoring to check health of Web Server EC2 Instances so only Route Traffic to healthy instances)
        • Assign Security Groups: load-balancer
        • Add EC2 Instances to Load Balancer: <insert_EC2_Instances_when_Setup>
        • AZ Distribution: 
          • Cross-Zone Load Balancing (required if want EC2 Instances across multiple AZs. This ELB setting distributes traffic evenly across all EC2 Instances in multiple AZs)
          • Connection Draining (allows requests to gracefully complete before ELB completely removes an unhealthy EC2 Instance, or if EC2 Instance is down for maintenance it allows existing requests to complete so minimal impact on customers)
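The ELB configuration above can be sketched with the (classic, 2014-era) elb CLI. A minimal sketch; subnet and security-group IDs are hypothetical placeholders:

```bash
#!/bin/bash
# Sketch: create web-tier-elb with an HTTP:80 listener, a heartbeat.php
# health check, and Cross-Zone Load Balancing. IDs are placeholders.
set -euo pipefail

create_elb() {
  aws elb create-load-balancer --load-balancer-name web-tier-elb \
      --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
      --subnets subnet-aaaa1111 subnet-bbbb2222 \
      --security-groups sg-11112222

  # Route traffic only to instances whose heartbeat page responds.
  aws elb configure-health-check --load-balancer-name web-tier-elb \
      --health-check Target=HTTP:80/heartbeat.php,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

  # Distribute traffic evenly across instances in multiple AZs.
  aws elb modify-load-balancer-attributes --load-balancer-name web-tier-elb \
      --load-balancer-attributes '{"CrossZoneLoadBalancing":{"Enabled":true}}'
}

if [ "${RUN_AWS:-0}" = "1" ]; then create_elb; fi
```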
    • Create & Launch EC2 Instance (Configuring Apache and PHP with User Data)
      • Create EC2 Instance
        • Dashboard > EC2 > Instances > Launch Instance > 
        • > Choose AMI > QuickStart > Amazon Linux AMI
        • > Choose Instance Type (Size) (Determine best fit Virtual Server to run our App Use Case and Org based on Compute, Memory, Storage Options, Network Capacity, Resource Mix, and Price Points) (i.e. start small with a Micro Instance and increase EC2 Instance size when specific limitations reached and unable to meet requirements of limitations by Scaling Out Horizontally)
        • > Configure Instance Details
          • No of Instances: (i.e. Launch Multiple EC2 Instances at once from the same AMI)
          • Purchasing Option: (i.e. request to use Name Your Own Price Model to bid on spare EC2 instances that launch when bid exceeds current Spot Price)
        • > Configure Instance Details > Advanced Details
          • User Data: { i.e. use user-data.txt to Bootstrap the EC2 Instance by leveraging Cloud-Init to set-up the initial Configuration for a LAMP Stack Web Server so when the EC2 Instance (the "box") spins up, it will be pre-Configured as a Web App, which saves having to perform SSH and explicitly perform these commands on the server }
          • Example: As Text
#!/bin/bash -ex
# Everything after the Shebang line is executed as a Bash Script (as root) during EC2 Instance launch
yum update -y   # Yum Package Manager updates installed packages; the -y flag answers yes so it does not wait for user input
yum groupinstall -y "Web Server" "MySQL Database" "PHP Support"   # Yum Group Install installs all required packages for each named group, without waiting for user input
service httpd start   # Start the Apache Service as soon as the EC2 Instance has started up
chkconfig httpd on    # Configure Apache to start automatically whenever this EC2 Instance is booted
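The launch itself can be scripted with the CLI, passing the bootstrap file above as User Data. A minimal sketch; the AMI and security-group IDs are hypothetical placeholders:

```bash
#!/bin/bash
# Sketch: launch the web server with the user-data.txt bootstrap script.
# ami-xxxxxxxx and sg-xxxxxxxx are placeholders.
set -euo pipefail

launch_web_server() {
  aws ec2 run-instances \
      --image-id ami-xxxxxxxx \
      --instance-type t1.micro \
      --key-name dev-2014 \
      --security-group-ids sg-xxxxxxxx \
      --user-data file://user-data.txt \
      --query 'Instances[0].InstanceId' --output text
}

if [ "${RUN_AWS:-0}" = "1" ]; then launch_web_server; fi
```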
        • > Configure Instance Details > Add Additional Storage (if required)
        • > Configure Instance Details > Tag Instance
          • Tag EC2 Instance to Categorise and improve Management & Administration of Resources in our System Architecture by providing additional Metadata about the EC2 Instance (Key Value Pairs)
        • Example: Tag Instance
Key          |  Value
name         |  web server 1
env          |  dev
role         |  webserver

        • > Configure Instance Details > Configure Security Group (Create New or Use Existing) 
          • Select the existing Web Server Tier Security Group
        • Specify the Key Pair to use to Launch the EC2 Instance
          • Select the dev-2014 Key Pair (whose Private Key we downloaded)
    • Connect via HTTP to EC2 Instance (verify EC2 Instance)
      • Dashboard > EC2 > Instances > (select the Instance)
      • > Description > (Copy the Public IP Address)
      • Browser Testing (Paste URL http://<insert_Public_IP_Address>)
        • Fails (because the Web Server Tier Security Group only allows HTTP Access from the ELB, not from the Public Internet)
      • Permanent Solution: Re-Structure so the EC2 Instance sits behind the ELB (done below via SSH)
      • Temporary Solution: Add a New Inbound Rule to the Web Tier Security Group
Web-Server Tier Security Group
Type    Protocol  Port Range   Source
HTTP    TCP       80           My IP 
      • (Refresh) Browser Testing (Direct Testing Paste URL http://<insert_Public_IP_Address>)
        • Passes (Amazon Linux AMI Test Page loads indicating that Apache was installed and loaded)
    • Connect via SSH to EC2 Instance
      • Purpose: Login to the EC2 Instance (already running and passed all Status and Instance Checks) via SSH and Add the heartbeat.php Script so that we can implement the Permanent Solution of Re-Structuring the EC2 Instance to be behind the ELB (instead of just the Temporary Solution), such that when a user visits http://<insert_Public_IP_Address>, it will load via the ELB rather than directly.
        • Previously Configured on the EC2 Instance as the file that will allow ELB to use CloudWatch Monitoring to check health of Web Server EC2 Instances so only Route Traffic to healthy instances
      • Temporarily Enable SSH Access to the EC2 Instance (the "Box") via Port 22: Add a New Inbound Rule to the Web Tier Security Group (to be removed when SSH access is no longer needed)
      • Note: If a Bastion Host is used, this EC2 Instance can be set up with an Inbound Rule allowing SSH only from the Bastion Host as the source (instead of a specific IP)
Web-Server Tier Security Group
Type    Protocol  Port Range   Source
SSH     TCP       22           My IP 
      • SSH into EC2 Instance

usage: ssh [-1246AaCfgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
           [-D [bind_address:]port] [-e escape_char] [-F configfile]
           [-I pkcs11] [-i identity_file]
           [-L [bind_address:]port:host:hostport]
           [-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]
           [-R [bind_address:]port:host:hostport] [-S ctl_path]
           [-W host:port] [-w local_tun[:remote_tun]]
           [user@]hostname [command]

$ ssh -i ~/Downloads/dev-2014.pem <insert_Public_IP_Address> -l ec2-user // SSH Client: -i specifies the Identity File (the Read-Only .pem file downloaded earlier), the Public IP Address identifies the EC2 Instance to login to, and -l specifies the Login Name (AWS Linux EC2 Instances launch with an 'ec2-user' login name)

      • Add heartbeat.php File to the Doc Root
        • Allows ELB to understand whether this EC2 Instance is healthy or not
$ cd /var/www/html // Default location for Apache Doc Root
$ sudo groupadd www // Update Ownership and Permissions of Doc Root Directories to allow EC2 User (which currently already has sudo Access on the EC2 "Box") and any other future Member of the WWW Group to manipulate Files on the Apache Web Server Doc Root (Add, Modify, Delete). Add the WWW Group to the EC2 Instance, giving it Group Ownership of the /var/www/html Directory, and Write Permissions for the group
$ sudo usermod -a -G www ec2-user
// Logout and re-Login for changes to take effect
$ exit
ssh -i ~/Downloads/dev-2014.pem <insert_Public_IP_Address> -l ec2-user
$ groups // Verify that current user EC2 User is a Member of the WWW Group now (ec2-user should be listed)
$ sudo chown -R root:www /var/www // Change Ownership of /var/www (Doc Root and its Contents) recursively, setting owner root and group www
$ sudo chmod 2775 /var/www // Change Permissions of /var/www: add Group Write Permission and set the setgid bit so future Sub-Directories inherit the group (easier when adding more files; the find commands below handle existing sub-directories and files)
$ find /var/www -type d -exec sudo chmod 2775 {} + // Recursively find all Sub-Directories of /var/www and add Group Write Permission plus the setgid bit
$ find /var/www -type f -exec sudo chmod 0664 {} + // Recursively find all Files under /var/www and add Group Write Permission (0664: no execute bit for files)
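The 2775 mode used above can be demonstrated locally: the leading 2 is the setgid bit, which makes new sub-directories inherit the parent directory's group. A quick sketch using a temporary directory, nothing AWS-specific:

```shell
# Demonstrate chmod 2775: rwx for owner, rws (write + setgid) for group,
# r-x for others. The 's' in the group triad marks the setgid bit.
DEMO=$(mktemp -d)
chmod 2775 "$DEMO"
perms=$(ls -ld "$DEMO" | cut -c1-10)
echo "$perms"   # drwxrwsr-x
rm -rf "$DEMO"
```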

      • Add Content (Static Website or PHP App)
        • Verify that navigating to the /var/www Directory and opening the html Directory (the actual Doc Root) works
$ pwd
$ cd /var/www
$ cd html
        • Create heartbeat.php Script File (to check that the EC2 Instance is healthy before being used)
$ touch heartbeat.php // Create PHP File
$ echo "<?php echo 'OK';" > heartbeat.php // Echo Contents into File
$ more heartbeat.php
<?php echo 'OK';
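The decision the ELB makes from this script can be sketched as a tiny function. check_health is a hypothetical helper for illustration; the real ELB HTTP health check keys off the response status code, and heartbeat.php returning 200 with body "OK" satisfies it:

```shell
# Hypothetical sketch of the ELB health-check decision: a 200 response
# (here paired with the "OK" body heartbeat.php emits) marks the
# instance InService; anything else marks it OutOfService.
check_health() {
  local status="$1" body="$2"
  if [ "$status" -eq 200 ] && [ "$body" = "OK" ]; then
    echo "InService"
  else
    echo "OutOfService"
  fi
}
check_health 200 "OK"     # -> InService
check_health 500 "Error"  # -> OutOfService
```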

        • Create instance.php Script File (to determine which EC2 Instance the ELB routed the request to)

$ touch instance.php // Create another PHP File

$ subl instance.php // Open the PHP File in Sublime Text to Edit

<?php
$this_instance_id = file_get_contents('http://<Insert_IP_Address>/latest/meta-data/instance-id'); // Get the "Instance ID" from the EC2 Instance Metadata, which is always at the same End Point
if ($this_instance_id !== false) {
  echo (string)$this_instance_id;
} else {
  echo "Sorry, instance id unknown"; // Explicitly test and echo out if unknown
}

        •  Browser Testing of "Heartbeat.PHP" Script (Direct Testing Paste URL http://<insert_Public_IP_Address>/heartbeat.php)
          • Passes (displays the Contents: "OK" with status: 200)
          • Note: This is sufficient for the ELB to know the EC2 Instance is healthy, allowing us to actually now start using the ELB instead of Directly accessing the EC2 Instance in the Browser
        •  Browser Testing of "Instance.PHP" Script (Direct Testing Paste URL http://<insert_Public_IP_Address>/instance.php)
          • Passes (displays Instance ID in browser so we know we are hitting a specific Instance ID)
      • Add EC2 Instance(s) to the ELB
        • Dashboard > EC2 > Network & Security > Load Balancers > Instances > 
        • > Edit Instances (Add an EC2 Instance(s) behind the ELB)
          • Select the EC2 Instance (currently running 'web server 1')
          • Note: Status will be "OutOfService" initially while the EC2 Instance is being registered with the ELB. Eventually it will change Status to "InService" and be able to receive traffic from behind the ELB
        •  Browser Testing of ELB (Direct Testing Paste URL to try and hit ELB Instance)
          • Dashboard > EC2 > Network & Security > Load Balancers > Description >  (Copy the ELB DNS Name (A Record))
          • Open Browser and Paste
            • http://<insert_ELB_DNS_Name>
          • Passes (Amazon Linux AMI Test Page loads indicating that Apache was installed and loaded, and confirming that traffic is being routed to the ELB Instance and then forwarded, routing to healthy web server)
        • Check the Health Check is working by Tailing the Apache Access Log on the EC2 Instance
          • Follow all requests from the ELB Health Checker, which should be pinging the heartbeat.php File at the Configured frequency
$ cd /etc/httpd // Change to Directory with Apache Log Files
$ sudo tail -f logs/access_log // Tail the logs using Root Access to start viewing the GET Requests to /heartbeat.php from the ELB-HealthChecker
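The tail should show lines like the sample below; filtering on the ELB-HealthChecker user agent separates health-check traffic from real users. The sample line and its field values are illustrative, not copied from a real log:

```shell
# Sample access_log line of the kind the ELB health check produces
# (illustrative values); grep isolates the health-checker user agent.
line='10.0.1.23 - - [27/Dec/2014:10:00:00 +0000] "GET /heartbeat.php HTTP/1.1" 200 2 "-" "ELB-HealthChecker/1.0"'
agent=$(echo "$line" | grep -o 'ELB-HealthChecker/[0-9.]*')
echo "$agent"   # ELB-HealthChecker/1.0
```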

    • Create & Launch MySQL RDS Instance (Persistent DB Tier)  
      • Create DB Instance
        • Dashboard > RDS > Create Instance > Launch a DB Instance
        • > Select Engine (i.e. MySQL, PostgreSQL, Oracle, MS SQL Server)
        • > Production? (Business Decision based on App requirements; implementation costs are involved, but cheaper than dealing with traditional performance-related issues and traditional disaster recovery approaches)
          • For Production Servers (rather than just a Dev Server) we get Durability and Redundancy in the DB Tier with no downtime, achieved by means of Multi-AZ Deployment: two DB Instances are launched, a Primary DB in AZ a and a Standby/Backup DB in AZ b. AWS performs synchronisation and reroutes connection requests if issues are encountered, failing over from the Primary DB to the Standby
          • For Production Servers (rather than just a Dev Server) we get Provisioned IOPS Storage (fast, predictable, and consistent throughput performance) for the DB Layer
        • > Specify DB Details
          • DB Instance Class: micro (eligible for Free Tier)
          • Allocated Storage: 5GB (minimum)
          • Use Provisioned IOPS: No (only consider if using a Production Environment)
          • DB Instance Identifier: dev-db
          • Master Username: admin
          • Master Password: ____
        • > Configure Advanced Settings
          • VPC Security Group(s): web-tier (VPC) (i.e. allows MySQL on Port 3306 from the Web Server Tier)
          • Database Name: my_db_name
      • Test Connection to RDS DB Instance from Web Server Tier
        • SSH into the EC2 Instance
        • Note: MySQL should already be Installed on this EC2 Instance based on the User Data (user-data.txt) Bootstrapping performed previously
$ mysql -h <Insert_DB_Instance_Endpoint> -u admin -p // From within the SSH session, use the MySQL Client: -h specifies the HostName (obtain the "Endpoint" from the DB Instance in the AWS Management Console), -u the Master Username, and -p prompts for the Password
mysql> show databases;
mysql> use my_db_name;

    • Create & Launch Custom Server Image (Create an Image (AMI) based on the existing EC2 Instance already setup so we can easily recreate and spin up Clone EC2 Instances of the Web Server Tier Resources based on the Image)
      • Create Image (AMI) of the existing EC2 Instance
        • Dashboard > EC2 > Instances > Right Click > "Create Image (AMI)"
          • Image Name: initial-dev-image-2014
          • Image Description: initial web server
          • No Reboot? (leave unchecked so the EC2 Instance reboots while the Image is created, producing a consistent snapshot)
        • Dashboard > Images > AMIs (Processes the Creation of the AMI Image, and when ready it has "available" status)
      • Create New EC2 Instance (Clone) based on the Image (AMI) in a Different AZ
        • Check AZ of existing EC2 Instance 
          • Dashboard > EC2 > Instances > AZ
        • Launch New Clone: Dashboard > Images > AMIs > Right Click > "Launch"
          • Choose Instance Type: Micro
          • Configure Instance Details: Change Subnet to Different AZ
          • Tag Instance:
Key          |  Value
name         |  web server 2
env          |  dev
role         |  webserver

          • Configure Security Group > Assign Existing Security Group: web-tier
          • Launch EC2 Instance (Clone)
        • Check EC2 Instance (Clone) Status
          • Dashboard > EC2 > Instances (Wait for Instance & Status Checks to Pass)
        • Test in Browser (EC2 Instance Clone)
          • Click 'web server 2'
          • Dashboard > EC2 > Instances > Description (Copy the Public IP Address, as we will leverage Security Group Rule that allows hitting from our IP Address Directly instead of through the ELB for Testing)
          • Open Browser and Paste
            • http://<insert_EC2_Web_Server_Instance_Clone_Public_IP_Address>
            • Passes (Amazon Linux AMI Test Page loads indicating that Apache was installed and loaded)
          • Test PHP Scripts 
            • http://<insert_EC2_Web_Server_Instance_Clone_Public_IP_Address>/heartbeat.php
          • Open Browser and Paste
            • http://<insert_EC2_Web_Server_Instance_Clone_Public_IP_Address>/instance.php
        • Add EC2 Instance(s) to the ELB
          • Repeat the Previous Process "Add EC2 Instance(s) to the ELB" but for this Clone of the 1st EC2 Instance
          • > Edit Instances (Select 'web server 2')
        • Browser Testing of ELB
          • Repeat the Previous Process performed
          • Check which EC2 Instance the ELB forwarded us to. Open Browser and Paste
            • http://<insert_ELB_DNS_Name>/instance.php
          • Note: With only one user hitting the ELB, requests are processed Round-Robin (switching from the EC2 Web Server Instance in one AZ to the other in a different AZ with each refresh of the Browser page)
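The round-robin behaviour can be illustrated with a toy loop (the instance IDs below are made up):

```shell
# Toy round-robin: with a single client, successive requests alternate
# between the two registered instances (IDs are made up).
a="i-0aaa"; b="i-0bbb"
out=""
for req in 1 2 3 4; do
  if [ $((req % 2)) -eq 1 ]; then target="$a"; else target="$b"; fi
  out="$out$target "
  echo "request $req -> $target"
done
```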
      • Auto-Scaling Setup
        • Allows New Servers to be automatically spun up in case of any problems (down or unhealthy) with any of the existing EC2 Web Server Instances (leveraging the Elasticity of the Cloud), so our Web Server Tier never drops below 2 running EC2 Web Server Instances
        • Auto-Scaling Components
          • Launch Configuration (what to Scale)
          • Auto-Scaling Groups (how want to Scale)
          • Scaling Policy (associated with both Launch Config & Auto-Scaling Groups) { how we decide to implement Elasticity:
            • Scale In (i.e. Remove Servers on a usage indicator such as low CPU or low Memory, to reduce our bill)
            • Scale Out (i.e. Add Servers on an event indicator such as a server being unhealthy or down) }
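The Scale In / Scale Out decision can be sketched as a small function. The 80% / 20% CPU thresholds and the minimum of 2 instances are assumptions for illustration, not values from the course:

```shell
# Hypothetical scaling-policy decision. Thresholds are assumed values.
scale_decision() {
  local cpu="$1" current="$2" min=2
  if [ "$cpu" -gt 80 ]; then
    echo "scale-out"   # add a server
  elif [ "$cpu" -lt 20 ] && [ "$current" -gt "$min" ]; then
    echo "scale-in"    # remove a server, but never drop below the minimum
  else
    echo "hold"
  fi
}
scale_decision 90 2   # -> scale-out
scale_decision 10 3   # -> scale-in
scale_decision 10 2   # -> hold (already at the minimum of 2)
```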
        • Create Launch Configuration (what to Scale)
          • Dashboard > EC2 > Auto Scaling > Launch Configurations > Create Auto Scaling Group > Create Launch Configuration (Define EC2 Instance Properties for Launch Cycle Scaling In or Out)
          • Choose AMI > My AMIs (Select the AMI with the Web Server Tier previously created, called 'initial-dev-image-2014')
          • > Configure Details > Name: web-server-launch-config
          • > Configure Security Group > Assign Existing Security Group: Select 'web-tier' Security Group
        • Create Auto-Scaling Group (how want to Scale)
          • > Configure Auto Scaling Group Details
          • Specification: Ensure a capacity of at least 2 Web Servers is always running
          • Name: web-dev-scaling
          • Group Size: Start with 2 Instances
          • Network: Default VPC
          • Subnet: <Insert_EC2_Web_Server_Instances_Of_Both_AZs> (i.e. US East 1a, and US East 1b, for High Availability)
          • > Advanced Details
          • Load Balancing (associate with ELB?): Yes, 'web-tier-elb'
          • Health Check Type: ELB (selecting ELB performs extra coverage with ELB System checks in addition to the EC2 System checks on Web Servers)
          • Keep this Group at its Initial Size: Yes (our Use Case is simply to ensure at least 2 Web Servers are running at any time)
          • Add Notifications (i.e. given an SNS Topic, we could Subscribe to be notified of any Scaling action via email / SMS)
          • Configure Tags for New Instances:

Key          |  Value

name         |  web-server-auto-scaled-instances
env          |  dev
role         |  webserver

          • Finish Creating Auto Scaling Group
        •  Review Launch Configuration 
          • Launch Config: web-server-launch-config
          • Auto Scaling Group: web-dev-scaling (associated with the 'web-server-launch-config', Required Instances: 2, Desired Capacity: 2, Min & Max: 2) 
          • Dashboard > EC2 > Instances 
          • Check that 2 additional EC2 Web Servers have spun up, both with the same name 'web-server-auto-scaled-instances', one in each of the AZs of the original 2 EC2 Web Servers 'web server 1' and 'web server 2' that are also shown
          • Click Settings > Show/Hide Columns > aws:autoscaling:groupName (displays extra Column)
        • Check Auto Scaling EC2 Web Server Instances behind ELB have 'InService' Status
          • Dashboard > EC2 > Network & Security > Load Balancers > Instances
          • Wait for Status & Instance Checks to Pass
        • Test Auto Scaling Works
          • Dashboard > EC2 > Instances
          • Simulate by Shutting Down one of the 'web-server-auto-scaled-instances' (so it enters the 'shutting-down' State), and notice that an Additional 'web-server-auto-scaled-instances' is automatically spun up (handled by the Auto Scaling Group)
        • Verify Auto Scaling Logged
          • Dashboard > EC2 > Auto Scaling > Auto Scaling Groups
          • Select Group and Click "Scaling History" (shows the Termination of the EC2 Instance and the Launch of a New EC2 Instance to meet the requirement to always have at least 2 EC2 Instances running behind the ELB)

Cloud Interaction with SDK

SDK { Select desired Language, Framework, Platform }
SDK Authentication Options { Security Credentials provided to authenticate against AWS Account performed by either: 
  • Client Factory Method being passed Credentials
  • Setting Credentials Post-SDK Installation
  • Temporary Credentials from AWS STS 
  • IAM Role Approach (Recommended by AWS when the App runs on an EC2 Instance, since IAM grants the App Access to other AWS Resources) }
SDK Authentication Options Considerations for API 
  • Environmental Variables usage (Development, Testing, Production)
  • Environment Host
  • Quantity of Sets of Credentials 
  • Type of Project being Developed
  • Frequency of Rotating Credentials 
  • Reliance on Temporary Credentials
  • Deployment Process
  • Application Framework
Launch EC2 Instance in specific IAM Role
  • Steps Overview
    1. Create IAM Role
    2. Define which AWS Accounts & Services can assume the IAM Role
    3. Define API Actions & Resources the App can use when in the IAM Role
    4. Specify IAM Role when EC2 Instance is Launched 
  • Steps Implementation
    • Steps 1 - 3
      • Dashboard > IAM > Roles > Create New Role
      • Name: webserver (the Role we will Launch the EC2 Web Server Instances in, so the Instances have Access to API and Services)
      • Role Type: AWS EC2 (allow EC2 Instances to call AWS Services on our behalf)
      • Permissions > Select Policy Template (Role Access & Privileges): Power User Access (Restrict to Services Required)
      • Modify Policy Document in JSON (if desired)
    • Steps 4 (Launch EC2 Instance and Choose IAM Role associated)
      • Dashboard > EC2 > AMIs > Right Click 'initial-dev-image-2014' > Launch (Launch an EC2 Instance from the AMI we already created)
      • > Choose Instance Type:  Micro Instance
      • > Configure Instance Details > IAM Role:  webserver 
      • > Tag Instance:

Key          |  Value

name         |  webserver-in-role
env          |  dev
role         |  webserver

      • > Configure Security Group > Assign Existing Security Group: web-tier
      • Launch EC2 Instance (Clone with IAM Role)
      • Check EC2 Instance Status
        • Dashboard > EC2 > Instances (Wait for Instance & Status Checks to Pass)
    • Connect via SSH to Verify EC2 Instance Clone is Launched with IAM Role
      • Click 'webserver-in-role'
      • Dashboard > EC2 > Instances > Description (Copy the Public IP Address)
      • SSH into EC2 Instance
$ ssh -i ~/Downloads/dev-2014.pem <insert_Public_IP_Address_of_EC2_Instance> -l ec2-user

$ curl http://<Insert_IP_Address>/latest/meta-data/iam/security-credentials/webserver // Access IAM Role Security Credentials for Role just created called 'webserver' from Metadata

      • IAM Returned JSON
{  // All Role Information (of Power User) required to supply to the API in order to Access Information
  "Code" : "Success",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "....",
  "SecretAccessKey" : "...",
  "Token" : "..."
}
SDK Installation 
  • Install the AWS SDK for PHP now that an EC2 Web Server Instance is running 
SDK Installation Options 
  • PHP
    • Composer (Recommended) { PHP Dependency Management Tool that allows Dependency Libraries and 3rd Party Tools to be declared for project and automatically installs them }
    • Phar { Pre-Packaged PHP Archive File }
    • PEAR { Package Manager }
    • ZIP Download & Unpack
  • SSH into EC2 Instance
$ ssh -i ~/Downloads/dev-2014.pem <insert_Public_IP_Address_of_EC2_Instance> -l ec2-user
cd /var/www/html/ // Change Directory to App 
touch composer.json // Create Composer JSON File (composer.json) that instructs Composer on what Dependencies to install
subl composer.json // Open and Add Dependency to install the AWS SDK for PHP

  "require" : {
    "aws/aws-sdk-php" : "2.*" // Latest PHP Version

curl -sS https://getcomposer.org/installer | php // Install Composer (-s for Silent, -S to show any Errors, | pipes the installer directly into the PHP interpreter for auto-installation; afterwards use it with 'php composer.phar')
php composer.phar install // Load and Install PHP SDK along with all Dependencies (symfony/event-dispatcher, guzzle/guzzle, aws/aws-sdk-php) needed by the SDK by running PHP Archive File created during Installation of Composer 

Script (in PHP) to allow SDK to interface with AWS Services (i.e. S3)
  • Create New S3 Bucket in Dashboard
    • Dashboard > S3 > Create New Bucket
  • Create New S3 Bucket with API using SDK (PHP)
  • SSH into EC2 Instance
$ ssh -i ~/Downloads/dev-2014.pem <insert_Public_IP_Address_of_EC2_Instance> -l ec2-user
cd /var/www/html/ // Change to Doc Root
touch createbucket.php // Create New PHP File 
subl createbucket.php // Open and Edit to Write PHP Script that Creates the S3 Bucket

<?php
require 'vendor/autoload.php'; // Call Composer autoloader to find and load any needed Dependency Classes
use Aws\S3\S3Client; // Use shorthand S3Client rather than the Full Namespace Aws\S3\S3Client in the code
try {
  $s3Client = S3Client::factory(); // Implement the S3Client using the Factory Method
  $s3Client->createBucket(array( // Call the 'createBucket' Method of the S3Client, passing in the required Parameters (giving it a Name)
    'Bucket' => 'aws-training-2014-test',
  ));
} catch (\Aws\S3\Exception\S3Exception $e) {
  echo $e->getMessage();
}
echo "All done";

    • Note: No AWS Security Credentials have been explicitly provided, even though they are required to interface with the AWS Account through the API. Since we launched the EC2 Web Server Instance in the 'webserver' IAM Role (which has Access to the S3 Service), the Credentials are already available in the Metadata of the EC2 Instance, and the SDK (PHP) is already set up to obtain them from the EC2 Instance Metadata automatically (i.e. no hard coding and no config files to maintain across different EC2 Instances; the code is clean and we fully control Application Access by managing the Role and its Permissions through IAM)
    • Note: When not using the IAM Role-based Approach, you need to pass the Credential Profile directly into the call to the Factory Method to obtain API Access
  • Run Script (PHP) and Check that App Communicating with AWS Services using API and has Created New S3 Bucket using the Language-Specific SDK (PHP)
    • Run with: 
      • Command-Line (Directly since PHP Interpreter is running)
php createbucket.php
All done
      • Browser (since in Doc Root)
        • Dashboard > S3
        • Check that a New S3 Bucket has been created Named 'aws-training-2014-test'

