Thursday, October 01, 2015

Notes Amazon AWS Developer Certification - Associate

Below are the notes that I made while studying for the Amazon AWS Developer Certification - Associate.

Hope someone will find this useful too.

-------------------------------------

AWS Essentials
sudo yum install python-pip
IAM role - must be assigned during creation/launch of an instance
Python Boto SDK
API access credentials - used in code and in the AWS account
- Access Key ID, Secret Access Key -> attach user policy
- boto.connect_s3('accessKey','Secret')
Federated credentials ASSUME a role to work on instances
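
A minimal boto sketch of connecting to S3 with explicit API credentials (the key values are placeholders; in practice prefer an instance role):

import boto

# Placeholder credentials - normally come from a user with an attached policy
conn = boto.connect_s3(aws_access_key_id='AKIA...EXAMPLE',
                       aws_secret_access_key='SECRET...EXAMPLE')
for bucket in conn.get_all_buckets():
    print(bucket.name)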

AWS S3 - Simple Storage Service
- 100 buckets per account (cannot be increased); no limit on the number of objects
- names: lower/upper case letters, numbers, periods, dashes - cannot place symbols next to each other; 3-63 characters
- objects: min 1 byte, max 5 TB; largest single upload is 5 GB; for large objects use Multipart Upload
- bucket.s3-website.region.amazonaws.com
Error 404 file not found - contents not in bucket
Error 403 - no permission to that bucket
Error 400 - invalid bucket state
Error 409 - bucket has something in it, cannot delete yet
Error 500 - internal server error

Objects are stored in lexicographical order by key - introduce RANDOMness by using a hash-key prefix, eg bucket/8761-2010-25-05......
Sequentially named files are slower because they are likely stored in the same partition; with different random name prefixes, objects are spread across different storage partitions.
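
A small sketch (bucket and key names are illustrative) of adding a short hash prefix to otherwise sequential keys before uploading with boto:

import hashlib
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('my-example-bucket')  # assumed existing bucket

def upload_with_hash_prefix(filename, key_name):
    # The first few hex chars of an MD5 of the key spread objects across partitions
    prefix = hashlib.md5(key_name.encode('utf-8')).hexdigest()[:4]
    key = bucket.new_key('%s-%s' % (prefix, key_name))
    key.set_contents_from_filename(filename)

upload_with_hash_prefix('report.csv', '2010-25-05-report.csv')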

- host static website on buckets
- have index, error, redirect documents
- Route 53 allows the bucket to be mapped to your website yourdomain.com

Static Hosted Website
- infinitely scalable
- create static HTML - create redirect from blog.
- if using Route 53 to point a domain at S3, the bucket name AND the domain name must be the same.
- use the AWS nameservers - set them as the nameservers for the domain with your website host.


CORS - Cross origin resource sharing
- to load content from another bucket, you need to load it from another domain name
- use AJAX and JavaScript.
- set up CORS to allow JavaScript to perform AJAX calls against another domain. Go to the bucket that content is shared from -> Permissions -> add CORS -> manually specify the bucket URL that the requests will come from. Use the CORS Configuration Editor.
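
As a sketch, the CORS rules can also be set programmatically with boto (bucket name and origin are assumptions):

import boto
from boto.s3.cors import CORSConfiguration

conn = boto.connect_s3()
bucket = conn.get_bucket('my-shared-content-bucket')  # assumed bucket name

cors_cfg = CORSConfiguration()
# Allow GET requests from the site that embeds this bucket's content
cors_cfg.add_rule('GET', 'http://www.example.com', allowed_header='*', max_age_seconds=3000)
bucket.set_cors(cors_cfg)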

IAM and Bucket policies
- restrict based on IP address or user agents
IAM - user/account level
Bucket policy - resource level - 20 KB max size; only the bucket owner can set a bucket policy
object / bucket / namespace / subdirectories etc
bucket ownership cannot be transferred - need to remove everything first.
ACLs - are cross-account object/bucket resource-level permissions
Owner of bucket - has full permissions; if there is no IAM or bucket policy then he can be denied. Even the owner can be denied.
- explicit deny always overrides allow.
- permissions are applied to S3 ARNs
- apply policy to a user's ARN (each user has one)
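
A hedged sketch of attaching a bucket policy with boto (bucket name, account id and policy contents are illustrative):

import json
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('my-example-bucket')  # assumed bucket name

# Example policy: allow one IAM user's ARN to GET objects from this bucket
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/example-user"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-example-bucket/*"
    }]
}
bucket.set_policy(json.dumps(policy))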


S3 Error Messages
----
import boto
conn = boto.connect_s3()
bucket = conn.create_bucket('test')
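
A sketch of catching the error responses listed above with boto (for example, a 409 Conflict comes back when the bucket name is already taken):

import boto
from boto.exception import S3ResponseError

conn = boto.connect_s3()
try:
    bucket = conn.create_bucket('test')  # very common names usually exist already
except S3ResponseError as e:
    # e.status is the HTTP code (403, 404, 409, ...), e.error_code the S3 error name
    print(e.status, e.error_code, e.reason)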

Server Side Encryption - SSE
- AWS handles most of it - just need to choose what is to be encrypted - at the bucket level and the object level.
- add the x-amz-server-side-encryption request header to the upload request.
- AES256; bucket policies can require all objects to use SSE; enable SSE via the AWS console
- the Python SDK does not support this encryption header for now.
- go to Object -> Properties -> Details


DynamoDB - NoSQL

- only create tables, not a DB
- 256 tables / region -> contact AWS to increase the limit
- read throughput, write throughput - determine resources - auto provisions the resources needed for the load - stored on SSD (fast)
- Primary Range Key - Primary Hash Key
- don't need to manage a DB server
- Forum - Reply - Thread tables work together
- Product Catalog table - has a Hash key only - it specifies what we can search on. Can search on Id only since we only have a Hash key.
Can still search other columns if a) we have a range key, b) we set up the primary key as another column, c) we set up secondary indexes.
- Primary key should be unique, otherwise it will be SLOWER
- Reply table has a Hash key and a Range Key.
-- combine Hash / Range in a search
-- cannot do table joins
- Create a table with a Secondary Index
-- create table, select Hash as Id (UNordered index), Range key (an Ordered index is created) as PostedBy,
- need to create an index for a column that we want to search on.
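
A hedged boto (dynamodb2) sketch of creating a table with a hash key, a range key and a local secondary index (table and attribute names are illustrative):

from boto.dynamodb2.fields import HashKey, RangeKey, AllIndex
from boto.dynamodb2.table import Table

# Hash key = Id, Range key = PostedBy, plus a local secondary index on Message
replies = Table.create('Reply',
                       schema=[HashKey('Id'), RangeKey('PostedBy')],
                       throughput={'read': 5, 'write': 5},
                       indexes=[AllIndex('MessageIndex',
                                         parts=[HashKey('Id'), RangeKey('Message')])])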

Limits & Overview
- fully managed, read/write scale without downtime; can specify throughput by calling UpdateTable. Data is spread across servers, stored on SSD, replicated in different zones.
- fault tolerance by synchronous replication
- 64 KB limit per item (item = row) (attribute = column)
- integrates with MapReduce and Redshift (Data Warehouse)
- Scalable - no limits on storage
- provisioned throughput - set during create or update - specify read/write
- Hash index - an index on the PK attribute allows apps to retrieve data by specifying PK values. ONLY the PK can be queried.
- Hash PK - a value that uniquely identifies an item in the table
- Hash-and-Range PK - a pair of attributes that together form the unique identifier for each item in the table.
- Hash is unordered, Range is ordered
- Secondary indexes - data structures with a subset of attributes from the table, along with an alternate key.
- Local secondary index - same hash key as the table but a different range key; "local" because its scope stays within the same hash key partition as the table.
- Global secondary index - hash and range key are different from those of the table, therefore queries on the index can span all data in the table, across partitions.
- Limits: 256 tables per region by default but can be increased
- range PK 1024 bytes, hash PK 2048 bytes, item 64 KB incl. attribute names, 5 local and 5 global secondary indexes/table

Provision Throughput
- Read - 1 read capacity unit = 1 strongly consistent read/sec or 2 eventually consistent reads/sec for items up to 4 KB. Eventually consistent - a write can go to two places; if you must read the most recent data, use a strongly consistent read.
- Write - 1 write capacity unit = 1 write per second for items up to 1 KB
- 1 unit of throughput -> eg shared by 7 items of 1 KB -> 1 KB/s -> 7 seconds
- 4 units of throughput -> eg shared by 7 items of 1 KB -> ceil(7/4) = 2 seconds
- Read example: item of 3 KB is rounded up to 4 KB, 80 items/sec to read. Throughput required = 80 * (one 4 KB item) = 80 strongly consistent read units,
or 80/2 = 40 eventually consistent read units.
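
A tiny sketch of the capacity-unit arithmetic above (the sizes and rates are the example values):

import math

def read_capacity_units(item_size_kb, items_per_sec, strongly_consistent=True):
    # Each read unit covers one 4 KB chunk; eventually consistent reads count half
    units = int(math.ceil(item_size_kb / 4.0)) * items_per_sec
    if not strongly_consistent:
        units = int(math.ceil(units / 2.0))
    return units

print(read_capacity_units(3, 80))         # 80 strongly consistent read units
print(read_capacity_units(3, 80, False))  # 40 eventually consistent read units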

Queries vs Scan API calls
- Query (Get) uses only the PK and Secondary Index keys for searching; efficient because it searches the index only.
- Scan - reads every item, so it's inefficient. Uses a filter and looks through all rows, returning only the items that match the filter; only does eventually consistent reads.

Conditional Writes and Atomic counters
- someone updates the table while another client tries to write the same row.
- Conditional write - only write if the current attribute meets a condition, say ONLY IF price = $10, then update to $12. If another update already changed it to $8, this write won't happen.
- Atomic counter - allows increasing/decreasing a value without interfering with other write requests; all write requests are applied in the order they are received. Use UpdateItem to increment/decrement (sketch below).
- Eventually Consistent - multiple copies across servers BUT the data read may not be the most recent. THUS use strongly consistent reads, at the cost of double the throughput.
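
A hedged sketch of an atomic counter using boto's DynamoDB layer2 API (table, key and attribute names are assumptions):

import boto.dynamodb

conn = boto.dynamodb.connect_to_region('us-east-1')
table = conn.get_table('ProductCatalog')    # assumed existing table
item = table.get_item(hash_key='Item123')   # assumed existing item

# ADD action: the increment is applied atomically on the server side
item.add_attribute('Views', 1)
item.save()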

Temp Access to AWS resource (eg DynamoDB)
- eg mobile app - need to use DynamoDB.
- don't want to put API credentials in the code.
- create temp access on user end, and AWS end
- Federated ID providers and IAM role.
- create new role for each identity provider, eg facebook, Google, ...
- when the user logs in, Facebook gives a token (temporary credentials).
- In the role, define what permissions the role has, eg read/write access, which tables it can access.
- assumeRoleWithWebIdentity() requests temporary AWS security credentials using the provider token; specify the ARN of the IAM role (need the Amazon Resource Name of the role) - see the sketch below.
- Create Role - "Role for Identity Provider Access" - Grant access to web identity -
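
A hedged boto STS sketch of exchanging a web identity token for temporary credentials (the role ARN and the token variable are placeholders):

import boto.sts
import boto.dynamodb2

sts = boto.sts.connect_to_region('us-east-1')
assumed = sts.assume_role_with_web_identity(
    role_arn='arn:aws:iam::111122223333:role/MobileAppRole',   # placeholder role ARN
    role_session_name='app-user-session',
    web_identity_token=token_from_provider)   # token returned by the Facebook/Google login

# Use the temporary credentials to talk to DynamoDB
ddb = boto.dynamodb2.connect_to_region(
    'us-east-1',
    aws_access_key_id=assumed.credentials.access_key,
    aws_secret_access_key=assumed.credentials.secret_key,
    security_token=assumed.credentials.session_token)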


SNS - Simple Notification Service
- subscription = designation of an endpoint
- send email, notice to apps, google cloud messaging, integrate with Cloud watch
- push notification services - Apple Push Notification Service (APNS), Google Cloud Messaging, Amazon Device Messaging
- need to subscribe endpoints to SNS topics; notifications can come from, say, DynamoDB, CloudWatch, S3 RRS (Reduced Redundancy Storage, 99.99%).
- eg get notified if CPU > 80%, or the DB needs more provisioning.
- Create new Topic -> topic ARN -> send to www.xxx.notify.php where the webpage listens for SNS email/JSON, etc. Endpoint = ARN (see the boto sketch after this list)
SQS - Amazon queue; when sending messages to a queue, add its ARN as a subscription to SNS. In SQS, select permissions, receive messages on all resources.
- SQS - usually EC2 will poll SQS in order to do something after a message arrives in SQS.
- SNS sends each message to ALL endpoints of a topic. The message format can differ per endpoint type.
- HTTP/s, SMS, EMail/JSON, SQS, Apps
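
A hedged boto SNS sketch - create a topic, subscribe an endpoint and publish (topic name and email address are illustrative):

import boto.sns

sns = boto.sns.connect_to_region('us-east-1')

# boto returns the parsed response as nested dicts
resp = sns.create_topic('cpu-alerts')
topic_arn = resp['CreateTopicResponse']['CreateTopicResult']['TopicArn']

sns.subscribe(topic_arn, 'email', 'ops@example.com')   # the endpoint must confirm the subscription
sns.publish(topic_arn, 'CPU > 80% on web-1', subject='CloudWatch alarm')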

- SNS - message create by Publisher Endpoint -> SNS Topic -> Subscriber ->
- When registering each mobile device as an endpoint, it receives a Device Token (APNS, Apple) or Registration ID (Google GCM, Amazon ADM)
1) Receive the Token / RegID from the notification service when the app is registered
2) Tokens are unique for each app / mobile device
3) Amazon SNS uses the token to create a mobile endpoint
4) Register the app with Amazon SNS by giving the App ID and credentials
5) Add the returned device tokens / RegIDs to create mobile endpoints
i) manually add ii) migrate from CSV iii) CreatePlatformEndpoint iv) register tokens from devices that will install your app in the future.

SNS Message Data
- message posted to subscriber endpoint - key/val pair in JSON format
- Signature - Base64/ 'SHA1withRSA' signature of message,
- SignatureVersion
- MessageId,
- Subject type,
- timestamp,
- Topic ARN for the topic this message was published to
- Type - General Notification
- SigningCertURL
- Unsubscribe URL

S3, SNS, Python Hands On, LOL Cats
- Eg a website where people upload an image and our app applies a filter.
- Store the original source file in Standard S3 - 11 9s durability
- don't store the filtered image there - put it in RRS (99.99%) - saves cost.
- SNS on S3: say one object in RRS is lost, AWS will send an SNS notification that the image is lost. We want to automate so that the image is re-processed and re-uploaded.
- Case Study
- EC2: create worker instances to poll the SQS message queue, apply the filter, upload to S3 (see the worker sketch after the install commands below)
--- SQS to deploy images - poll the queue and process messages
- SNS uses SQS as a subscription endpoint
- Create role - AWS service roles = Amazon EC2 (allows EC2 instances to call AWS services on our behalf)
- setup SQS - create new queue - the visibility timeout hides a message so a node has time to work on it, before other instances try to work on it.
- setup SNS - create new topic - choose the SQS endpoint to subscribe -
sudo yum install python-pip
sudo pip install boto
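
A hedged boto sketch of the worker loop: poll the queue, process the message, then delete it (queue name and the processing step are assumptions):

import time
import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')
queue = conn.get_queue('image-processing-queue')   # assumed queue name

while True:
    # Hide the message from other workers for 60 seconds while we process it
    msg = queue.read(visibility_timeout=60)
    if msg is None:
        time.sleep(5)
        continue
    body = msg.get_body()   # eg SNS JSON telling us where the uploaded image is stored
    # ... download the image from S3, apply the filter, upload the result ...
    queue.delete_message(msg)   # only delete once processing has succeeded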


Cloud Formation
- limit of 20 stacks; fill out a form to increase
- deploy resources through a template in JSON.
- eg create dev, test, staging
- able to version control, rollback, monitor changes,
- need to save as *.template
- Template has: Resources, Parameters, Outputs, Description, Mappings, AWSTemplateFormatVersion
- "Resources" - what resources in AWS to use, eg S3Bucket, type is AWS::S3::Bucket. Go to Template References to see properties
- Output: Fn::GetAtt->WebsiteURL or Fn::GetAtt->DomainName
- Parameters: Define an input for users to put bucket name.
Eg KeyName, VpcId, SubnetId, SSHLocation
- Mappings eg "RegionMap" : { "us-east-1" : {"AMI":"ami-13141"}}
- Can update stack, update template code, add resources without downtime.
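
A hedged sketch: a minimal template (one S3 bucket plus a WebsiteURL output) defined inline and launched with boto's CloudFormation client; the stack and resource names are illustrative:

import json
import boto.cloudformation

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal stack with one S3 bucket",
    "Resources": {
        "S3Bucket": {"Type": "AWS::S3::Bucket"}
    },
    "Outputs": {
        "WebsiteURL": {"Value": {"Fn::GetAtt": ["S3Bucket", "WebsiteURL"]}}
    }
}

cfn = boto.cloudformation.connect_to_region('us-east-1')
cfn.create_stack('demo-stack', template_body=json.dumps(template))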


SQS - scalable messaging system
- loosely coupled components: elasticity, scaling, layering, protection against loss of data.
- message size 256 KB
- guarantees a message will be delivered at least once (can have duplicate messages)
- delay queues - delay delivery, eg 30 secs; min 0, max 15 minutes
- Message retention period - time a message will live in the queue if not deleted; default is 4 days, min 1 minute, max 14 days
- Visibility timeout - seconds a message received from the queue is invisible to other components polling SQS, so that other instances don't try to work on a task while one instance is already working on it; min 0, max 12 hours, default 30 secs
- Receive Message Wait Time - if value > 0, this activates long polling. It's the max time that long polling will wait for a message, if there is no message, before returning empty.
- ACL - who can retrieve/send messages
- multiple writers/readers, ie multiple EC2 instances constantly polling the queue - allows auto scaling when needed.
- messages can carry instructions, eg tell the worker where an uploaded image is stored.
- Lifecycle: i) a component sends Message A, and SQS stores multiple copies of it. ii) another component retrieves a message from the queue and Message A is returned. Message A stays in the queue while being processed and is not returned to subsequent receive requests for the duration of the visibility timeout. iii) the second component deletes Message A from the queue.
- no downtime, High Availability, fault tolerance - visibility timeout.
- Short Polling (default) - queries a subset only, continuous poll needed to ensure every SQS server is polled. So get false empty responses sometimes.
- Long Polling - reduces empty responses, may wait until there is a message in queue before timeout. each polling is charged $$$. Long polling cheaper.
- SQS - guarantees at least ONE message arrive, but can be duplicate. NOT guarantee delivery order of messages.
- if need order, can use sequence in the instances.

SQS  Developer Requirements
- extend single message visibility timeout - ChangeMessageVisibility() - changes visibility of single message
- change a queue default visibility timeout, API-setQueueAttributes, VisibilityTimeout attribute
- enable long polling queue  API-SetQueueAttributes ReceiveMessageWaitTimeSeconds attrib
- enable delay queue - API-SetQueueAttributes  DelaySeconds attrib
- GetQueueAttributes(), ChangeMessageVisibilityBatch(), DeleteMessageBatch(), GetQueueURL()
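
A hedged boto sketch of those queue-level calls (attribute values are examples):

import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')
queue = conn.get_queue('image-processing-queue')   # assumed queue name

conn.set_queue_attribute(queue, 'VisibilityTimeout', 120)              # default visibility timeout
conn.set_queue_attribute(queue, 'ReceiveMessageWaitTimeSeconds', 20)   # enable long polling
conn.set_queue_attribute(queue, 'DelaySeconds', 30)                    # turn it into a delay queue

# Extend the visibility timeout of a single in-flight message
msg = queue.read()
if msg is not None:
    conn.change_message_visibility(queue, msg.receipt_handle, 300)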

AWS Documentation - AmazonSQS - API Reference -

SWF - Simple Workflow Service
- define a task-by-task workflow - code executes each task - a distributed service, so components are in pieces and scalable.
- applications can be in cloud or on-premise,
- workflow can consist of human events, last up to 1 year,
- Domains - determine scope of workflow, multiple workflows can live in one domain, workflows cannot interact with workflows in other domain
- Workers and Decider - activity worker perform activity - worker poll to see if there are tasks to do. After doing task, will report to SWF.
- Activity task do something
- Decision task - occurs when state has changed - tells decider state of workflow has changed. let decider choose what is next.
- SQS can deliver duplicate tasks. A workflow like video transcoding where order is important CANNOT use SQS. Need to use SWF
- SQS/SWF similarity: distributed system, scaled,
- SQS has best effort and duplicate, order not guaranteed, messages live up to 14 days
- SWF guarantees order, can have human task, task up to 1 year, allow asynchronous and synchronous process.

EC2
- instances are launched in a VPC - allows provisioning your own cloud area - internal static IP addresses, build private subnets, secure routing between instances, network layer protocols, etc....
- a VPC spans multiple Availability Zones; each subnet lives in a single AZ
- create a new key-pair, then download it, then launch the instance
- can share an AMI with other users, or make it public - can be used in different regions
- can copy an AMI from one region to another.
- EBS-backed vs instance store. EBS-backed is stored on a storage device, ie maintains state/data; shows up under VOLUMES. Instance store uses temporary storage; the instance can be rebuilt, but if stopped then changes are not saved.
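
A hedged boto sketch of launching an instance (the AMI id, key pair and security group names are placeholders):

import boto.ec2

ec2 = boto.ec2.connect_to_region('us-east-1')
reservation = ec2.run_instances('ami-xxxxxxxx',            # placeholder AMI id
                                key_name='my-key-pair',
                                instance_type='t1.micro',
                                security_groups=['my-security-group'])
instance = reservation.instances[0]
print(instance.id, instance.state)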

EC2 Classic
- virtual servers in Cloud.
- SpotInstances - bid for unused EC2 instances, not for critical service
- Reserved Instance - pay down EC2 price and guarantee compute time in AZ
- On Demand instances - hourly price, no upfront cost,
- Service Limits - 20 instances per account, 5 EIP Elastic IP Addresses
- S3 simple storage Service
- Instance store volumes (virtual devices), attached to actual hardware (like USB), data is gone when instance is stopped.
- EBS volumes (remote elastic block storage device), attached to network storage, root volume in /dev/sda1
--- attached to only 1 instance, min 1GB, max 1TB
--- Pre-warming touches every single block of a new or snapshot-restored volume so that the first access to each block is not slow after attaching the volume.
--- Snapshots are incremental
- IP addresses - each instance has a public IP address, a public CNAME, and a private IP
- IOPS - I/O operations per sec - measured in 16 KB chunks; provisioned IOPS x 16 KB / 1024 = MB transferred/sec
- ELB load balancers - distributes traffic, stops serving traffic to unhealthy instances, store SSL certificates.

Maintain session state on ELB
- by default session state is file based on each instance. With multiple load-balanced instances, a user who leaves and comes back may be sent to another instance and lose the session.
- Solution 1 - enable stickiness - duration-based stickiness with an ELB-generated cookie.
- Solution 2 - application-controlled stickiness - the ELB issues a cookie to associate the session with the original server, following the application's own session cookie.
- ElastiCache - let the ELB balance and distribute across EC2 as usual, and maintain session state in a DB or in memory with Memcached/ElastiCache

VPC
- allows you to define a network in AWS that resembles a traditional network
- like on-premise, has internal IP addresses (private network).
- private network, public/private subnets
- can define custom IP address ranges inside each subnet.
- can configure route tables between subnets, configure gateways and attach them to subnets
- able to extend corporate/home/on-premise networks to the cloud as if part of your network
- NAT allows instances within a private subnet to initiate outbound connections (eg to download updates) while remaining unreachable from the internet
- VPN to the cloud: extend the home network to the cloud with a VPN - VPG with an IPsec VPN tunnel, layered security, within a private subnet
- Default VPC - makes instances look like EC2 Classic; they have a public IP/subnet, all pre-configured, an internet gateway is connected
- non-default VPC - instances have a private but not a public IP address. Subnets will not have a gateway attached by default. Connect by Elastic IP, NAT or VPN
- VPC peering - allows setting up a direct network route; instances can access each other with private IP addresses, as if on the same private network.
- Peering with two VPCs - multiple VPCs connected as if in one private network. Peering TO a VPC - multiple VPCs connect to a central VPC but not to each other.
- Limits: 5 VPCs per region, number of virtual private gateways = number of VPCs, 5 internet gateways (can request more), 1 internet gateway attached to a VPC at a time.
- 50 customer gateways per region, 50 VPN connections/region, 200 route tables/region, 5 Elastic IPs, 100 security groups, 50 rules per security group.


Building a Non-Default VPC
- VPC = network, subnet is inside a specific VPC
- Create Private Subnets
- CIDR - Classless Inter-Domain Routing, eg 10.0.0.0/16 allows up to 256 /24 subnets
Tenancy - Default (Shared) or Dedicated (Single Tenant) - This tenancy option takes preference over tenancy selected at instance launch.
- Subnet - has its own CIDR range, think about multi region availability,
eg 1st subnet: 10.0.1.0/24 -> us-east-1a
eg 2nd subnet: 10.0.2.0/24 -> us-east-1b  Use load balancer between 1st and 2nd subnet
The VPC automatically allows subnets to communicate with each other.
--- cannot connect to these from the outside, no internet gateway yet, cannot send/receive to internet.
--- if create NAT - can download patches, but cannot serve outside, and cannot connect to instance from outside.
--- create gateway to public subnet, launch instance inside public subnet, attach elasticIP, then connect to private instance in private subnet.
--- or create VPN - use OpenVPN or Amazon VPN

Route Table for 10.0.0.0/16
- all subnets are routed so that traffic can reach each one.
- one route table is assigned to a subnet at a time.

Internet Gateway
- one gateway attached to default VPC
- to communicate outside, launch instance into a subnet with internet gateway attached AND NEED to attach ElasticIP or in ELB group.
- attach Gateway to VPC (subnet still private)
- assign gateway to a route, then change route on public subnet.
- goto route table, choose a ROUTE, add gateway to this route, 0.0.0.0/0 <-> gateway.
- then attach route to subnet, make 10.0.1.0 to be public, goto SUBNET, select Route table, choose the Route #2 created above.
Route #2 has
Destinations   Target
10.0.0.0/16    local  
0.0.0.0/0      gateway
Route #1 (default)
10.0.0.0/16    local  
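
A hedged boto VPC sketch of the build described above - create the VPC, a public subnet, an internet gateway, and a route table with a 0.0.0.0/0 route (CIDR blocks and AZ are the example values):

import boto.vpc

c = boto.vpc.connect_to_region('us-east-1')

vpc = c.create_vpc('10.0.0.0/16')
public_subnet = c.create_subnet(vpc.id, '10.0.1.0/24', availability_zone='us-east-1a')

igw = c.create_internet_gateway()
c.attach_internet_gateway(igw.id, vpc.id)

# Route #2: default route to the internet gateway, associated with the public subnet
rt = c.create_route_table(vpc.id)
c.create_route(rt.id, '0.0.0.0/0', gateway_id=igw.id)
c.associate_route_table(rt.id, public_subnet.id)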

Elastic IP address
- can be attached to any instance, but if the instance is not in a subnet with an internet gateway, then you CANNOT connect to it.

Security
Network ACLs - on network level- protects subnet
Security Group - on instance level

- Public Subnets
- attach a gateway to it


VPC Security
Router VPC - 10.0.0.0/16
- has Virtual Priv Gateway
- has Internet Gateway
- has TWO Routing table
Routing Table -> Network ACL -> Subnet 10.0.0.0/24 -> Security Group -> Instances
Network ACL = Firewall protecting Subnets, can deny DDOS with rule
- STATELESS -> return traffic (outbound) must be explicitly specified, eg port 80
- Explicit DENY OVERRULES ALLOW
- Rules are evaluated from low number to high.
- LAST RULE: * 0.0.0.0/0 DENY

Security Group = Firewall protecting Instances
- STATEFUL -> outbound allowed automatically
- Instance can belong to multiple Security Group


Create VPC NAT Instance
- private instance is still protected
- one private, one public instance
- Create a Security Group called NATsec, launch this inside VPC
- for every private subnet, need to add its CIDR to NATsec,
eg INBOUND 10.0.3.0/24, OUTBOUND 0.0.0.0/0
- Create an instance from the Amazon Linux NAT AMI, create it in the VPC, in the public subnet, and select the NATsec security group.
- associate an Elastic IP with the new NAT instance. This will communicate on behalf of the private subnet.
- Right click the NAT instance - disable "Change Source/Dest Check"
- go to Route Tables - go to the route table that is associated with the private subnet, enter Destination=0.0.0.0/0, target = the NAT instance id.
- In Route tables - check subnet association to attach to private subnet


VPC Networking
- VPN - don't need an Internet Gateway.
- the VPN goes through the internet - use a Virtual Private Gateway instead of an Internet Gateway
- create a Virtual Private Gateway - then connect the VPN to it on the cloud side
- On the customer side, connect the VPN to a Customer Gateway.
- if using OpenVPN, the setup may be different from a Customer Gateway.


Elastic IP Addresses and Elastic Network Interfaces
- can attach a private IP to an Elastic IP
- create an Elastic Network Interface (ENI) - this gives a new private IP - can attach to any instance, can attach/detach, and can have an Elastic IP attached to it.
- reasons: creating HA architecture, etc
- instances automatically have a primary private IP
- allows reassociation
- when an instance stops, the EIP stays attached because the EIP is attached to the ENI, which belongs to the VPC


Create a WIKI in VPC
- create wikiVPC - CIDR 10.0.0.0/16
- need public x2 (to use load balancer) and private subnets x2
- subnet public1 CIDR 10.0.0.0/24 -> us-east1b -> apache
- subnet public2 CIDR 10.0.1.0/24 -> us-east1c
- subnet private1 CIDR 10.0.3.0/24 -> us-east1b -> DB, failover, redundancy
- subnet private2 CIDR 10.0.4.0/24 -> us-east1c
- RouteTables - create route wikiPublic
- Internet Gateway - create wikiGateway -> attach to wikiVPC
- Route Tables, for wikiPublic routeTable , add route 0.0.0.0/0 - wikiGateway
- Route Tables -> Subnet associations -> 4 subnets to choose from, select public1, public2 subnets
- Go to RDS Dashboard -> Subnet Groups. Has default VPC associated.
--- Need to create DB subnet group - attach to wikiVPC .
--- add subnets - only private subnet i)private1, ii) private2
--- Go to Instances -> launch DB instance, enable Multi-AZ (Availability Zone) for a failover DB. On failover, DNS is switched to the other zone. Data is replicated.
- Goto EC2 -
- launch instance, call it websetup - choose wikiVPC, choose public1, create security group wikiSecGroup (add HTTP rule source=0.0.0.0/0)
- launch instance, call it amisetup - choose wikiVPC, choose public1, use security group wikiSecGroup (add HTTP rule source=0.0.0.0/0, add SSH rule source=0.0.0.0/0) - create a key pair
- go to Elastic IP - create new one - associate with instance amisetup
- change permissions on the PEM file to 400 or 600
- download php/apache - sudo apt-get .....
- go to Route53 - add Domain name with nameservers pointed to delegation set. Create new record set - add public address,
- go to RDS -> Security Group -> Inbound All traffic choose wiki-app security group - outbound traffic choose wiki-app security group
- Instances -> create Image of amisetup for HA, called wiki-ami
- Load Balancers - create Load Balance -> use wikiVPC -> protocol = http/80 -> select public1 , public2 -> use wikiapp Security Group
- Instances - Terminate Instance (now that the amisetup instance image is completed)
- goto AutoScaling - create auto scaling group -> new launch config -> select image MyAMIs- wiki-ami -> call this wiki-as -> launch in Security group called wiki-app (wikiSecgroup) ->
- Create Auto Scaling Group - name wiki-group - 2 instances needed -> use wikiVPC - add public1, public2 -> use wiki Load Balancer
- Scaling policy between 2 and 4, integrate with CloudWatch -> create Alarm CPU utilization > 50% for 1 minute, then scale up 50% of group, Use Tags as Name:wiki-as
- Self-Healing - if one instance is terminated, after 60 seconds a replacement instance will relaunch



AWS compliance:
PCI DSS, SOC, IRAP, ISO 9001, ISO 27001, MTCS, HIPAA, FERPA, ITAR, FedRAMP, DIACAP, FISMA, NIST, CJIS, FIPS, DoD CSM, G-Cloud, IT-Grundschutz, MPAA, CSA

DynamoDB:
- Actions: BatchGetItem, BatchWriteItem CreateTable DeleteItem DeleteTable DescribeTable GetItem ListTables PutItem Query Scan UpdateItem UpdateTable
- Item collection - If an item collection exceeds the 10 GB limit, DynamoDB will return an ItemCollectionSizeLimitExceededException and you won't be able to add more items to the item collection or increase the sizes of items that are in the item collection.
-- uses optimistic concurrency control, uses conditional writes
- NO CROSS JOIN support
- Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
- Global secondary index — an index with a hash or a hash-and-range key that can be different from those on the table. A global secondary index is considered "global" because queries on the index can span all items in a table, across all partitions.
- scan operations return data in 1 MB increments



EC2 Instances
-- an AMI with a Product Code cannot be made public
-- an AMI can only be launched in the same region as where the AMI is stored.
- limit of 20 EC2 instances per account

EBS Volumes
-- in stopped state, EBS vol can be attached/detached
-- when the instance is terminated, the root volume is deleted by default
-- charged for the volume and instance usage, in addition to the AMI

CloudTrail
-- captures API calls made to the SQS API from the Console or from direct API calls, and delivers the logs to an S3 bucket.
-- from CloudTrail, can determine what SQS request was made, the source IP, who requested it, when, etc

IAM
- AWS temporary credentials associated with an IAM role are rotated many times a day
- Cannot change the IAM role of a running EC2 instance, but can change the role's permissions, and they take effect immediately.
- IAM roles can have up to 250 policies; if more are needed, fill out a form to AWS.

SQS
-- can view messages that are Visible and NotVisible
-- valid identifiers for queue and messages are: QueueURL, MessageID, ReceiptHandle
-- SQSBufferedAsyncClient - prefetches messages into a local buffer; automatic batching of SendMessage / DeleteMessage
-- in the message, the IP address of the sender is given by SenderId
- DLQ (dead letter queue) - an SQS queue that can be set up to receive messages from other queues whose messages have exceeded their maximum number of processing attempts

S3
- multipart upload - can stop and resume uploads; can start uploading while the file is still being created.

EC2 - Relational Database AMIs
- stores data in EBS - fast, reliable, persistent
- avoid the friction of infrastructure provisioning while gaining access to standard DB engines
- enables complete control over the administration and tuning of the DB server

VPC
- allow up to 200 subnets in a VPC.
- allow 5 Virtual private gateways per region

SWF, IAM, RDS



Saturday, September 12, 2015

How to check Wifi Usage - if anyone is using it?

This post shows a collection of free software that allows one to check network traffic, from the point of view of monitoring data usage. In particular, a home may be on an ADSL subscription, and there is a need to check that no external users are stealing the broadband.

Quite a few programs for monitoring network traffic are mentioned on this site.
https://community.spiceworks.com/topic/90212-monitoring-traffic-usage-on-my-home-adsl

Glasswire
https://www.glasswire.com/
This is also a FIREWALL. There is a free and paid version.
"GlassWire's firewall software reveals threats your antivirus missed"

BitMeter OS
https://codebox.org.uk/pages/bitmeteros
"BitMeter OS is a free, open-source, bandwidth monitor that works on Windows, Linux and Mac OSX. BitMeter OS keeps track of how much you use your internet/network connection, and allows you to view this information either via a web browser, or by using the command line tools. "

“PRTG: Finally There Is a Network Monitoring Software That Is Powerful And Easy To Use!”
http://www.paessler.com/prtg
"PRTG Network Monitor runs on a Windows machine within your network, collecting various statistics from the machines, software, and devices which you designate. (It can also autodiscover them, helping you map out your network.) It also retains the data so you can see historical performance, helping you react to changes."

RSA Netwitness Investigator
https://isc.sans.edu/forums/diary/An+Introduction+to+RSA+Netwitness+Investigator/18199/
"In many cases using Wireshark to do a network forensics is a very difficult task especially if you need to extract files from a pcap file.
Using tools such as RSA Netwitness Investigator can make network forensics much easier. RSA Netwitness Investigator is available as freeware."

ntopng - High-Speed Web-based Traffic Analysis and Flow Collection.
http://www.ntop.org/products/traffic-analysis/ntop/
ntopng is the next generation version of the original ntop, a network traffic probe that shows the network usage, similar to what the popular top Unix command does. ntopng is based on libpcap and it has been written in a portable way in order to virtually run on every Unix platform, MacOSX and on Windows as well.


TbbMeter
http://www.thinkbroadband.com/tbbmeter.html
"tbbMeter is a bandwidth meter we have developed to help you monitor your Internet usage. It allows you to see how much your computer is sending to and receiving from the Internet in real time."

This seems to monitor traffic on the computer on which it is installed only.

Sunday, August 16, 2015

How To Check Wifi Strength

Below are some tools that will assist in checking wifi strength.



Desktop Software

WifiInfoView by NirSoft 
"WifiInfoView scans the wireless networks in your area and displays extensive information about them, including: Network Name (SSID), MAC Address, ...."

"Wireshark is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, ...."


Smartphone Apps
Wifi Analyzer for Android
The advantage of having such a smartphone app is that it is easier to carry the smartphone around the premises and analyse the wifi signal strength than to carry a laptop around.

Wifi Solver FDTD
$0.99 to buy (as of 17 Aug 2015)
This app is supposed to be based on a seriously advanced computational electromagnetics solver (FDTD). The downside is that the app is not free.

Sunday, August 09, 2015

News - Windows 10

Windows 10

Here are some articles and links to things associated with Windows 10


----------------------

How to remove the Windows 10 GWX upgrade nonsense

Basically it is a 3 step process; it involves going to c:\windows\system32\GWX
1. Right-click on GWX, open Properties and change ownership to a local admin user
2. Right-click on GWX, open Properties and change the permissions of the folder
3. Then rename the GWX folder and its exe into something else.
----------------------


// Lifehacker

With Windows 10, settings are split between the Control Panel and the Settings app. If you’d like an all-inclusive starting point for Windows commands, enable God Mode. This trick is not new, and it’s alive and well with Windows 10.

To enable it, you do the same thing as previously: Create a new folder on your Windows desktop (New > Folder) and save it with the text below:
GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}
When you open that folder, you’ll have god-like access to 260+ functions and tools, some of them different from previous Windows versions. You can also drag and drop any of the commands to your desktop to create a shortcut for the command, which is especially helpful since the settings that have moved to the Settings app, while pinnable to the Start screen, aren’t able to be drag-and-dropped into shortcuts.

Sunday, June 21, 2015

Privacy for Linkedin

A few tips regarding privacy on using LinkedIn.

1. Click on your little photo on the Top-Right of LinkedIn page (after you have signed in)

2. When the Drop Down menu appears, choose "Privacy and Settings"
The bottom half of the page will then present four tabs on the left side:
- Profile
- Communications
- Groups, Companies & Applications
- Accounts

Choose Profile



3. Stealth Mode 
A.
After choosing Profile, then under the Privacy Control, choose:
"Turn on/off activity broadcasts"

This appears to be the best option if you are looking for a job, updating your profile, and don't want your connections to know about your activity.

B.
After choosing Profile, then under the Privacy Control, choose:
"Select who can see your activity feed".

I think this also needs to be controlled to be in stealth mode. Choose the "Only you" option.

"Your activity feed displays actions you've performed on LinkedIn. Select who can see your activity feed."




4. Blocking
After choosing Profile, then under the Privacy Control, choose:
"Manage who you are blocking"

Be careful with this because this will break your connection with another person. Although they are not notified, they will know that you are no longer connected if they check their connections list.

"Need to block or report someone? Go to the profile of the person you want to block and select "Block or Report" from the drop-down menu at the top of the profile summary. Note: After you’ve blocked the person, any previous profile views of yours and of the other person will disappear from each of your "Who’s Viewed My Profile" sections.
"

5. Manage your connections 
After choosing Profile, then under the Privacy Control, choose:
"Select who can see your connections."

Not sure how useful this is. The tip says:
"Note: people will still be able to see connections who endorse you and connections they share with you. (Don't want your endorsements visible? Just choose to opt out.)"




Hiding LinkedIn Job Hunting Activity from Your Connections
http://joehertvik.com/hiding-job-hunting-activity-linkedin/


How to Make LinkedIn Updates Without Showing It in the Stream
http://smallbusiness.chron.com/make-linkedin-updates-showing-stream-30625.html


Showing or Hiding Activity Updates About You
How do I control the updates I broadcast about myself?
https://help.linkedin.com/app/answers/detail/a_id/78/~/showing-or-hiding-activity-updates-about-you


5 LinkedIn Privacy Settings You Need to Know
http://www.cio.com/article/2413757/social-media/5-linkedin-privacy-settings-you-need-to-know.html

5 LinkedIn Privacy Settings For Job Hunters
http://www.networkcomputing.com/networking/5-linkedin-privacy-settings-for-job-hunters/d/d-id/1111108?

LinkedIn Best Practices
https://help.linkedin.com/app/answers/detail/a_id/267/~/account-security-and-privacy---best-practices
Manage Account Settings
https://help.linkedin.com/app/answers/detail/a_id/66/~/managing-account-settings



Saturday, April 04, 2015

Security - USB Antivirus, Scanners, Immunizers

Here is a list of possible software to guard your PC against an infected USB key flash drive

Panda USB Vaccine
Immunizes both the PC and the USB key.

Bit Defender USB Immunizer
"The USB Immunizer replaces any autorun file on the drive with a special one that can’t be deleted or modified by malware anymore. If you plan to to use your own autorun.inf file, then we’d recommend that you don’t immunize the drive, as you’ll lose your original file. Unfortunately, this is the only available approach to mitigate the effect of autorun malware.
"
Gaijin USB Write Protector
Make the USB key write protected. Does not need to install software.

Autorun Disable

USB Guardian
Isolates the autorun.inf file and prevents access to it.
Parses the autorun.inf to look for other executables, then locks those executables too.
The user can run any other files which are not locked.

USB Defender Mbentefor Projects

USB Flash Security
Protects the contents of the USB key with a password.

TrendMicro USB Security
Not Free. $19.95

One way malware can make use of a USB key to transfer malicious code is to modify the autorun.inf found on the USB drive so that it automatically runs the malware. Knowing this, one way to solve this problem would be to make the autorun.inf unmodifiable.

Sunday, February 01, 2015

Uninstall - Trend Micro Titanium Antivirus Security Suite

Uninstalling the Trend Micro Titanium Security suite has not been a pleasant experience for me. The usual techniques below don't work.

  1. Uninstall by going to "Programs and Features"
  2. Stopping Trend Micro services from the "Services". Even "Run As Administrator" would not allow the service to be stopped.

At the moment I have found a few useful links from other sites such as:
  • http://esupport.trendmicro.com.au/solution/en-us/1037161.aspx
  • http://esupport.trendmicro.com.au/solution/en-us/1058879.aspx
  • http://esupport.trendmicro.com/solution/en-US/1059018.aspx
  • http://forums.cnet.com/7723-6132_102-552546/trend-micro-security-won-t-uninstall/
One trick that seemed to work, thanks to the links above, is to go to the START button on Windows (sorry Windows 8),
- look under All Programs 
- look for Trend Micro folder
- go to something like More Tools and Help
- look for something like: Trend Micro Diagnostic Toolkit - run this.



Group Feature Selection - a Binary Way


Suppose there are T distinct types and each of these types is labelled as 1,2,3,4,....,T. Let any of these types be represented as t.
And suppose we want to have various groups and each group is a combination of types. There are G groups in total and each of these groups can be labelled as g = 1,2,3,....G.

Each group can be composed of types as the following example:
g=1 has types  {2,3}
g=2 has types {1,2,3}
g=3 has types {1,2}

Then suppose that for each group g, it is required to randomly select a type which belong to the group.
Here is a method that is designed to be general and, although it may not be the most efficient, should be efficient enough for small T.
The types are represented as binary as follows. Example with T=3
t=1 -> 001
t=2 -> 010
t=3 -> 100

The groups can be represented by combining the binary representation for the types. So using the example of the groups above, if g=1 has types {2,3}, then the group is the binary sum of the types:
   g=1 -> 010 +100 = 110
   g=2 -> 001 + 010 +100 = 111
   g=3 -> 001 + 010 = 011

Now for a particular group, it is required to select types which belong to that group. However for general efficiency, it is desired NOT to distinguish between any groups while selecting the random types. So there will be some wastage in this procedure, but it should be fast.

So here is the general random selection process to select any type t from 1....T. Let r be a random integer number from 1....T. Construct a binary representation of the random number, such that: b = 2^(r-1).
The table below shows the representation for various random numbers:
r=1: b=2^0=1 -> 001
r=2: b=2^1=2 -> 010
r=3: b=2^2=4 -> 100

To apply the random selection, simply apply the AND operator on b and g. So using the example of g=1 -> 110, the combination with each of the random draws would yield the following:
g->110 AND b->001 => 000 no type is selected
g->110 AND b->010 => 010 type 2 is selected
g->110 AND b->100 => 100 type 3 is selected

This algorithm can of course be made more efficient, but that is left for another bright person to accomplish. The present algorithm is efficient already due to use of simple random integers and binary operations and avoiding the use of decision logic like if-else.
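
A minimal Python sketch of this procedure (T and the example group are taken from above; function names are illustrative):

import random

T = 3   # number of distinct types

def type_bit(t):
    # Binary representation of type t (1-indexed): 001, 010, 100, ...
    return 1 << (t - 1)

def group_mask(types):
    # Combine the type bits of a group into a single bit mask
    mask = 0
    for t in types:
        mask |= type_bit(t)
    return mask

def random_type_in_group(g_mask):
    # Keep drawing random types until one belongs to the group;
    # some draws are wasted, but no group-specific decision logic is needed
    while True:
        r = random.randint(1, T)
        b = type_bit(r)
        if g_mask & b:      # AND operator: non-zero means the type is in the group
            return r

g1 = group_mask([2, 3])          # 110
print(random_type_in_group(g1))  # prints 2 or 3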

Monday, January 26, 2015

Where to get free graphics

Graphics or pictures or images are very useful when we want to illustrate something on our webpages, blogs or other work. It is very easy, but sometimes not right to simply search for images on the internet and use them without permission. There are however, places on the internet that allow pictures to be used for free.

The aim of this post is to put up a list of sites that allow users to use the images for free.
1. Pixabay.com
2. Images under Creative Commons licence.

Friday, January 16, 2015

How to Check for Spyware, Malware Infection

This article will present various ways to check whether your PC is infected by malware, viruses and other nasties. Firstly, here is a collection of other related posts on this blog; I will show the links here instead of writing the same material again. The new material will be after that.

How to Check for Botnets

Security - Protection against Botnets

Windows Server 2008 - Firewall, Antivirus, AntiSpyware

Security Software Review, Discounts and Special Deals

Links to Smartphone Mobile Security Software

Malware: Windows Enterprise Defender and Windows Diagnostics

Online Scan - PC tools

Online Scan - Websites

Online Scanning websites and links for virus, malware, spyware

Online Scan - AntiVirus

Now for some new stuff. Some tools that can be downloaded and used are:


Google and download: TDSSKiller
- Start tdsskiller - Run as Administrator.
- Click on Change parameters.
- Check all options except "Loaded modules" and click OK.
- Then click on Start scan.
- When threats are found, choose the Skip option for all of them, instead of deleting.
- To open the log file, click on Report. Or it can be found in :\TDSSKiller.<version_date_time>_log.txt


Google and download: Farbar Recovery Scan Tool.
- Start FRST - Run as Administrator.
- Select the option Addition.txt as well as others selected by default. Press the Scan button.
- FRST will create two logs - FRST.txt and Addition.txt - in the same directory the tool was run from.

Some list of tools and other advice can be found on:
https://www.makeuseof.com/tag/download-operation-cleanup-complete-malware-removal-guide/

Monday, January 12, 2015

How To Uninstall Pokemon Trading Card Game

The Pokemon Trading Card Game is not so easily removed.

It may happen that your beloved one installed this Pokemon Trading Card Game without your awareness. The first thing any Windows user would do nowadays would be to go to the:
Control Panel -> Programs and Features -> Uninstall -> Pokemon Trading Card Game.
..... or something similar depending on the version of Windows XP/Vista/7/8/etc

Error: a ''property.USER_profile'' error

However, some may encounter an error message above during uninstallation. There are several options.


Two of the recommended tools are:

Revo Uninstaller
CCleaner
See http://xtechnotes.blogspot.com.au/2011/07/links-to-free-software.html for the above and similar tools


The way without using any software is the following.
1. Go to Control Panel -> Programs and Features
2. Scroll down the list of programs until Pokemon Trading Card Game is found.
3. Do not click the uninstall button above. Instead, on the line item that says "Pokemon Trading Card Game", right-click and then choose Change or Repair or something similar.
4. Then when a Window pop-up, click on the Uninstall button.
5. This uninstall should complete properly. If not, check out the software mentioned above. The other alternative is to boot into Windows Safe Mode and uninstall there.




Saturday, January 03, 2015

Migrating WordPress Blog

This post is also posted at http://travel4work.wordpress.com/
Migrating WordPress from <yoursite>.wordpress.com – which is a free service – to another site that you are paying for such as <yourdomain>.com/<yourNewWordPressHome> is very easy and it is all done through WordPress.
In this situation, my new website has Fantastico and it allow WordPress to be installed on my domain very easily – few clicks of buttons. Now assume both <yoursite>.wordpress.com and <yourdomain>.com/<yourNewWordPressHome> are running, here are the instructions.
1. Go to <yoursite>.wordpress.com and login there.
2. Go to the classic Dashboard.
3. On the left panel, go to Tools – Export.
4. Choose to export “All Content” and when asked, save the XML file to your PC.
This XML file presumably contains all the data needed for the two WordPress installations to communicate.
There are lots of pictures in my original WordPress site, but the XML file is only just over 100 kB, so obviously the image contents are not being saved in the export, ie not needed.
1. Now go to the new <yourdomain>.com/<yourNewWordPressHome> site and login.
2. Go to the classic Dashboard.
3. On the left panel, go to Tools – Import.
4. At the Import page, choose the WordPress option, and you will be asked to install the plugin called WordPress Installer. Go through with this.
5. Then click on Activate Plugin and Run Importer.
6. Click the button to choose the XML file to import from.
7. In the Import WordPress page, choose Import Author and “Download and import file attachments”.
8. You can now go back to the original site and just manually check the old settings, and apply the same settings to the new website. Some example of this to consider are:
a) Appearance – Themes – search for the same previous theme using the search bar and install
b) Appearance – Widgets – Text – Add your text, including Google Adsense code.