Second Attempt...

Occasionally, as I come across interesting Oracle Database related issues, I'll post my thoughts, opinions, and whatever else comes to mind; perhaps others will find it interesting or useful as well.

Let the fun begin …


One needs to understand how Oracle works in order to use it safely.

----------------------------------------------------------------------
Jan-2017.

This is my second blogging attempt. I originally created the Oracle Blog to write about Oracle and my learning and findings in the Oracle world.

But as time moved on, I started working on Big Data analytics, DevOps, and cloud computing. So here is my second attempt at blogging, covering cloud computing (AWS, Google Cloud), Big Data technologies and concepts, and DevOps tools.

Friday, September 1, 2017

Amazon AWS Certified !!!

Today I passed the AWS Certified Developer - Associate exam with 92%!

Wow. I have been working on AWS for the past year, mainly using a couple of AWS services like EC2, S3, and IAM roles, while my friends were using Alexa, Lambda, and CloudFormation.

So I decided to learn AWS in depth: I started studying the other AWS services on August 1st and appeared for the exam today, September 1st.

I learned the following topics from the Amazon site:

  • Amazon EC2
  • IAM
  • S3
  • CloudFormation
  • Elastic Beanstalk
  • VPC
Then a friend suggested taking an online course from acloudguru.com, which is more exam oriented.

I learned the following topics from Ryan (acloudguru):
  • SNS
  • SQS
  • SWF
  • DynamoDB
  • EBS 
  • CloudWatch
I also took the practice tests from AWS and acloudguru, and read all the FAQs thoroughly.

My certification notes:

Amazon EC2:

EC2 is a web service that provides resizable compute capacity in the cloud.
The EC2 SLA promises 99.95% availability.



Amazon S3


Object storage: objects from 0 bytes to 5 TB
Universal namespace
Unlimited total storage
Largest object that can be uploaded in a single PUT is 5 GB
Multipart Upload for objects from 5 GB up to 5 TB; recommended for objects of 100 MB and above (see the sketch below)
Read-after-write consistency for PUTs of new objects
Eventual consistency for overwrite PUTs and DELETEs
Default limit of 100 S3 buckets per account
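
As a minimal sketch of that multipart flow in Python with boto3 (the bucket, key, and file names are made up for illustration), the low-level client exposes create_multipart_upload, upload_part, and complete_multipart_upload:

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "my-bucket", "big-object"  # hypothetical names

    # Step 1: start the upload and remember its UploadId.
    upload = s3.create_multipart_upload(Bucket=bucket, Key=key)

    # Step 2: send the parts; every part except the last must be at least 5 MB.
    parts = []
    with open("big-file.bin", "rb") as f:
        part_number = 1
        while True:
            chunk = f.read(8 * 1024 * 1024)  # 8 MB parts
            if not chunk:
                break
            resp = s3.upload_part(
                Bucket=bucket, Key=key, PartNumber=part_number,
                UploadId=upload["UploadId"], Body=chunk,
            )
            parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
            part_number += 1

    # Step 3: stitch the parts together into the final object.
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload["UploadId"],
        MultipartUpload={"Parts": parts},
    )

In practice the high-level s3.upload_file call switches to multipart automatically once a file crosses a size threshold, so the low-level calls matter mainly when you need control over individual parts.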

Storage classes
S3 Standard: 11 9s (99.999999999%) durability, 99.99% availability
S3 IA (Infrequent Access): 99.9% availability
RRS (Reduced Redundancy Storage): 99.99% durability, 99.99% availability



DynamoDB:

Eventually consistent reads
Strongly consistent reads
Amazon writes data synchronously to three different facilities, giving you high availability
Primary key
Partition key (hash key): a single attribute
Composite key (partition key + sort key): two attributes
Secondary indexes
Local secondary index: same partition key + different sort key. Can only be created at table creation time and cannot be deleted or modified
Global secondary index: different partition key + different sort key. Can be added or deleted later (see the create_table sketch below)
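
To make that vocabulary concrete, here is a minimal create_table sketch in Python with boto3; the table name Orders and all of its attributes are hypothetical:

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="Orders",
        AttributeDefinitions=[
            {"AttributeName": "CustomerId", "AttributeType": "S"},
            {"AttributeName": "OrderDate", "AttributeType": "S"},
            {"AttributeName": "OrderStatus", "AttributeType": "S"},
        ],
        # Composite primary key: partition key + sort key.
        KeySchema=[
            {"AttributeName": "CustomerId", "KeyType": "HASH"},
            {"AttributeName": "OrderDate", "KeyType": "RANGE"},
        ],
        # Global secondary index: a different partition key, queryable
        # like a separate table.
        GlobalSecondaryIndexes=[{
            "IndexName": "StatusIndex",
            "KeySchema": [
                {"AttributeName": "OrderStatus", "KeyType": "HASH"},
                {"AttributeName": "OrderDate", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
            "ProvisionedThroughput": {"ReadCapacityUnits": 5,
                                      "WriteCapacityUnits": 5},
        }],
        ProvisionedThroughput={"ReadCapacityUnits": 5,
                               "WriteCapacityUnits": 5},
    )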
DynamoDB Streams: capture any kind of modification made to DynamoDB tables

Query Vs Scan
Query finds items in a table using primary key attribute values
Scan reads the entire table; you can use the ProjectionExpression parameter to return only selected attributes (see the sketch below)
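
A minimal sketch of the difference, reusing the hypothetical Orders table from above:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Query: targets a single partition via the primary key, so it only
    # reads the matching items.
    resp = dynamodb.query(
        TableName="Orders",
        KeyConditionExpression="CustomerId = :c",
        ExpressionAttributeValues={":c": {"S": "cust-123"}},
    )

    # Scan: reads the whole table; ProjectionExpression only trims which
    # attributes come back, not how much data is read.
    resp = dynamodb.scan(
        TableName="Orders",
        ProjectionExpression="CustomerId, OrderDate",
    )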
BatchGetItem API: reads multiple items; can get up to 100 items or up to 1 MB of data
When you exceed the maximum provisioned throughput for a table or one or more global secondary indexes, you get a 400 HTTP status code: ProvisionedThroughputExceededException
Only the Tables limit (256 tables per region) and the ProvisionedThroughput limits (80K read / 80K write per account in US East, 20K in other regions) can be increased

Throughput Capacity for Reads and Writes Calculations
One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. If you need to read an item that is larger than 4 KB, DynamoDB will need to consume additional read capacity units.
One write capacity unit represents one write per second for an item up to 1 KB in size. If you need to write an item that is larger than 1 KB, DynamoDB will need to consume additional write capacity units. 
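
A small self-contained Python sketch of that arithmetic (these helper functions are my own, not an AWS API):

    import math

    def read_capacity_units(item_kb, reads_per_sec, strongly_consistent=True):
        # One unit covers one strongly consistent 4 KB read per second;
        # eventually consistent reads need half as many units.
        units = math.ceil(item_kb / 4) * reads_per_sec
        return units if strongly_consistent else math.ceil(units / 2)

    def write_capacity_units(item_kb, writes_per_sec):
        # One unit covers one 1 KB write per second.
        return math.ceil(item_kb) * writes_per_sec

    print(read_capacity_units(6, 10))         # 20: 6 KB rounds up to 2 units per read
    print(read_capacity_units(6, 10, False))  # 10: eventually consistent, halved
    print(write_capacity_units(1.5, 10))      # 20: 1.5 KB rounds up to 2 units per write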

The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call can write up to 16 MB of data, comprising as many as 25 put or delete requests; individual items can be as large as 400 KB. When called in a loop, check the response for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed (see the sketch below).
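
A minimal sketch of that loop in Python with boto3 (hypothetical table and items; a production version would also back off between retries):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Up to 25 put/delete requests per call, 16 MB total, 400 KB per item.
    request_items = {
        "Orders": [
            {"PutRequest": {"Item": {
                "CustomerId": {"S": "cust-123"},
                "OrderDate": {"S": "2017-09-01"},
            }}},
        ]
    }
    while request_items:
        resp = dynamodb.batch_write_item(RequestItems=request_items)
        # Resubmit whatever DynamoDB could not process (e.g. throttling).
        request_items = resp.get("UnprocessedItems", {})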
The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem will return a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.
Important: if you request more than 100 items, BatchGetItem returns a ValidationException with the message "Too many items requested for the BatchGetItem call".
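
The retry loop for BatchGetItem looks much the same; UnprocessedKeys plays the role that UnprocessedItems plays for writes (hypothetical table and keys again):

    import boto3

    dynamodb = boto3.client("dynamodb")

    keys = {"Orders": {"Keys": [
        {"CustomerId": {"S": "cust-123"}, "OrderDate": {"S": "2017-09-01"}},
    ]}}
    items = []
    while keys:
        resp = dynamodb.batch_get_item(RequestItems=keys)
        items.extend(resp["Responses"].get("Orders", []))
        # Retry only the keys the partial result did not cover.
        keys = resp.get("UnprocessedKeys", {})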
Although GETs, UPDATEs, and DELETEs of items in DynamoDB consume capacity units, updating the table itself via the UpdateTable API call consumes no capacity units.

UpdateTable is an asynchronous operation; while it is executing, the table status changes from ACTIVE to UPDATING. While it is UPDATING, you cannot issue another UpdateTable request. When the table returns to the ACTIVE state, the UpdateTable operation is complete.
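
A minimal sketch: bump the provisioned throughput, then block until the table is ACTIVE again before touching it further (boto3's table_exists waiter polls until the table reaches ACTIVE):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Asynchronous: the call returns while the table is still UPDATING.
    dynamodb.update_table(
        TableName="Orders",
        ProvisionedThroughput={"ReadCapacityUnits": 10,
                               "WriteCapacityUnits": 10},
    )

    # Wait for the table to return to ACTIVE before the next UpdateTable.
    dynamodb.get_waiter("table_exists").wait(TableName="Orders")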


SQS:

Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consuming components from receiving and processing the message.
Each queue starts with a default visibility timeout of 30 seconds; the maximum is 12 hours.
Change the visibility timeout with the ChangeMessageVisibility API action (see the sketch below).
The default message retention period is 4 days; the maximum is 14 days.
Amazon SQS will deliver each message at least once, but cannot guarantee the delivery order. Because each message may be delivered more than once, your application should be idempotent by design.
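
A minimal consumer sketch in Python with boto3 (the queue URL is made up); it extends the visibility timeout while working on a message, then deletes it:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for msg in resp.get("Messages", []):
        # Buy more processing time; the value cannot exceed the 12-hour max.
        sqs.change_message_visibility(
            QueueUrl=queue_url,
            ReceiptHandle=msg["ReceiptHandle"],
            VisibilityTimeout=300,
        )
        # ... process the message idempotently (it may be delivered twice) ...
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])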

SNS:

Protocols: HTTP, HTTPS, email, email-JSON, SQS, or application; messages can be customized for each protocol (see the sketch below)
Pricing differs by recipient type
Amazon SNS messages do not publish the source and destination.
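
Per-protocol customization works by publishing a JSON message whose keys name the protocols; a minimal sketch with a hypothetical topic ARN:

    import boto3
    import json

    sns = boto3.client("sns")

    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:my-topic",
        MessageStructure="json",
        Message=json.dumps({
            "default": "fallback text for any other protocol",  # required key
            "email": "longer, friendlier text for email subscribers",
            "sqs": "compact payload for queue consumers",
        }),
    )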


SWF:

Workers: interact with SWF to get tasks, process the received tasks, and return the results
Deciders: programs that coordinate the tasks, i.e. their ordering, concurrency, and scheduling
Workers and deciders can run independently of each other
Maximum workflow execution time: 1 year

Thanks Ryan and ACloudGuru for the great course on AWS.
