Second Attempt ...

Occasionally, as I come across interesting Oracle Database related issues, I’ll post my thoughts and opinions and who knows what else and perhaps, just maybe, others may find it interesting or useful as well.

Let the fun begin …


One needs to understand how Oracle works in order to use it safely.

----------------------------------------------------------------------
Jan-2017.

This is my second blogging attempt. I originally created the Oracle blog to talk about Oracle and my learning and findings in the Oracle world.

But as time moved on, I started working on Big Data analytics, DevOps and cloud computing. So here is my second attempt at blogging, this time on cloud computing (AWS, Google Cloud), Big Data technologies and concepts, and DevOps tools.

Friday, September 1, 2017

Amazon AWS Certified !!!

Today I passed the AWS Certified Developer - Associate exam with 92%.

Wow! I have been working on AWS for the last year, mainly using a couple of AWS services such as EC2, S3 and IAM roles, while my friends were using Alexa, Lambda and CloudFormation.

I decided to learn AWS properly, started studying the other AWS services from August 1st, and appeared for the exam today, September 1st.

I learned the following topics from the Amazon site:

  • Amazon EC2
  • IAM
  • S3
  • CloudFormation
  • Elastic Beanstalk
  • VPC
Then a friend suggested I take an online course from acloudguru.com, which is more exam-oriented.

I learned the following topics from Ryan (acloudguru):
  • SNS
  • SQS
  • SWF
  • DynamoDB
  • EBS 
  • Cloudwatch
I also took the practice tests from AWS and acloudguru, and read all the FAQs thoroughly.

My certification notes are below.

Amazon EC2:

EC2 is a web service that provides resizable compute capacity in the cloud.
The EC2 SLA commits to 99.95% availability.



Amazon S3


Object storage: 0 bytes to 5 TB per object.
Universal namespace.
Unlimited storage.
The largest object that can be stored in a single PUT is 5 GB.
Multipart Upload is required for objects from 5 GB to 5 TB and recommended for objects of 100 MB and above (see the CLI sketch below).
Read-after-write consistency for PUTs of new objects.
Eventual consistency for overwrite PUTs and DELETEs.
The default limit is 100 S3 buckets per account.
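As a quick, hedged illustration of multipart upload with the AWS CLI (the bucket my-bucket, the key backup.tar, the part files and parts.json are made-up names; <UploadId> comes from the create-multipart-upload response):

# High-level copy: the CLI switches to multipart upload automatically for large files
$ aws s3 cp backup.tar s3://my-bucket/backup.tar

# Low-level equivalent: start the upload, send the parts, then complete it
# (parts.json lists the PartNumber/ETag pairs returned by upload-part)
$ aws s3api create-multipart-upload --bucket my-bucket --key backup.tar
$ aws s3api upload-part --bucket my-bucket --key backup.tar --part-number 1 --body part1.bin --upload-id <UploadId>
$ aws s3api complete-multipart-upload --bucket my-bucket --key backup.tar --upload-id <UploadId> --multipart-upload file://parts.json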

Storage classes
S3 Standard: 11 9s durability, 99.99% availability.
S3 IA: 99.9% availability.
RRS: 99.99% durability, 99.99% availability.



DynamoDB:

Eventually consistent reads
Strongly consistent reads
DynamoDB synchronously writes data to three different facilities, giving you high availability.
Primary key:
Partition key (hash key) – a single attribute.
Composite key (partition key + sort/range key) – two attributes.
Secondary indexes:
Local secondary index – same partition key + different sort key. Can only be created at table-creation time and cannot be deleted or modified.
Global secondary index – different partition key + different sort key. Can be added or deleted later.
DynamoDB Streams: capture any kind of modification to DynamoDB tables.
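To make the composite key idea concrete, here is a minimal AWS CLI sketch; the Music table with Artist as partition (hash) key and SongTitle as sort (range) key is a made-up example:

$ aws dynamodb create-table \
    --table-name Music \
    --attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=SongTitle,AttributeType=S \
    --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5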

Query vs Scan
Query finds items in a table using the primary key attributes.
Scan returns the entire table. You can use the ProjectionExpression parameter to return only selected attributes.
BatchGetItem API: reads multiple items – up to 100 items or up to 16 MB of data.
When you exceed the maximum allowed provisioned throughput for a table or one or more global secondary indexes, you get an HTTP 400 status code – ProvisionedThroughputExceededException.
Only the Tables (256 tables per region) and ProvisionedThroughput (80K read / 80K write per account for US East, 20K for other regions) limits can be increased.
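A small CLI sketch of the Query vs Scan difference above, against the hypothetical Music table from the earlier example:

# Query: targets a single partition key value
$ aws dynamodb query --table-name Music \
    --key-condition-expression "Artist = :a" \
    --expression-attribute-values '{":a":{"S":"Norah Jones"}}'

# Scan: reads the whole table; ProjectionExpression trims the attributes returned
$ aws dynamodb scan --table-name Music --projection-expression "Artist, SongTitle"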

Throughput Capacity for Reads and Writes Calculations
One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. If you need to read an item that is larger than 4 KB, DynamoDB will need to consume additional read capacity units.
One write capacity unit represents one write per second for an item up to 1 KB in size. If you need to write an item that is larger than 1 KB, DynamoDB will need to consume additional write capacity units. 
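A quick worked example with made-up numbers: to read 10 items per second of 6 KB each with strongly consistent reads, each read needs ceil(6 / 4) = 2 read capacity units, so 10 x 2 = 20 RCUs (eventually consistent reads would halve that to 10). To write 10 items per second of 1.5 KB each, each write needs ceil(1.5 / 1) = 2 write capacity units, so 20 WCUs.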

The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400 KB.
When calling BatchWriteItem in a loop, your application should check for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed.
The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem will return a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.
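For example, a hedged BatchGetItem sketch against the hypothetical Music table (the keys are made up):

$ aws dynamodb batch-get-item --request-items '{
    "Music": {
        "Keys": [
            {"Artist": {"S": "Norah Jones"}, "SongTitle": {"S": "Sunrise"}},
            {"Artist": {"S": "Norah Jones"}, "SongTitle": {"S": "Come Away With Me"}}
        ]
    }
  }'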
Important
If you request more than 100 items, BatchGetItem returns a ValidationException with the message "Too many items requested for the BatchGetItem call".
Although GETs, UPDATEs, and DELETEs of items in DynamoDB consume capacity units, updating the table via the UpdateTable API call consumes no capacity units.

UpdateTable is an asynchronous operation; while it is executing, the table status changes from ACTIVE to UPDATING. While it is UPDATING, you cannot issue another UpdateTable request. When the table returns to the ACTIVE state, the UpdateTable operation is complete.
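For instance, bumping the provisioned throughput of the hypothetical Music table looks roughly like this:

$ aws dynamodb update-table --table-name Music \
    --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=10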


SQS:

Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consuming components from receiving and processing the message.
Each queue starts with a default visibility timeout of 30 seconds; the maximum is 12 hours.
Change the visibility timeout of a received message with the ChangeMessageVisibility action.
The default message retention period is 4 days; the maximum is 14 days.
Amazon SQS will deliver each message at least once, but cannot guarantee the delivery order. Because each message may be delivered more than once, your application should be idempotent by design.
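A minimal sketch of adjusting the visibility timeout with the AWS CLI; the queue URL and the <ReceiptHandle> placeholder (returned by receive-message) are illustrative:

$ aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
$ aws sqs change-message-visibility \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
    --receipt-handle <ReceiptHandle> \
    --visibility-timeout 60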

SNS:

Protocols: HTTP, HTTPS, Email, Email-JSON, SQS or Application - messages can be customized for each protocol.
Pricing differs for different recipient types.
Amazon SNS messages do not publish the source and destination.
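A minimal publish sketch with the AWS CLI; the topic ARN is made up, and the subject only applies to email endpoints:

$ aws sns publish \
    --topic-arn arn:aws:sns:us-east-1:123456789012:my-topic \
    --subject "Test" \
    --message "Hello from SNS"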


SWF:

Workers - interact with SWF to get tasks, process the received tasks and return the results.
Deciders - programs that coordinate the tasks, i.e. ordering, concurrency and scheduling.
Workers and Deciders can run independently
Maximum task execution time: 1 year

Thanks Ryan and ACloudGuru for the great course on AWS.

Thursday, August 10, 2017

Elastic Stack Introduction

Hi All,
The Elastic Stack is becoming a very popular toolset for log monitoring and real-time analytics.

Elasticsearch is an open-source, RESTful, distributed search and analytics engine built on Apache Lucene. Since the first version of Elasticsearch was released in 2010, it has quickly become the most popular search engine, and is commonly used for log analytics, full-text search, and operational intelligence use cases. When coupled with Kibana, a visualization tool, Elasticsearch can be used to provide near-real time analytics using large volumes of log data. Elasticsearch is also popular because of its easy-to-use search APIs which allow you to easily add powerful search capabilities to your applications.
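As a small taste of that API, here is a hedged sketch assuming a local single-node Elasticsearch on port 9200; the logs index and the document fields are made up:

# Index a log document
$ curl -X POST "localhost:9200/logs/doc" -H 'Content-Type: application/json' \
    -d '{"level": "ERROR", "message": "payment service timeout"}'

# Search over what was indexed
$ curl "localhost:9200/logs/_search?q=level:ERROR"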

I have been working with the Elastic Stack for the last year and thought I should share my knowledge and understanding of it with you, so I have created a document, a PPT and a tutorial.

Please have a look and let me know if you need any help implementing the Elastic Stack in your organisation.






Elastic Stack Introduction from Vikram Shinde






Feel free to contact me @vikshinde

Thanks

Friday, August 4, 2017

Kubernetes Autoscaling

Please find below the document on Kubernetes Autoscaling.
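The document covers the Horizontal Pod Autoscaler; as a minimal, hedged sketch (assuming a Deployment named web already exists and resource metrics are available in the cluster):

$ kubectl autoscale deployment web --cpu-percent=80 --min=2 --max=10
$ kubectl get hpa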



The tutorial can be found at the link below.




Tuesday, August 1, 2017

Ansible Introduction


Ansible is a powerful IT automation tool that can help you with configuration management, application deployment and task automation. It can also do IT orchestration, where you have to run tasks in sequence and create a chain of events that must happen on several different servers or devices.

Ansible Installation on CentOS

$ sudo yum install ansible
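Once installed, a couple of hedged ad-hoc examples; the webservers group is made up and the commands assume your hosts are already listed in the default inventory at /etc/ansible/hosts:

# Check that every host in the inventory is reachable
$ ansible all -m ping

# Install a package on the webservers group (needs sudo on the targets)
$ ansible webservers -m yum -a "name=httpd state=present" --become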

Ansible Quick Demo



Ansible in detail at the link below:

https://www.ansible.com/webinars-training/introduction-to-ansible


Monday, July 24, 2017

Docker

Introduction

Docker is a popular containerization tool used to provide software applications with a filesystem that contains everything they need to run. Using Docker containers ensures that the software will behave the same way, regardless of where it is deployed, because its run-time environment is ruthlessly consistent.
We can think of a Docker image as an inert template used to create Docker containers. Images come to life with the docker run command, which creates a container by adding a read-write layer on top of the image. 

Why Docker?

Docker allows applications to be isolated into containers with instructions for exactly what they need to survive, and these containers can be easily ported from machine to machine.
Applications in containers run isolated from one another in the user space of the host operating system, sharing the kernel with other containers. This reduces the overhead required to run packaged software while also enabling the containers to run on any kind of infrastructure. To allow applications within different containers to work with one another, Docker supports container linking.

Why Docker is famous?

The ever-present challenge of replicating your production setup in development suddenly becomes close to reality because of Docker.
This is why Docker is a huge help in enabling continuous integration, delivery, and deployment pipelines. Here’s what that looks like in action:
  • Your development team is able to capture the complex requirements of a microservice within an easy-to-write Dockerfile.
  • Push the code up to your git repo.
  • Let the CI server pull it down and build the EXACT environment that will be used in production to run the test suite, without needing to configure the CI server at all (see the build-and-run sketch after this list).
  • Tear down the entire thing when it’s finished.
  • Deploy it out to a staging environment for testers, or just notify the testers so that they can run a single command to configure and start the environment locally.
  • Confidently roll exactly what you had in development, testing, and staging into production without any concerns about machine configuration.
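A minimal build-and-run sketch of that loop; the image name myapp, the container name and port 8080 are assumptions, and it presumes the repo contains a Dockerfile:

$ docker build -t myapp:1.0 .
$ docker run -d -p 8080:8080 --name myapp-test myapp:1.0
$ docker logs myapp-test
$ docker rm -f myapp-test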

Tutorial:
I have created a video tutorial covering how to install Docker and some basic commands.




Pre-requisites

Virtual Machine (Ubuntu or Centos)

Create Non-sudo user

Log in to the server using root privileges.
Once you are logged in as root, we're prepared to add the new user account that we will use to log in from now on.
adduser demo
passwd demo

As root, run this command to add your new user to the wheel group (substitute demo with your new user):
gpasswd -a demo wheel

Installing Docker

Log in as the non-sudo user:
wget -qO- https://get.docker.com/ | sh

$ sudo usermod -aG docker $(whoami)
 
Log out and log back in to your server to activate your new groups.

Set Docker to start automatically at boot time:
$ sudo systemctl enable docker.service

Finally, start the Docker service:
$ sudo systemctl start docker.service

Installing Docker Compose

Now that you have Docker installed, let's go ahead and install Docker Compose. First, install python-pip as prerequisite:
$ sudo yum install -y epel-release
$ sudo yum install -y python-pip
Then you can install Docker Compose:
$ sudo pip install docker-compose
You will also need to upgrade your Python packages on CentOS 7 to get docker-compose to run successfully:
$ sudo yum upgrade python*




Docker Commands


Docker Basics

Docker containers are run from Docker images. By default, Docker pulls images from Docker Hub, a Docker registry.
You can create a Docker Hub account to host your own Docker images: https://hub.docker.com

1.      Docker Hello-World

Let’s run some Docker commands.
$ docker run hello-world
The output shows a "Hello from Docker" greeting.
When you execute the docker run command, it first checks whether the image is available locally. If it doesn’t find the image, the Docker client pulls it from Docker Hub: https://hub.docker.com/_/hello-world/
It creates a new container, prints the message and then exits.
To see running Docker containers, execute the following command:
$ docker ps

Since the container has already exited, it will not show up as running.
If we add the -a flag, it will show all containers, stopped or running.
$ docker ps -a
If you run the same command again:
$ docker run hello-world

It will not pull the image from Docker Hub because it finds it locally, but it does create an entirely new container.

2.      Docker example

Execute the following command to create a container using the Ubuntu base image. The -i flag is for interactive mode and the -t flag gives you a terminal.
$ docker run -it ubuntu

The command-line prompt changes to indicate that we are inside the container.
Create a file example1.txt as below:
$ echo "Hello World" > /tmp/example1.txt
$ cat /tmp/example1.txt
$ exit
Now create another container:
$ docker run -it ubuntu

Execute the following command to see if the example1.txt file exists:
$ cat /tmp/example1.txt
$ exit
Log back in to the first container:
$ docker ps -a
$ docker start -ai <container-id>
$ cat /tmp/example1.txt
$ exit

3.      Delete Containers and Images

To delete a container:
$ docker rm <container-id>

To delete a container image from the local machine:
$ docker rmi <image-id>


4.      Pushing Docker image to hub.

First, log in to Docker Hub:
$ docker login
Now run docker tag to tag the image with your username, repository and tag names so that the image will upload to your desired destination. The syntax of the command is:
$ docker tag image username/repository:tag

Publish the image:
$ docker push username/repository:tag






Docker Compose Commands

The public Docker registry, Docker Hub, includes a simple Hello World image. Now that we have Docker Compose installed, let's test it with this really simple example.
First, create a directory for our YAML file:
$ mkdir hello-world

Then change into the directory:
$ cd hello-world

Now create the YAML file using your favorite text editor:
$ vi docker-compose.yml
Put the following contents into the file, save the file, and exit the text editor:
docker-compose.yml
my-test:
  image: hello-world
The first line will be used as part of the container name. The second line specifies which image to use to create the container. The image will be downloaded from the official Docker Hub repository.
While still in the ~/hello-world directory, execute the following command to create the container:
$ docker-compose up -d
The output should start with the following:
Output of docker-compose up
Creating helloworld_my-test_1...
Attaching to helloworld_my-test_1
my-test_1 |
my-test_1 | Hello from Docker.
my-test_1 | This message shows that your installation appears to be working correctly.
my-test_1 |

Let's go over the commands the docker-compose tool supports.
The docker-compose command works on a per-directory basis. You can have multiple groups of Docker containers running on one machine — just make one directory for each container and one docker-compose.yml file for each container inside its directory.
So far we've been running docker-compose up on our own and using CTRL-C to shut it down. This allows debug messages to be displayed in the terminal window. This isn't ideal, though; when running in production you'll want docker-compose to act more like a service. One simple way to do this is to add the -d option when you bring up your session:
$ docker-compose up -d
docker-compose will now fork to the background.
To show your group of Docker containers (both stopped and currently running), use the following command:
$ docker-compose ps
For example, the following shows that the helloworld_my-test_1 container is stopped:
Output of `docker-compose ps`
        Name           Command   State    Ports 
-----------------------------------------------
helloworld_my-test_1   /hello    Exit 0         
A running container will show the Up state:
Output of `docker-compose ps`
     Name              Command          State        Ports      
---------------------------------------------------------------
nginx_nginx_1   nginx -g daemon off;   Up      443/tcp, 80/tcp 
To stop all running Docker containers for an application group, issue the following command in the same directory as the docker-compose.yml file used to start the Docker group:
$ docker-compose stop
Note: docker-compose kill is also available if you need to shut things down more forcefully.

In some cases, Docker containers will store their old information in an internal volume. If you want to start from scratch you can use the rm command to fully delete all the containers that make up your container group:
$ docker-compose rm
If you try any of these commands from a directory other than the directory that contains a Docker container and .yml file, it will complain and not show you your containers:
Output from wrong directory
        Can't find a suitable configuration file in this directory or any parent. Are you in the right directory?
 
        Supported filenames: docker-compose.yml, docker-compose.yaml, fig.yml, fig.yaml

Reference


Friday, July 14, 2017

Docker Container Monitoring

Introduction

Docker: Docker is an open-source project that automates the deployment of applications inside software containers. 

Container: Using containers, everything required to make a piece of software run is packaged into isolated containers. Unlike VMs, containers do not bundle a full operating system - only libraries and settings required to make the software work are needed. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it’s deployed.

Docker-Compose: Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration.

cAdvisor (Container Advisor): it provides container users with an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers.


Prometheus: Prometheus is an open-source monitoring system with a dimensional data model, flexible query language, efficient time series database and modern alerting approach.  It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true. Prometheus collects metrics from monitored targets by scraping metrics from HTTP endpoints on these targets.


Prometheus Monitoring stack : 
Prometheus-core : Time series database to store metrics
Node-Exporter : Exporter of node metrics
PromDash : Prometheus dashboard

Docker containers for prometheus stack: https://hub.docker.com/u/prom/

Grafana: Grafana is the ‘face’ of Prometheus. While Prometheus exposes some of its internals like settings and the stats it gathers via basic web front-ends, it delegates the heavy lifting of proper graphical displays and dashboards to Grafana.

Alertmanager: Alertmanager manages the routing of alerts which Prometheus raises to various different channels like email, pagers, slack - and so on. So while Prometheus collects stats and raises alerts it is completely agnostic of where these alerts should be displayed. This is where the alertmanager picks up.

Requirements:

In order to follow along, you will need only two things: Docker and Docker Compose.

Follow the official installation instructions to install docker and docker-compose.

Getting Started:

Launching Prometheus:
We will use a docker-compose.yml file to install Prometheus,


# docker-compose.yml
version: '2'
services:
    prometheus:
        image: prom/prometheus:0.18.0
        volumes:
            - ./prometheus.yml:/etc/prometheus/prometheus.yml
        command:
            - '-config.file=/etc/prometheus/prometheus.yml'
        ports:
            - '9090:9090'
and a prometheus configuration file prometheus.yml:
# prometheus.yml
global:
    scrape_interval: 5s
    external_labels:
        monitor: 'my-monitor'
scrape_configs:
    - job_name: 'prometheus'
      target_groups:
          - targets: ['localhost:9090']
As you can see, inside docker-compose.yml we map the prometheus config file into the container as a volume and add a -config.file parameter to the command pointing to this file.
To launch prometheus, run the command
docker-compose up
Visit http://localhost:9090/status to confirm the server is running and the configuration is the one we provided.


Targets

Further down, below the ‘Configuration’ section on the status page, you will find a section ‘Targets’ which lists a ‘prometheus’ endpoint. This corresponds to the scrape_configs entry with the same job_name and is a source of metrics provided by Prometheus itself. In other words, the Prometheus server comes with its own metrics endpoint - or exporter, as we called it above - which reports stats for the Prometheus server itself.
The raw metrics can be inspected by visiting http://localhost:9090/metrics.
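For example (assuming the compose stack above is running on the local machine):

$ curl -s http://localhost:9090/metrics | head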

Adding a node-exporter target

While it’s certainly a good idea to monitor the monitoring service itself, this is just an additional aspect of the set-up. The main point is to monitor other things by adding targets to the scrape_configs section in prometheus.yml. As described above, these targets need to export metrics in the Prometheus format.
One such exporter is node-exporter, another piece of the puzzle provided as part of Prometheus. What it does is collect system metrics like cpu/memory/storage usage and then it exports it for Prometheus to scrape. The beauty of this is that it can be run as a docker container while also reporting stats for the host system. It is therefore very easy to instrument any system that can run docker containers.
We will add a configuration setting to our existing docker-compose.yml to bring up node-exporter alongside prometheus. However, this is mainly for convenience in this example; in a normal setup, where one Prometheus instance monitors many other machines, these exporters would likely be launched by other means.
Here’s what our new docker-compose.yml looks like:
# docker-compose.yml
version: '2'
services:
    prometheus:
        image: prom/prometheus:0.18.0
        volumes:
            - ./prometheus.yml:/etc/prometheus/prometheus.yml
        command:
            - '-config.file=/etc/prometheus/prometheus.yml'
        ports:
            - '9090:9090'
    node-exporter:
        image: prom/node-exporter:0.12.0rc1
        ports:
            - '9100:9100'
We simply added a node-exporter section. Configuring it as a target only requires a small extension to prometheus.yml :
# prometheus.yml
global:
    scrape_interval: 5s
    external_labels:
        monitor: 'my-monitor'
scrape_configs:
    - job_name: 'prometheus'
      target_groups:
          - targets: ['localhost:9090']
    - job_name: 'node-exporter'
      target_groups:
          - targets: ['node-exporter:9100']

Grafana:

Add a grafana service under the services section of docker-compose.yml:

    grafana:
        image: grafana/grafana:3.0.0-beta7
        environment:
            - GF_SECURITY_ADMIN_PASSWORD=pass
        depends_on:
            - prometheus
        ports:
            - "3000:3000"

The complete final version of all the config files can be found in this repository: https://github.com/vikramshinde12/dockprom.
After restarting the service with
docker-compose up
you can access Grafana at http://localhost:3000/login

Complete Monitoring Stack installation

Components included

  •  cAdvisor
  • NodeExporter
  • Prometheus
  • AlertManager
  • Grafana
  • Slack




  

Install

Detailed video for the docker monitoring stack installation


Clone dockprom repository on your Docker host, cd into dockprom directory and run compose up:
  • $ git clone https://github.com/stefanprodan/dockprom
  • $ cd dockprom
  • $ docker-compose up -d
Containers:
  • Prometheus (metrics database) http://<host-ip>:9090
  • AlertManager (alerts management) http://<host-ip>:9093
  • Grafana (visualize metrics) http://<host-ip>:3000
  • NodeExporter (host metrics collector)
  • cAdvisor (containers metrics collector)
While Grafana supports authentication, the Prometheus and AlertManager services have no such feature. You can remove the ports mapping from the docker-compose file and use NGINX as a reverse proxy providing basic authentication for Prometheus and AlertManager.

Setup Grafana

Navigate to http://<host-ip>:3000 and log in with user admin, password changeme. You can change the password from the Grafana UI or by modifying the user.config file.
From the Grafana menu, choose Data Sources and click on Add Data Source. Use the following values to add the Prometheus container as a data source (or script this step through the Grafana HTTP API, as sketched after this list):
  • Name: Prometheus
  • Type: Prometheus
  • Url: http://prometheus:9090
  • Access: proxy
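If you prefer to script that step, Grafana also has an HTTP API for data sources; a hedged sketch using the admin/changeme credentials mentioned above (replace <host-ip> with your Docker host):

$ curl -X POST http://admin:changeme@<host-ip>:3000/api/datasources \
    -H "Content-Type: application/json" \
    -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}'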
Now you can import the dashboard templates from the grafana directory. From the Grafana menu, choose Dashboards and click on Import.
The following dashboards can be imported:
Docker Host Dashboard

The Docker Host Dashboard shows key metrics for monitoring the resource usage of your server:
  • Server uptime, CPU idle percent, number of CPU cores, available memory, swap and storage
  • System load average graph, running and blocked by IO processes graph, interrupts graph
  • CPU usage graph by mode (guest, idle, iowait, irq, nice, softirq, steal, system, user)
  • Memory usage graph by distribution (used, free, buffers, cached)
  • IO usage graph (read Bps, write Bps and IO time)
  • Network usage graph by device (inbound Bps, Outbound Bps)
  • Swap usage and activity graphs

Docker Containers Dashboard


The Docker Containers Dashboard shows key metrics for monitoring running containers:
  • Total containers CPU load, memory and storage usage
  • Running containers graph, system load graph, IO usage graph
  • Container CPU usage graph
  • Container memory usage graph
  • Container cached memory usage graph
  • Container network inbound usage graph
  • Container network outbound usage graph
Note that this dashboard doesn’t show the containers that are part of the monitoring stack.

Slack Configuration

Setup alerting

The AlertManager service is responsible for handling alerts sent by Prometheus server. AlertManager can send notifications via email, Pushover, Slack, HipChat or any other system that exposes a webhook interface. A complete list of integrations can be found here.
You can view and silence notifications by accessing http://<host-ip>:9093.
The notification receivers can be configured in alertmanager/config.yml file.
To receive alerts via Slack you need to create a custom integration by choosing Incoming WebHooks on your Slack team's app page. You can find more details on setting up the Slack integration here.
Copy the Slack Webhook URL into the api_url field and specify a Slack channel.
route:
    receiver: 'slack'

receivers:
    - name: 'slack'
      slack_configs:
          - send_resolved: true
            text: "{{ .CommonAnnotations.description }}"
            username: 'Prometheus'
            channel: '#'
            api_url: 'https://hooks.slack.com/services/'


   Grafana Alert Configuration:

On the Notification Channels page, hit the New Channel button to go to the page where you can configure and set up a new notification channel.

You specify a name, a type, and type-specific options. You can also test the notification to make sure it’s working and set up correctly.



