Saturday, October 21, 2017

JavaScript

Functions - reusable blocks of code; they may or may not return a value.

function declaration

function function_name(){
 // statements
}

var c = function_name(); // undefined: the function returns no value
console.log(c);

function expression

var function_name = function(){
 //statements
};

var c = function_name();
console.log(c);
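The practical difference between the two forms is hoisting: a function declaration can be called before it appears in the source, while a function expression cannot. A minimal sketch (function names here are illustrative):

```javascript
// Function declarations are hoisted: callable before they appear.
console.log(square(3)); // 9

function square(n) {
  return n * n;
}

// Function expressions are not: the variable is hoisted but holds
// undefined until the assignment actually runs.
try {
  cube(3); // TypeError: cube is not a function
} catch (e) {
  console.log(e instanceof TypeError); // true
}

var cube = function (n) {
  return n * n * n;
};

console.log(cube(3)); // 27
```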

Methods - when a function is stored as an object's property, it is called a method.

var calc = {
  add : function(a, b) {
    return a + b;
  },
  sub : function(a, b) {
    return a - b;
  }
};

calc => object
add, sub => methods

calc.add(1,2); // 3
calc.sub(3,2); // 1
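A method can also use `this` to reach the object's other properties. A small sketch extending the calc example (the `memory`/`accumulate` members are illustrative additions, not part of the original note):

```javascript
var calc = {
  memory: 0,
  add: function (a, b) {
    return a + b;
  },
  // A method can read and update other properties via `this`.
  accumulate: function (n) {
    this.memory += n;
    return this.memory;
  }
};

console.log(calc.add(1, 2));     // 3
console.log(calc.accumulate(5)); // 5
console.log(calc.accumulate(2)); // 7
```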

Constructors - when you call a function with the new keyword, it acts as a constructor and creates a new instance.

function Fruit(){
  var name; // private: reachable only through the closures below
  this.getName = function(){
      return name;
  };
  this.setName = function(value){
      name = value;
  };
}


var apple = new Fruit();
apple.setName("Malus");
console.log(apple.getName()); // "Malus"
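Each `new` call creates a fresh closure, so every instance keeps its own private `name`. A self-contained sketch (the `Fruit` constructor is restated so the example runs on its own):

```javascript
function Fruit() {
  var name; // closure variable, private to this instance
  this.getName = function () { return name; };
  this.setName = function (value) { name = value; };
}

var apple = new Fruit();
var pear = new Fruit();
apple.setName("Malus");
pear.setName("Pyrus");

console.log(apple.getName()); // "Malus"
console.log(pear.getName());  // "Pyrus": separate private state
console.log(apple.name);      // undefined: `name` is not a public property
```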

function as a parameter

function f1(val){
    return val.toUpperCase();
}

function f2(val,passFunc){
    console.log(passFunc(val));
}

f2("small",f1);
// string, reference
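The same idea works in the other direction: a function can also be returned from a function (a higher-order function). A small sketch (names are illustrative):

```javascript
// makeGreeter returns a new function that closes over `greeting`.
function makeGreeter(greeting) {
  return function (name) {
    return greeting + ", " + name;
  };
}

var hello = makeGreeter("Hello");
console.log(hello("Megha")); // "Hello, Megha"
```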


Scope
global/public
local/private

global variable
- place a var statement outside any function
- omit the var keyword (assigning to an undeclared name creates a global)
local variable
- variables declared within a function

this parameter
refers to an object that's implicitly associated with function invocation

invocation as a function
- 'this' is bound to the global object (window in a browser)
invocation as a method
- 'this' is bound to object
invocation as a constructor
- 'this' is bound to the newly created object

Create JavaScript Object
object literal => similar to JSON format
var author = {
// variables
 firstName : "Megha",
 lastName : "Dureja",
 book : {
            title : "JS",
            pages : "172"
 },
// method
 meetingRoom : function(roomId){
    console.log("BookedRoom");
 }
};

console.log(author.lastName);
console.log(author.book.title);

object constructor
var author = new Object();

function constructor

prototype

function/prototype combination

singleton
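The function-constructor/prototype combination and the singleton pattern from the list above can be sketched briefly (names are illustrative):

```javascript
// Function constructor: per-instance data assigned in the constructor.
function Author(firstName, lastName) {
  this.firstName = firstName;
  this.lastName = lastName;
}

// Prototype: methods defined once and shared by all instances.
Author.prototype.fullName = function () {
  return this.firstName + " " + this.lastName;
};

var a = new Author("Megha", "Dureja");
console.log(a.fullName()); // "Megha Dureja"

// Singleton: applying new to an anonymous function yields exactly one instance.
var config = new function () {
  this.appName = "JS Notes";
}();
console.log(config.appName); // "JS Notes"
```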


Variable / Object

variable - container for data value
var car = "Polo";

Object - container for many values, plus methods that perform actions on them
 <script>
var car = {
 type : "Polo",
 model : "500",
 color : "white",
 fullName : function(){
 return this.type + " " + this.model + " " + this.color;
 }
};
</script>


Hadoop Ecosystem

Hadoop Ecosystem = HDFS + MapReduce + Tools(Hive,Pig,HBase,Zookeeper,Flume,Sqoop,Oozie,Mahout)

Three modes
Local standalone mode - everything runs in a single JVM
pseudo-distributed mode - each daemon runs in its own JVM on one machine
fully distributed mode - daemons run in JVMs spread across a cluster

Hadoop Components
Namenode
Datanode
Secondary Namenode
JobTracker
TaskTracker

Configuration files
core-site.xml => location of namenode
hdfs-site.xml => replication factor
mapred-site.xml => location of jobtracker

hadoop dfs -ls
hadoop dfs -copyFromLocal
hadoop dfs -put
hadoop dfs -cat
hadoop dfs -get

Wednesday, September 27, 2017

Web Server vs Web Container vs Application Server

The Apache Software Foundation produces two types of web servers:
Apache HTTP Server, used for static content, which can also be equipped with modules to serve dynamic content (e.g. PHP, Ruby), and
Apache Tomcat, a web container (often loosely called an application server) used for serving dynamic content written in Java.

Once you've confirmed the Apache version, you should check the web server's modules. Modules serve as add-ons to support extra features such as CGI, Secure Socket Layer (SSL), virtual hosting, and the processing of web applications written in just about any programming language. Additionally, certain modules can help increase performance.

Sunday, September 24, 2017

Cordova

- lets you create cross-platform applications using web technologies.

How it works
-  provides a base application based on a WebView where your newly developed web application can live.
- For advanced functions more closely related to concrete devices like GPS, battery, notifications, etc., Cordova allows the installation of plugins which provide a bridge between JavaScript and the native device based on an API.

$ npm install -g cordova

Creating a Cordova project

 cordova/
├── config.xml
├── hooks
├── platforms
├── plugins
└── www

    - config.xml is the main project configuration file. Here we will define many aspects and configurations for our application and can even configure concrete details for the different platforms.
    - hooks allows you to run scripts during the different Cordova build phases. This directory is now deprecated in favour of hooks configured in config.xml, so we can safely remove it.
    - platforms will contain the target platforms for our project. At the moment it will contain only the browser platform, but it will grow with more targets. Modifying this directory by hand is discouraged.
    - plugins will contain the installed plugins. As explained before, plugins help us create richer applications by letting us access native device functionality or by abstracting common functionality.
    - www will contain the web application.


WebView
- allows you to easily add a web browser to your application.
- is a view that displays web pages inside your application.
- it effectively turns your application into a web application.

In addition to pages hosted on the web, you can also use it to display local content (including various document formats), and you can even interact with JavaScript in pages it has loaded.


Android WebView
iOS WebView


Cordova Plugin
A plugin is a bit of add-on code that provides access to device and platform functionality that is ordinarily unavailable to web-based apps.

Battery Status
    Monitor the status of the device's battery.

Camera
    Capture a photo using the device's camera.
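As a sketch of how such a plugin surfaces in JavaScript: cordova-plugin-battery-status fires a `batterystatus` event on `window`, with `level` and `isPlugged` fields. The shim below stands in for the WebView so the example runs anywhere (the shim and the simulated event are assumptions, not Cordova API):

```javascript
// Minimal event-target shim so this sketch runs outside a device
// (assumption: plain Node, not a real Cordova WebView).
var window = {
  listeners: {},
  addEventListener: function (type, cb) { this.listeners[type] = cb; }
};

var lastLevel = null;

// The real plugin fires "batterystatus" with level (percent) and isPlugged.
window.addEventListener("batterystatus", function (status) {
  lastLevel = status.level;
  console.log("Battery level: " + status.level + "%");
});

// Simulate the event the plugin would dispatch on a real device.
window.listeners["batterystatus"]({ level: 80, isPlugged: true });
```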




Wednesday, September 20, 2017

Angular2

- Angular2 is an open source JavaScript framework that is sponsored and maintained by Google.
- Angular2 applications are built around a design pattern called MVC.
- Angular2 applications are written in TypeScript, which is a superset of JavaScript.

Angular2 Development Tools

1. install nodejs
$node -v
v7.7.2

2. NPM is installed with Node.js, but we can also update npm:
$ npm install -g npm@3.10.9

3. install angular-cli
$ npm install -g angular-cli

4. creating and preparing project
create folder <project_name>
$ ng new <project_name> (if the project does not exist yet; otherwise run ng init inside the existing folder)
project structure
- e2e
- node_modules
- src
    - app
    - assets
    - environments
    - index.html
   - main.ts
   - tsconfig.json // typescript compiler configuration file
- package.json // list of software packages required

5. build the contents
$ ng build

6. generate components
$ ng generate component <component_name>

7. start the server
$ npm start or $ ng serve => typescript compiler && HTTP Server(lite-server)

JavaScript opens a connection back to the HTTP server and waits for a signal to reload the page, which is sent when the server detects a change in any of the files in the directory.

Angular applications are typically written in TypeScript.
TypeScript is a superset of JavaScript; one of its main advantages is that it lets you write code using the latest JavaScript language specification, with features that are not yet supported in all of the browsers that can run Angular applications.

The TypeScript compiler generates browser-friendly JavaScript files automatically when a change to a TypeScript file is detected. (TypeScript files have the .ts extension.)

Creating a Data Model
To create a data model for the application, a file called model.ts is added to the /app folder.

model.ts
var model = {
    user: "Adam",
    items: [{ action: "Buy Flowers", done: false },
    { action: "Get Shoes", done: false },
    { action: "Collect Tickets", done: true },
    { action: "Call Joe", done: false }]
};


model.js (the compiled output – identical here, since the model uses no TypeScript-specific features)
var model = {
    user: "Adam",
    items: [{ action: "Buy Flowers", done: false },
        { action: "Get Shoes", done: false },
        { action: "Collect Tickets", done: true },
        { action: "Call Joe", done: false }]
};


Using ES6 Features in the model.ts File

export class Model {
    user;
    items;
    constructor() {
        this.user = "Adam";
        this.items = [new TodoItem("Buy Flowers", false),
                      new TodoItem("Get Shoes", false),
                      new TodoItem("Collect Tickets", false),
                      new TodoItem("Call Joe", false)];
    }
}
export class TodoItem {
    action;
    done;
    constructor(action, done) {
        this.action = action;
        this.done = done;
    }
}


The class keyword is used to define types that can be instantiated with the new keyword to create objects that have well-defined data and behavior.

The export keyword relates to JavaScript modules. When using modules, each TypeScript or JavaScript file is considered to be a self-contained unit of functionality, and the export keyword is used to identify data or types that you want to use elsewhere in the application. JavaScript modules are used to manage the dependencies that arise between files in a project and avoid having to manually manage a complex set of script elements in the HTML file.

Creating a Template
A template provides a way to display the data values in the model to the user. In Angular, a template is a fragment of HTML that contains instructions that are performed by Angular.

Including a data value in a template is done using double braces—{{ and }}—and Angular evaluates whatever you put between the double braces to get the value to display.

The {{ and }} characters are an example of a data binding, which means that they create a relationship between the template and a data value.

<h3 class="bg-primary p-a-1">{{getName()}}'s To Do List</h3>

the data binding tells Angular to invoke a function called getName() and use the result as the contents of the h3 element.

Creating a Component
An Angular component is responsible for managing a template and providing it with the data and logic it needs.

At the moment, I have a data model that contains a user property with the name to display, and I have a template that displays the name by invoking a getName() function. What I need is a component to act as the bridge between them.

data model <- component -> template

import { Component } from "@angular/core";
import { Model } from "./model";
@Component({
    selector: "todo-app",
    templateUrl: "app/app.component.html"
})
export class AppComponent {
    model = new Model(); //property
    getName() { //function
        return this.model.user;
    }
}



The import keyword is the counterpart to the export keyword.

The first import statement is used in the listing to load the @angular/core module, which contains the key Angular functionality, including support for components. When working with modules, the import statement specifies the types that are imported between curly braces. In this case, the import statement is used to load the Component type from the module. The @angular/core module contains many classes that have been packaged together so that the browser can load them all in a single JavaScript file.
The second import statement is used to load the Model class from a file in the project. The target for this kind of import starts with ./, which indicates that the module is defined relative to the current file. Notice that neither import statement includes a file extension. This is because the relationship between the target of an import statement and the file that is loaded by the browser is managed by a module loader.

@Component decorator
which provides metadata about a class. As its name suggests, it tells Angular that this is a component. The decorator provides configuration information through its properties, which in the case of @Component includes properties called selector and templateUrl. 
selector property specifies a CSS selector that matches the HTML element to which the component will be applied: in this case, the todo-app element, which is added to the index.html file. When an Angular application starts, Angular scans the HTML in the current document and looks for elements that correspond to components. It will find the todo-app element and know that it should be placed under the control of this component.
templateUrl property is used to tell Angular how to find the component’s template, which is the app.component.html file in the app folder for this component.

When a new instance of the AppComponent class is created, the model property will be set to a new instance of the Model class. The getName function returns the value of the user property defined by the Model object.

Putting the Application Together
There are two types of module used in Angular development.
A JavaScript module is a file that contains JavaScript functionality that is used through the import keyword.
The other type of module is an Angular module, which is used to describe an application or a group of related features. Every application has a root module,
which provides Angular with the information that it needs to start the application; app.module.ts is the conventional file name for the root module.

import { NgModule } from "@angular/core";
import { BrowserModule } from "@angular/platform-browser";
import { FormsModule } from "@angular/forms";
import { AppComponent } from "./app.component";

@NgModule({
    imports: [BrowserModule, FormsModule],
    declarations: [AppComponent],
    bootstrap: [AppComponent]
})
export class AppModule { }


The purpose of the Angular module is to provide configuration information through the properties defined by the @NgModule decorator.


Creating a Two-Way Data Binding
At the moment, the template contains only one-way data bindings, which means they are used to display a data value but do nothing to change it. Angular also supports two-way data bindings, which can be used to display a data value and update it, too. Two-way bindings are used with HTML form elements.

The ngModel directive creates a two-way binding between a data value and a form element.

Filtering Items
Adding Items
Event Binding

Saturday, September 16, 2017

Spring MVC vs Web Flow

Spring MVC is an implementation of MVC design pattern.
Spring Webflow is an implementation of a "web flow" state machine.

Spring Web Flow sits on top of Spring MVC and allows you to define complex navigational flows; i.e., you cannot use Spring Web Flow without Spring MVC.

In the MVC paradigm a request travels through a Controller, gets updated with something from the Model, and ends up at the designated View.

In a web flow paradigm a request may go through various paths and views depending on the rules. With this you will have fine control on entire flow.

Simple rule of thumb:-
If you have lots of independent single pages, which don't do much and don't interact, use plain old MVC.
If you have a set of pages that represent a workflow, use webflow to model the workflow.

Sunday, September 3, 2017

Spring Boot Actuator

Actuator is a very helpful library that allows you to monitor the application.
It is mainly used to expose different types of information about the running application – health, metrics, info, dump, env, etc.

To start using the existing actuators in Boot – we’ll just need to add the spring-boot-actuator dependency to the pom:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>


Endpoints - Boot comes with many built-in endpoints and, like with pretty much anything in Spring, you can also roll your own.
Most endpoints are sensitive – meaning they're not fully public – while a handful are not: /health and /info.

Here are some of the most common endpoints Boot provides out of the box:

/health – Shows application health information (a simple ‘status’ when accessed over an unauthenticated connection or full message details when authenticated). It is not sensitive by default.
/info – Displays arbitrary application info. Not sensitive by default.
/metrics – Shows ‘metrics’ information for the current application. It is also sensitive by default.
/env – Exposes properties from Spring’s Configurable Environment.
/beans – Displays a complete list of all the Spring beans in your application.
/configprops – Displays a collated list of all @ConfigurationProperties.
/dump – Performs a thread dump.
/logfile – Returns the contents of the logfile (if logging.file or logging.path properties have been set). Only available via MVC. Supports the use of the HTTP Range header to retrieve part of the log file’s content.
/mappings – Displays a collated list of all @RequestMapping paths.
/trace – Displays trace information (by default the last few HTTP requests).

Customizing Existing Endpoints - 
You can customize actuator to display or restrict information that should/shouldn’t be available.


Each endpoint can be customized with properties using the following format: endpoints.[endpoint name].[property to customize]
Three properties are available:

    id – the path by which this endpoint will be accessed over HTTP
    enabled – if true the endpoint can be accessed, otherwise not
    sensitive – if true, authorization is needed to show crucial information over HTTP

For example, adding the following properties will customize the /beans endpoint:

endpoints.beans.id=springbeans
endpoints.beans.sensitive=false
endpoints.beans.enabled=true

/health Endpoint

The /health endpoint is used to check the health/status of the running application. It’s usually used by basic monitoring software to alert you if the production goes down.

By default only health information is shown to unauthorized access over HTTP:

{
    "status" : "UP"
}


This health information is collected from all the beans implementing the HealthIndicator interface that are configured in your application context.

Some information returned by HealthIndicator is sensitive in nature – but you can configure endpoints.health.sensitive=false to expose the other information like disk space, data source etc.

A Custom HealthIndicator


Spring Cloud

Spring Cloud is a framework for building robust cloud applications. The framework facilitates the development of applications by providing solutions to many of the common problems faced when moving to a distributed environment.


Saturday, September 2, 2017

Spring Hibernate JPA Integration

In the hibernate framework, we provide all the database information in the hibernate.cfg.xml file.

But if we are going to integrate the hibernate application with spring, we don't need to create the hibernate.cfg.xml file. We can provide all the information in the applicationContext.xml file.

Advantage of Spring framework with hibernate

The Spring framework provides the HibernateTemplate class, so you don't need to follow so many steps like creating a Configuration, building a SessionFactory, opening a Session, and beginning and committing a transaction.

So it saves a lot of code.

In this file, we provide all the information about the database in the BasicDataSource object. This object is used by the LocalSessionFactoryBean object, which also contains other information such as mappingResources and hibernateProperties. The LocalSessionFactoryBean object is in turn used by the HibernateTemplate class.
Let's see the code of the applicationContext.xml file.

    <?xml version="1.0" encoding="UTF-8"?> 
    <beans 
        xmlns="http://www.springframework.org/schema/beans" 
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
        xmlns:p="http://www.springframework.org/schema/p" 
        xsi:schemaLocation="http://www.springframework.org/schema/beans 
            http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> 
     
        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"> 
            <property name="driverClassName"  value="oracle.jdbc.driver.OracleDriver"></property> 
            <property name="url" value="jdbc:oracle:thin:@localhost:1521:xe"></property> 
            <property name="username" value="system"></property> 
            <property name="password" value="oracle"></property> 
        </bean> 
         
        <bean id="mySessionFactory"  class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> 
            <property name="dataSource" ref="dataSource"></property> 
             
            <property name="mappingResources"> 
            <list> 
            <value>employee.hbm.xml</value> 
            </list> 
            </property> 
             
            <property name="hibernateProperties"> 
                <props> 
                    <prop key="hibernate.dialect">org.hibernate.dialect.Oracle9Dialect</prop> 
                    <prop key="hibernate.hbm2ddl.auto">update</prop> 
                    <prop key="hibernate.show_sql">true</prop> 
                     
                </props> 
            </property> 
        </bean> 
         
        <bean id="template" class="org.springframework.orm.hibernate3.HibernateTemplate"> 
        <property name="sessionFactory" ref="mySessionFactory"></property> 
        </bean> 
         
        <bean id="employeeDao" class="com.javatpoint.EmployeeDao"> 
        <property name="template" ref="template"></property> 
        </bean> 
        
        </beans>   


Spring with JPA, using Hibernate as a persistence provider.

To use JPA in a Spring project, the EntityManager needs to be set up. 
This is the main part of the configuration – and it is done via a Spring factory bean – either the simpler LocalEntityManagerFactoryBean or the more flexible LocalContainerEntityManagerFactoryBean.

<!-- Simple implementation of the standard JDBC DataSource interface,
        configuring the plain old JDBC DriverManager via bean properties -->

<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
      <property name="driverClassName" value="com.mysql.cj.jdbc.Driver" />
      <property name="url" value="jdbc:mysql://localhost:3306/spring_jpa" />
      <property name="username" value="tutorialuser" />
      <property name="password" value="tutorialmy5ql" />
   </bean>


<!-- This produces a container-managed EntityManagerFactory;
         rather than application-managed EntityManagerFactory -->
<bean id="myEntityFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
      <property name="dataSource" ref="dataSource" />
      <property name="packagesToScan" value="org.baeldung.persistence.model" />
      <property name="jpaVendorAdapter">
         <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" />
      </property>
      <property name="jpaProperties">
         <props>
            <prop key="hibernate.hbm2ddl.auto">create-drop</prop>
            <prop key="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</prop>
         </props>
      </property>
   </bean>


<!-- This transaction manager is appropriate for applications that use a single JPA EntityManagerFactory for transactional data access.
        JTA (usually through JtaTransactionManager) is necessary for accessing multiple transactional resources within the same transaction. -->
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
      <property name="entityManagerFactory" ref="myEntityFactory" />
   </bean>
   


<!-- responsible for registering the necessary Spring components that power annotation-driven transaction management;
        such as when @Transactional methods are invoked -->

<tx:annotation-driven />  


entityManagerFactoryBean : LocalEntityManagerFactoryBean produces an application-managed EntityManagerFactory, whereas LocalContainerEntityManagerFactoryBean produces a container-managed EntityManagerFactory. The latter supports links to an existing JDBC DataSource and supports both local and global transactions.

JpaTransactionManager : This transaction manager is appropriate for applications that use a single JPA EntityManagerFactory for transactional data access. JTA (usually through JtaTransactionManager) is necessary for accessing multiple transactional resources within the same transaction. Note that you need to configure your JPA provider accordingly in order to make it participate in JTA transactions. Of course, JtaTransactionManager does require a full JTA-supporting application server, rather than a vanilla servlet engine like Tomcat.


tx:annotation-driven : enables the configuration of transactional behavior based on annotations, e.g. @Transactional. The @EnableTransactionManagement annotation provides equivalent support if you are using Java-based configuration. To do this, simply add the annotation to a @Configuration class.

Next come the DAO classes, which use the entity manager to perform CRUD operations on hibernate entities and which mark transaction-supporting methods with the @Transactional annotation. In our case, we have applied @Transactional at class level, making all public methods transactional.


@Repository
@Transactional
public class EmployeeDAOImpl implements EmployeeDAO
{
    @PersistenceContext
    private EntityManager manager;



public List<EmployeeEntity> getAllEmployees()
    {
        List<EmployeeEntity> employees = manager.createQuery("Select a From EmployeeEntity a", EmployeeEntity.class).getResultList();
        return employees;
    }
..

}


@PersistenceContext expresses a dependency on a container-managed EntityManager and its associated persistence context. @Repository is usually applied on DAO layer. 


JPA in Spring Boot

The Spring Boot project is intended to make creating Spring applications much faster and easier. This is done with the use of starters and auto-configuration for various Spring functionalities, JPA among them.

To enable JPA in a Spring Boot application, we need the spring-boot-starter and spring-boot-starter-data-jpa dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
    <version>1.5.3.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <version>1.5.3.RELEASE</version>
</dependency>

The spring-boot-starter contains the necessary auto-configuration for Spring JPA, and the spring-boot-starter-data-jpa project references all the necessary dependencies such as hibernate-entitymanager.

Spring Boot configures Hibernate as the default JPA provider, so it’s no longer necessary to define the entityManagerFactory bean unless we want to customize it.

Spring Boot can also auto-configure the dataSource bean, depending on the database used. In the case of in-memory databases of type H2, HSQLDB and Apache Derby, Boot automatically configures the DataSource if the corresponding database dependency is present on the classpath.

For example, if we want to use an in-memory H2 database in a Spring Boot JPA application, we only need to add the h2 dependency to the pom.xml file:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.195</version>
</dependency>


This way, we don’t need to define the dataSource bean, but we can do so if we want to customize it.

If we want to use JPA with MySQL database, then we need the mysql-connector-java dependency, as well as to define the DataSource configuration.

This can be done in a @Configuration class, OR by using standard Spring Boot properties.
The Java configuration looks the same as it does in a standard Spring project:

@Bean
public DataSource dataSource() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource();

    dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
    dataSource.setUrl("jdbc:mysql://localhost:3306/myDb?createDatabaseIfNotExist=true");
    dataSource.setUsername("mysqluser");
    dataSource.setPassword("mysqlpass");

    return dataSource;
}

                                                                         OR


To configure the data source using a properties file, we have to set properties prefixed with spring.datasource:


spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/myDb?createDatabaseIfNotExist=true
spring.datasource.username=mysqluser
spring.datasource.password=mysqlpass

Spring Boot will automatically configure a data source based on these properties.

static vs volatile

A static variable is a class variable, not an instance variable: it is declared and initialized once and shared by all instances of the class.

The volatile keyword in Java tells the Java compiler and threads not to cache the value of this variable and always to read it from main memory.

So if you want to share a variable whose reads and writes are atomic by implementation – e.g. an int or a boolean – you can declare it as volatile.





So, declaring a static variable in Java means that there will be only one copy, no matter how many objects of the class are created. The variable will be accessible even with no objects created at all. However, threads may have locally cached values of it.

=> If it's used in a single-threaded environment, the single copy of the variable is updated and there is no harm in accessing it, as there is only one thread.

=> If a static variable is used in a multi-threaded environment, there can be issues if one expects a consistent result: each thread may have its own cached copy, so an increment or decrement of the static variable in one thread may not be visible in another thread.



https://stackoverflow.com/questions/2423622/volatile-vs-static-in-java


Here are few differences between volatile and synchronized keyword in Java.

1. The volatile keyword in Java is a field modifier while synchronized modifies code blocks and methods.

2. Synchronized obtains and releases a lock on a monitor; the Java volatile keyword doesn't require that.

3. Threads in Java can be blocked waiting for a monitor in the case of synchronized; that is not the case with the volatile keyword.

4. A synchronized method affects performance more than the volatile keyword does.

5. The volatile keyword in Java only synchronizes the value of one variable between thread memory and "main" memory, while synchronized synchronizes the values of all variables between thread memory and "main" memory, and locks and releases a monitor to boot. For this reason, the synchronized keyword in Java is likely to have more overhead than volatile.

6. You cannot synchronize on a null object, but your volatile variable in Java can be null.

7. From Java 5, writing into a volatile field has the same memory effect as a monitor release, and reading from a volatile field has the same memory effect as a monitor acquire.


In short, the volatile keyword in Java is not a replacement for a synchronized block or method, but in some situations it is very handy and can save the performance overhead that comes with synchronization in Java.

Thursday, August 31, 2017

Encapsulation vs Abstraction

Encapsulation:-- Information (data) hiding.
Encapsulation means hiding the code and data in a single unit to protect the data from the outside world. A class is the best example of encapsulation. As the name suggests, encapsulation is "hiding/enclosing something".


Abstraction:-- Implementation hiding.
Abstraction refers to showing only the necessary details to the intended user. As the name suggests, abstraction is the "abstract form of anything".

class Foo{
    private int a, b;
    public Foo(int a, int b){ this.a = a; this.b = b; }
    public int add(){
         return a + b;
    }
}

The internal representation of any object of the Foo class is hidden outside the class. --> Encapsulation.
Any accessible member (data/method) of a Foo object is restricted and can be accessed only through that object.

Implementation of method add() is hidden. --> Abstraction.

Implementation Difference Between Encapsulation and Abstraction

Abstraction is implemented using interfaces and abstract classes, while encapsulation is implemented using the private and protected access modifiers.

Real world example:-
The complex logic lives on the circuit board, which is encapsulated behind a touchpad, and a nice interface (buttons) is provided to abstract it out for the user.

So abstraction is the more general term, i.e. encapsulation is a subset of abstraction.
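The split can also be shown in code. A hedged sketch (the names are invented for illustration): the interface carries the abstraction, the private field the encapsulation.

```java
// Abstraction: callers see only WHAT a Calculator offers.
interface Calculator {
    int add(int a, int b);
}

// Encapsulation: the implementation hides HOW it works and its state.
class SimpleCalculator implements Calculator {
    private int callCount;   // internal bookkeeping, invisible to callers

    @Override
    public int add(int a, int b) {
        callCount++;         // callers cannot see or corrupt this
        return a + b;
    }
}
```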

Wednesday, August 30, 2017

Polymorphism | Binding - OverLoading vs OverRidding

Polymorphism is the ability of data to be processed in more than one form. It allows the same task to be performed in various ways. It covers method overloading and method overriding, i.e., writing a method once and performing a number of tasks using the same method name.

Connecting a method call to the method body is known as binding.

Static binding/Early binding/Compile time binding :-
When the type of the object is determined at compile time (by the compiler), it is known as static binding.

class Dog{
    private void eat(){ System.out.println("dog is eating..."); }

    public static void main(String args[]){
        Dog d = new Dog();
        d.eat();
    }
}

OR

The binding which can be resolved at compile time by the compiler is known as static or early binding. Binding of all static, private and final methods is done at compile time.

Static binding is better performance-wise (no extra overhead is required). The compiler knows that such methods cannot be overridden and will always be accessed via an object of the local class. Hence the compiler has no difficulty determining the class of the object (the local class, for sure). That is the reason the binding for such methods is static.

public class NewClass
{
    public static class superclass
    {
        static void print()
        {
            System.out.println("print in superclass.");
        }
    }
    public static class subclass extends superclass
    {
        static void print()
        {
            System.out.println("print in subclass.");
        }
    }

    public static void main(String[] args)
    {
        superclass A = new superclass();
        superclass B = new subclass();
        A.print();
        B.print();
    }
}

Output:-
print in superclass.
print in superclass.

Since the print method of superclass is static, the compiler knows that it will not be overridden in subclasses, and hence the compiler knows at compile time which print method to call, so there is no ambiguity.


Dynamic binding/Late binding/Run time binding :- When the type of the object is determined at run time, it is known as dynamic binding.

class Animal{
    void eat(){ System.out.println("animal is eating..."); }
}

class Dog extends Animal{
    void eat(){ System.out.println("dog is eating..."); }

    public static void main(String args[]){
        Animal a = new Dog();
        a.eat();   // prints "dog is eating..." (resolved at run time)
    }
}

In dynamic binding the compiler does not decide which method is called. Overriding is a perfect example of dynamic binding: both the parent and the child class have the same method. Let's see an example.

public class NewClass
{
    public static class superclass
    {
        void print()
        {
            System.out.println("print in superclass.");
        }
    }

    public static class subclass extends superclass
    {
        @Override
        void print()
        {
            System.out.println("print in subclass.");
        }
    }

    public static void main(String[] args)
    {
        superclass A = new superclass();
        superclass B = new subclass();
        A.print();
        B.print();
    }
}

Output:-
print in superclass.
print in subclass.

- The methods are not static in this code.
- During compilation, the compiler has no idea which print has to be called, since the compiler goes only by the type of the reference variable, not the type of the object. The binding is therefore delayed to runtime, and the corresponding version of print is called based on the type of the object.


OverLoading vs OverRidding - Both are concepts of polymorphism.
- Method overloading is a form of static binding; method overriding is a form of dynamic binding.
- Overloading is applied within a single class, but overriding requires an inherited class.
- Method overloading is always specific to the method signature: the number of parameters, the types of the parameters and the sequence of the parameters.
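For completeness, a small overloading sketch to contrast with the overriding example above (the class name is invented): the compiler picks the method purely from the argument types, at compile time.

```java
// Method overloading: same name, different signatures.
// Resolution happens at compile time (static binding).
class Printer {
    String print(int value)    { return "int: " + value; }
    String print(String value) { return "string: " + value; }
    String print(int a, int b) { return "two ints: " + (a + b); }
}
```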

Thursday, August 24, 2017

SQL

DDL - Data Definition Language: statements used to define the database structure or schema. Some examples:

    CREATE - to create objects in the database
    ALTER - alters the structure of the database
    DROP - delete objects from the database
    TRUNCATE - remove all records from a table; the space allocated for the records is also removed
    COMMENT - add comments to the data dictionary
    RENAME - rename an object

DML - Data Manipulation Language: statements used for managing data within schema objects. Some examples:

    SELECT - retrieve data from a database
    INSERT - insert data into a table
    UPDATE - updates existing data within a table
    DELETE - deletes records from a table; the space for the records remains
    MERGE - UPSERT operation (insert or update)
    CALL - call a PL/SQL or Java subprogram
    EXPLAIN PLAN - explain access path to the data
    LOCK TABLE - controls concurrency

DCL - Data Control Language.
Some examples:

    GRANT - gives users access privileges to the database
    REVOKE - withdraw access privileges given with the GRANT command

TCL - Transaction Control:
statements used to manage the changes made by DML statements. It allows statements to be grouped together into logical transactions.

    COMMIT - save work done
    SAVEPOINT - identify a point in a transaction to which you can later roll back
    ROLLBACK - undo the modifications made since the last COMMIT
    SET TRANSACTION - Change transaction options like isolation level and what rollback segment to use
    SET ROLE - set the current active roles

DML statements are not auto-committed, i.e. you can roll back the operations, but DDL statements are auto-committed.
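A small sketch of that difference (the table and column names are invented): the DML changes inside the transaction can still be rolled back, while the DDL statement commits implicitly.

```sql
-- DML: can be rolled back until committed
INSERT INTO employees (id, name) VALUES (1, 'Megha');
SAVEPOINT before_update;
UPDATE employees SET name = 'M. Dureja' WHERE id = 1;
ROLLBACK TO before_update;   -- undoes only the UPDATE
COMMIT;                      -- makes the INSERT permanent

-- DDL: auto-commits; a later ROLLBACK cannot undo it
TRUNCATE TABLE employees;
```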

 

ActiveMQ

ActiveMQ is an implementation of JMS. It is a message broker designed for sending messages between two applications, or between two components inside one application.

Features:-
Enterprise integration : allows applications built with different languages and on different operating systems to integrate with each other.
Location transparency : client applications don't need to know where the service applications are located.
Reliable communication : the producers and consumers of messages don't have to be available at the same time.
Scaling : can scale horizontally by adding more services to handle the messages if too many are arriving.
Asynchronous communication : a client can fire a message and continue other processing instead of blocking until the service has sent a response.
Reduced coupling : the assumptions made by clients and services are greatly reduced as a result of the previous five benefits. A service can change details about itself, including its location, protocol, and availability, without affecting or disrupting the client.

This MOM(message-oriented middleware) has two models which are the point-to-point/queue model and publish-subscriber/topic model.

Difference between the two is the number of recipients.

In the queue model, each message has only one receiver (consumer).
In the topic model, a message is disseminated/spread to any number of subscribers.

In the queue model you do not have to worry about timing: the sender can send messages whenever it wants, and the receiver likewise has the liberty of reading them whenever it wants.
In the topic model, by contrast, a subscriber has to be active while the publisher publishes in order to receive the messages; otherwise the message is missed (unless the subscription is durable).
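The distinction can be sketched in plain Java (no broker involved; these helpers are invented for illustration and are not the ActiveMQ API): a queue hands each message to exactly one consumer, while a topic fans every message out to all subscribers.

```java
import java.util.ArrayList;
import java.util.List;

// Queue vs topic dispatch, reduced to its essence.
class QueueVsTopic {
    // Point-to-point: each message reaches exactly ONE consumer
    // (round-robin here for simplicity).
    static List<String> dispatchQueue(List<String> messages, int consumers) {
        List<String> deliveries = new ArrayList<>();
        for (int i = 0; i < messages.size(); i++) {
            deliveries.add("consumer-" + (i % consumers) + " got " + messages.get(i));
        }
        return deliveries;
    }

    // Publish-subscribe: EVERY subscriber receives EVERY message.
    static List<String> dispatchTopic(List<String> messages, int subscribers) {
        List<String> deliveries = new ArrayList<>();
        for (String m : messages) {
            for (int s = 0; s < subscribers; s++) {
                deliveries.add("subscriber-" + s + " got " + m);
            }
        }
        return deliveries;
    }
}
```

With 2 messages and 2 consumers, the queue produces 2 deliveries; with 2 messages and 2 subscribers, the topic produces 4.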

 

Hibernate

Hibernate is an ORM tool which offers many advantages over plain JDBC.

Database independence
You can work with any database you want, like Oracle, MySQL, DB2, SQL Server, etc. Using Hibernate, you don't have to worry about writing database-specific queries and syntax. It provides HQL (Hibernate Query Language), which is compatible with any database server. You just write queries in HQL; Hibernate translates the HQL into the underlying database's SQL dialect and executes it.

OOP concepts
In ORM, you map a database table to a Java object called an "entity". Once mapped, you get the advantages of OOP concepts like inheritance, encapsulation and associations that are not available in the JDBC API.

Caching mechanism - which improves performance
The 1st-level and 2nd-level caches provided by Hibernate mean you don't need to hit the database for similar queries; results can be cached and served from memory to improve performance.

- First-level is a mandatory Session cache.
- Second-level is an optional cache. Hibernate has a lot of cache providers for this level, the most popular are: EHCache, OSCache, warmCache, JBoss Cache, etc.
- Query-level is an optional cache for query result sets.

Lazy loading
Supports lazy loading (note that careless lazy loading is the source of the notorious N+1 select problem in Hibernate). Take an example: a parent class has n child objects. When you want information from only one child, there is no point in loading all n children. Lazy loading means loading only what you need.

- Supports JPA annotations, means the code is portable to other ORM frameworks.
- Connection pooling
- Bulk updates
- Built-in support for explicit locking and optimistic concurrency control.
- Allows you to transform the ResultSet into Entities or DTO (Data Transfer Object).
- Hibernate translates checked SQLExceptions into unchecked exceptions, so you are not forced to handle them. Hibernate also allows schema management (for example creating tables); JDBC can only work with existing DB tables.
- Hibernate has capability to generate primary keys automatically while we are storing the records into database.
- Getting pagination in hibernate is quite simple.
- Hibernate supports relationships like One-To-Many, One-To-One, Many-To-Many, Many-To-One

In terms of disadvantages of other persistence layers:-
Boilerplate code issue
For the DAO layer you need to write the same code in several files within the same application, but Spring Data JPA has eliminated this by providing JpaRepository.

Hibernate Inheritance Mapping
Compared to JDBC, one main advantage of Hibernate is inheritance mapping. If we have base and derived classes, then when we save a derived (sub) class object, the base class state is also stored in the database.

Hibernate supports 3 types of Inheritance Mappings:

    Table per class hierarchy(Single Table Strategy)
    Table per sub-class hierarchy(Join Strategy)
    Table per concrete class hierarchy




- In the single table strategy, we map the whole hierarchy into a single table; we use one extra discriminator column, i.e. TYPE.
- Performance wise better than all strategies because no joins or sub-selects need to be performed.
- Tables are not normalized.

@Inheritance(strategy=InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(
  name="TYPE",
  discriminatorType=DiscriminatorType.STRING
  )



- In the joined strategy, we have a table for each class in the hierarchy, and the subclass tables are related to the parent class table by a primary key / foreign key relationship.
- It's highly normalized, but performance is not as good.

@Inheritance(strategy=InheritanceType.JOINED)

- In the table-per-concrete-class strategy, we have one table for each concrete class in the hierarchy, but duplicated columns are added in the subclass tables.
- Slightly more normalized than the single table strategy.
- To support polymorphism, the container either has to make multiple trips to the database or use an SQL UNION kind of feature.

@Inheritance(strategy=InheritanceType.TABLE_PER_CLASS)



Association Mappings between Hibernate Entities

one-to-one association

1. Using foreign key association:-
a foreign key column is created in the owner entity.

If we make Employee the owner, then an extra column "ACCOUNT_ID" is created in the Employee table. This column stores the foreign key for the Account table.

 

To make such an association, refer to the AccountEntity class in the EmployeeEntity owner class as follows:
//EmployeeEntity
@OneToOne
@JoinColumn(name="ACCOUNT_ID", referencedColumnName="ID")
private AccountEntity account;
    
In a bidirectional relationship, one of the sides (and only one) has to be the owner: the owner is responsible for the association column(s) update. To declare a side as not responsible for the relationship, the attribute mappedBy is used. mappedBy refers to the property name of the association on the owner side.
//AccountEntity
@OneToOne(mappedBy="account")
private EmployeeEntity employee;

The "mappedBy" attribute above declares that this side depends on the owner entity for the mapping.
 
one-to-many association
Eg:- In a company an employee can register multiple bank accounts, but one bank account is associated with one and only one employee.

 


many-to-many association


What is the difference between FetchType.LAZY and FetchType.EAGER in Java persistence?
EAGER = fetch the data immediately: the associations are fetched fully at the time their parent is fetched. So if Course has a List<Student>, all the students are fetched from the database at the time the Course is fetched.
LAZY = fetch the data when needed, i.e. on demand: the contents of the List are fetched only when you try to access them, for example by calling course.getStudents().iterator(). Calling any access method on the List initiates a call to the database to retrieve the elements. This is implemented by creating a proxy around the List (or Set): for lazy collections the concrete types are not ArrayList and HashSet, but PersistentSet and PersistentList (or PersistentBag).
One big difference is that the EAGER fetch strategy allows you to use the fetched object without an open session. Why? With the lazy loading strategy, a lazily-marked object cannot retrieve its data once the session is disconnected (after the session.close() statement); all of that happens through the Hibernate proxy. The eager strategy keeps the data available after closing the session.
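The proxy idea behind FetchType.LAZY can be sketched in a few lines of plain Java. This shows the pattern only (the Lazy class is invented; Hibernate's real proxies are PersistentList and friends): nothing is loaded until the first access.

```java
import java.util.function.Supplier;

// A minimal lazy-loading holder: the supplier stands in for a DB query.
class Lazy<T> {
    private final Supplier<T> loader;
    private T value;
    private boolean loaded;

    Lazy(Supplier<T> loader) { this.loader = loader; }

    // First access triggers the "query"; later accesses reuse the value.
    T get() {
        if (!loaded) {
            value = loader.get();
            loaded = true;
        }
        return value;
    }

    boolean isLoaded() { return loaded; }
}
```

Exactly as with Hibernate, if the "session" backing the supplier is already closed by the time get() runs, the load fails; that is why a lazily fetched object cannot be used after session.close().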
 
 
 
 

Wednesday, August 23, 2017

Apache Solr

Apache Solr is an open source search engine built upon a Java search library called Lucene. It supports a REST-like API for performing various operations like update, query, etc.

Creating fields for indexing a document-
This can be achieved in 2 ways-
    Using the Solr UI (which modifies the managed-schema.xml by adding fields to it)
    Adding a new schema.xml

Indexing Document-
Now that the fields have been added using either of the two methods mentioned above, we will index the document. We will create a new test.xml file which contains the person data to be indexed.

Querying the Indexed Document-
Now the data has been indexed.

Apache Solr vs Elasticsearch
Both Solr and Elasticsearch are popular open source search engines built on top of Lucene. Both have vibrant communities and are well documented.

The difference is in the way each builds a wrapper and implements features on top of Lucene.

Logging Framework


Monday, August 21, 2017

Polymorphism

Polymorphism
- is a concept where one object can take many forms. The most common use of polymorphism in OOP occurs when a parent class reference is used to refer to a child class object.
- static
- dynamic

Inheritance
- is a mechanism in which one object acquires all the properties and behaviors of a parent object. The idea behind inheritance in Java is that you can create new classes built upon existing classes, so properties needed in multiple classes need not be written multiple times.
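Both ideas fit in one small sketch (the classes are invented for illustration): Circle inherits from Shape, and a Shape reference polymorphically dispatches to the child's area().

```java
// Parent class: common behavior lives here once.
class Shape {
    double area() { return 0.0; }
}

// Child class: inherits from Shape and overrides area().
class Circle extends Shape {
    private final double radius;   // encapsulated state
    Circle(double radius) { this.radius = radius; }

    @Override
    double area() { return Math.PI * radius * radius; }
}
```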

Saturday, August 12, 2017

Microservice - independently deployable applications

In monolithic software, we mainly use a three-tier architecture:

    Presentation layer
    Business layer
    Data access layer

Say a traditional web application client (a browser) posts a request. The business tier executes the business logic, the database collects/stores application specific persistence data, and the UI shows the data to the user.

However, there are several problems with this type of system. All the code (presentation, business layer, and data access layer) is maintained within the same code base. Although we logically divide the services, like a JMS service and a data-access service, they live in the same code base and run under a single process context.



Microservice architecture / microservices tells us to break a product or project into independent services, so that each can be deployed and managed on its own and doesn't depend on other services.

On what basis do I break down my project into independent services?

If the project has Inventory, Order, Billing, Shipping, and UI shopping cart modules, we can break each one down into an independently deployable module. Each has its own maintenance, monitoring, application servers, and database. So with microservices there is no centralized database; each module has its own database.


Benefits:-
Each module is independent, so you can choose the programming language which best fits that module.
Each module has its own database, so you are free to choose NoSQL or relational. The application is therefore polyglot in nature.

So, "The microservice architectural style is an approach to developing a single large application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery."

Challenges:-
- hard to debug and trace issues
- greater need for end-to-end testing

https://www.dineshonjava.com/microservices-with-spring-boot/
 

Spring boot


Spring boot - simplifies spring application development

It is not a framework but an approach to developing Spring-based applications with very little configuration.

Features:-
Auto-Configuration:-
- works by analyzing the classpath and can automatically configure common configuration scenarios. If you forget a dependency, Spring Boot can't configure it.

=> Spring Boot automatically configures required classes depending on the libraries on its classpath.
– Sets up a JPA Entity Manager Factory if a JPA implementation is on the classpath.
– Creates a default Spring MVC setup (configures DispatcherServlet & ContextLoaderListener) if Spring MVC is on the classpath.


Starter Dependencies:-
- It offers help with project dependency management. A starter brings in the required dependencies as well as some predefined configuration bits.

=> In simple words, if you are developing a project that uses Spring Batch for batch processing, you just have to include spring-boot-starter-batch that will import all the required dependencies for the Spring Batch application. This reduces the burden of searching and configuring all the dependencies required for a framework.

CLI:- 
- Lets you write complete applications with just application code, with no need for a traditional project build.

Actuator:-
- Gives you insight into what's going on inside a running Spring Boot application, exposing different types of information about the running application: health, metrics, info, dump, env, etc.

So, Spring Boot can take the burden of configuration off your hands.


Q:- Suppose your application wants to interact with a DB?
 - Put the Spring Data libraries on the classpath
 - Define properties in the application.properties file on the application's classpath, e.g.:
spring.datasource.url=jdbc:mysql://localhost/test
spring.datasource.username=admin
spring.datasource.password=secret
Spring Boot then automatically sets up the connection to the DB along with a DataSource bean.

Q:- How to reload changes on Spring Boot without having to restart the server?
Include the following Maven dependency in the application.
<dependency>
 <groupId>org.springframework</groupId>
 <artifactId>springloaded</artifactId>
 <version>1.2.6.RELEASE</version>
</dependency>


- Spring Boot includes support for embedded Tomcat, Jetty, and Undertow servers. By default the embedded server listens for HTTP requests on port 8080.

- By adding a logback.xml file to the application, we can override the default logging configuration provided by Spring Boot. Place this file on the classpath (src/main/resources) of the application for Spring Boot to pick up the custom configuration.

- Automatic restart
Applications that use spring-boot-devtools will automatically restart whenever files on the classpath change. This can be a useful feature when working in an IDE, as it gives a very fast feedback loop for code changes. By default, any entry on the classpath that points to a folder is monitored for changes.

<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<optional>true</optional>
</dependency>

This is achieved using the DevTools module: with this dependency, whenever you save a change the embedded Tomcat restarts. One of the key challenges for Java developers has been auto-deploying file changes and restarting the server; DevTools does exactly that, eliminating the need to manually deploy changes every time. Spring Boot didn't have this feature in its first release; it was one of the most requested features. The module is disabled in a production environment.

- Configuration file
The application.properties file is very important: it is where we override all the default configurations. Normally we keep this file under the resources folder of the project.

- What if I want to use the Jetty server instead of Tomcat?
spring-boot-starter-web pulls in spring-webmvc, jackson-json, validation-api and spring-boot-starter-tomcat automatically, and when we run the main() method it starts Tomcat as an embedded container, so we don't have to deploy our application on any externally installed Tomcat server.

Simple: exclude spring-boot-starter-tomcat from spring-boot-starter-web and include spring-boot-starter-jetty.


The magic behind Spring Boot's auto-configuration:-

SpringBoot provides various AutoConfiguration classes in spring-boot-autoconfigure-{version}.jar which are responsible for registering various components/beans.

The @EnableAutoConfiguration annotation enables auto-configuration of the Spring ApplicationContext by scanning the classpath components and registering the beans that match various conditions, using the @Conditional feature.
 
For example, consider the org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration class.



