2FA: Two Factor Authentication


Two-factor authentication (also known as 2FA or 2-Step Verification) is a technology that confirms a user’s claimed identity by combining two different authentication factors.

Factors include:

  • Something the user knows (Knowledge)

Ex: user IDs, passwords, ATM PINs, security images, etc.

  • Something the user has (Possession)

Ex: ATM cards, mobile devices, RFID tokens, etc.

  • Something the user is (Inherence)

Ex: fingerprints, typing patterns, etc.

Two-factor authentication uses a combination of any two of these three factors.

Why 2FA?

  • Credential-based authentication is not powerful enough to protect against identity theft
  • Because a password is easily lost or forgotten, many people write passwords down or choose weak ones, exposing them to hackers
  • Two-factor authentication is one of the best ways to protect against remote attacks such as phishing, credential exploitation and other account takeover attempts
  • By choosing two different channels of authentication, you can protect user logins from remote attacks that may exploit stolen credentials
  • Without the physical device, a remote attacker cannot pretend to be someone he or she is not
  • With technological advances, 2FA is easy to implement and cost effective

2FA can be implemented without any extra hardware cost to the provider.

Different Approaches

The most popular way to enable 2FA is to add something you have, typically a piece of hardware or a software application on a smartphone, carried by the user at all times, that generates a random one-time passcode (OTP).

Approaches Include:

  • Hardware Devices like RFIDs, USB Connectors etc.
  • OTPs delivered through SMS
  • In House Smart Phone App to send Push Notifications
  • Time Based – OTP (TOTP) through open source Smart Phones Apps

Pros & Cons

Hardware devices (RFID tokens, USB connectors, etc.)

Pros:
  • Many service providers available
  • Does not require a smartphone

Cons:
  • User has to carry the device at all times
  • Cost associated with distribution and maintenance
  • A few incidents of hacks in the past

OTPs delivered through SMS

Pros:
  • Many SMS service providers available
  • Does not require a smartphone
  • No separate hardware device to carry

Cons:
  • Cost of the SMS service
  • OTP delivery depends on the service provider
  • SMS text messages are insecure and can be intercepted

In-house smartphone app sending push notifications

Pros:
  • In-house mobile app with no dependency on other service providers
  • Secure and reliable

Cons:
  • Requires a smartphone
  • Mobile app development and maintenance cost

Time-based OTP (TOTP) through open-source smartphone apps

Pros:
  • TOTPs are generated independently in the app, without any interaction with the web application
  • Because they change constantly, dynamically generated OTPs are safer than fixed (static) login credentials
  • Easy to implement in any web application without extra hardware or software

Cons:
  • Requires a smartphone
  • TOTPs are time based, so the web application and mobile app clocks must not differ by more than the time step (typically 30 seconds)


The Time-based One-time Password algorithm (TOTP) computes a one-time password from a shared secret key and the current time. It has been adopted as Internet Engineering Task Force standard RFC 6238, is the cornerstone of the Initiative for Open Authentication (OATH), and is used in a number of two-factor authentication systems.
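The TOTP computation described in RFC 6238 can be sketched in a few lines of Java (HMAC-SHA1 with a 30-second time step; a minimal sketch, not a production implementation):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

public class Totp {
    // Generate a 6-digit TOTP per RFC 6238 (HMAC-SHA1, 30-second step)
    public static String generate(byte[] secret, long epochSeconds) throws Exception {
        long counter = epochSeconds / 30;                 // 30-second time step
        byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(msg);

        int offset = hash[hash.length - 1] & 0x0F;        // dynamic truncation (RFC 4226)
        int binary = ((hash[offset] & 0x7F) << 24)
                   | ((hash[offset + 1] & 0xFF) << 16)
                   | ((hash[offset + 2] & 0xFF) << 8)
                   |  (hash[offset + 3] & 0xFF);
        return String.format("%06d", binary % 1_000_000);
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "12345678901234567890".getBytes(); // RFC 6238 test secret
        System.out.println(generate(secret, 59L));         // prints "287082" (RFC test vector)
    }
}
```

The server and the authenticator app run the same computation independently; as long as their clocks agree within the time step, the codes match.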


  • With continued improvements in mobile technology, using smartphones as a second authentication factor is becoming more trustworthy.
  • Many open-source libraries are available for implementing TOTP-based 2FA in web applications
  • Users can generate TOTPs on the fly with free apps such as Google Authenticator and Authy
  • 2FA can easily be extended to multi-factor authentication using other mobile signals such as location, IP address, voice recognition etc.


  • 2 Factor Authentication:


  • TOTP



  • Implementation of TOTP in Java EE based applications





On-the-fly class reloading in Java

Problem of redeploying during development

During development of any web-based Java project, the biggest headache for developers is redeploying the code base to the server after every change.
When a developer frequently makes small changes to an application with a long startup time, the development process slows down; and if the project uses a remote machine for deployment, redeployment takes considerable time.
Several statistical studies have found that developers waste, on average, 15 minutes out of every hour redeploying/publishing their changes, which is a quarter of the development effort. Minimizing this wasted time can therefore improve development productivity significantly.
This blog discusses some techniques and tools that show how this problem can be resolved or minimized to increase the productivity of the development team.
In particular, I will discuss technologies that let developers modify and compile a program and then resume it directly from where it was suspended, instead of stopping and restarting the server.

Normal process in development environment

The normal process a developer follows after changing Java files in the workspace is to build the project and then publish it, so the change is available to the server.


The build command compiles the Java source code and generates class files. The Java builder is notified of changes to resources in a workspace and can automatically compile Java code.


Publishing involves copying files (projects, resource files, and server configurations) to the correct location for the server to find and use them.
Every time a developer changes code in the workspace, both of these steps must be performed; the build usually does not take much time, but publishing can be time consuming.

Understanding some key concepts

Before exploring different ways to solve the problem, let's discuss some key concepts that make it easier to understand how different tools and techniques handle it.

Java class loader

Class loaders are a fundamental module of the Java language. A class loader is a part of the Java virtual machine (JVM) that loads classes into memory; a class loader is responsible for finding and loading class files at run time.
Java class loaders do not have any standard mechanism to undeploy or unload a set of classes, nor can they load new versions of classes. In order to make updates to classes in a running virtual machine, the class loader that loaded the changed classes must be replaced with a new class loader.
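As a rough sketch of that replacement idea (class and directory names here are illustrative), a new version of a class can be picked up by discarding the old class loader and loading through a fresh one. Note the class directory must not also be on the parent's classpath, or delegation will keep returning the stale version:

```java
import java.nio.file.*;

// Hypothetical sketch: load class files from a directory so that a new
// loader instance picks up freshly compiled versions.
public class ReloadingLoader extends ClassLoader {
    private final Path classDir;

    public ReloadingLoader(Path classDir) {
        super(ReloadingLoader.class.getClassLoader());
        this.classDir = classDir;
    }

    // Public helper so callers can load through this loader directly
    public Class<?> load(String name) throws ClassNotFoundException {
        return findClass(name);
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        try {
            byte[] bytes = Files.readAllBytes(
                classDir.resolve(name.replace('.', '/') + ".class"));
            return defineClass(name, bytes, 0, bytes.length);
        } catch (Exception e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}
```

After each edit/compile cycle, constructing a new ReloadingLoader and re-resolving the class through it yields the updated version; the old loader, and with it the old class version, becomes eligible for garbage collection once no references remain.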

Web Module Class loader

Web module class loaders load the contents of the WEB-INF/classes and WEB-INF/lib directories. Web module class loaders are children of application class loaders, which in turn are children of the WebSphere extensions class loader, itself a child of the CLASSPATH Java class loader. Whenever a class needs to be loaded, the class loader usually delegates the request to its parent class loader. If none of the parent class loaders can find the class, the original class loader attempts to load it. Requests can only go to a parent class loader; they cannot go to a child class loader.
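A quick way to observe this delegation hierarchy (core classes report a null loader because the bootstrap loader is not itself a Java object):

```java
public class DelegationDemo {
    public static void main(String[] args) {
        // java.lang.String is loaded by the bootstrap loader, reported as null
        System.out.println(String.class.getClassLoader());
        // Application classes come from the system (application) class loader,
        // reached only after delegation up the parent chain fails to find them
        System.out.println(DelegationDemo.class.getClassLoader());
    }
}
```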


A JVM provides a run-time environment in which Java bytecode can be executed. The JVM loads classes and other resources into the classpath exactly once (unless running in debug mode) from the directories or JARs specified in the CLASSPATH environment variable, using a class loader. Once a resource has been loaded by a ClassLoader instance, it remains in memory until the ClassLoader is garbage collected.

Different approaches/technique/tools as solution

In this section I will discuss certain techniques and tools that can be used to minimize redeployment during development.


JRebel

JRebel is the most popular tool in this area. It lets Java developers instantly update code (add a new feature, fix a bug, etc.) and see those changes reflected in the application under development without restarting the application server.

How Jrebel works:

JRebel uses “Rebellion Technology” to instantly reload changes made to a class structure, making a full application redeploy unnecessary. JRebel uses class rewriting and JVM integration to version individual classes, and it integrates with application servers to redirect class/resource and web server lookups back to the workspace.

JRebel plug-in support for IDEs: JRebel provides plug-ins for the following IDEs:

IntelliJ IDEA
JRebel supports various app servers/containers
Oracle WebLogic
Google AppEngine
SAP NetWeaver
SpringSourceDM Server (Eclipse Virgo)
Oracle OC4J
Mulesoft tcat server

Installation of the plug-in and use:
A step-by-step installation guide is out of scope for this document, but the URLs below provide information about the installation process.

JRebel for Eclipse

JRebel for NetBeans

Dynamic Code Evolution VM

DCE is a technique that lets a programmer modify a Java application directly at runtime, without restarting it. In debugging mode this is a very interesting capability, because modifications can be tested immediately without restarting the whole application. This increases productivity, especially in large projects.
DCE is based on the Java HotSpot VM, which already provides the flexibility to swap method bodies at runtime. DCE extends this basic functionality and pushes it further, making it possible to add and remove methods and fields on classes. It is also possible to modify supertypes, add, remove, and use completely new classes, and so forth.

The following section describes some changes developers frequently make to existing classes and how the JVM responds to them.

Swapping Method Bodies:
Replacing the bytecodes of a Java method is the simplest possible change. No other bytecodes or type information data depend on the actual implementation of a method. Therefore, this change can be done in isolation from the rest of the system.

Adding or Removing Methods:
When changing the set of methods of a class, the virtual method table used for dynamic dispatch needs to be modified. Additionally, a change in a class can have an impact on the virtual method tables of its subclasses.
The virtual method table indexes of methods may change, invalidating compiled machine code. Machine code can also contain static links to existing methods that must be invalidated or recalculated.

Adding or Removing Fields:
The previous two kinds of change only affected the metadata of the VM. Here, the object instances also need to be modified according to the changes in their class or superclasses. The VM needs to convert the old version of an object to a new version that can have different fields and a different size. Similarly to virtual method table indexes, field offsets are used in various places in the interpreter and in the compiled machine code; they need to be correctly adjusted or invalidated.

How it works
The technique is implemented as a modification to the Java HotSpot VM, with an interpreter and two just-in-time compilers (one client compiler and one server compiler). The implementation is based on the existing mechanism for swapping method bodies and extends it to allow arbitrary changes to loaded types. The approach focuses on implementing code evolution in an existing VM while keeping the necessary changes small. First, the algorithm finds all affected classes and sorts them based on their subtype relationships. Then, the new classes are loaded and added to the type universe of the VM, forming a side universe. A modified full garbage collection performs the actual version change. After invalidating state that is no longer consistent with the new class versions, the VM continues executing the program.

The figure below gives an overview of the modifications to the VM that are described in the following subsections.

Finding affected types
When applying changes more advanced than just swapping method bodies, classes can be indirectly affected by the redefinition step. A field added to a class is implicitly also added to all its subclasses, and adding a method to a class can affect the virtual method tables of its subclasses.
Therefore, the algorithm needs to extend the set of redefined classes by all their subtypes.

Building the side universe
This technique keeps both the old and the new classes in the system. This is necessary to be able to keep executing old code that depends on properties of the old class. It would also open the possibility to keep old and new instances in parallel on the heap. Additionally, it is the only way to solve the problem of cyclic dependencies between code evolution changes.

Swapping pointers
When updating a class C to C', it must be ensured that all instances of class C are updated to be instances of class C'. An object instance on the heap contains a reference to its class. The Java HotSpot VM does not keep track of the instances of a given class, therefore a heap traversal is necessary to find all existing instances.
Additionally, other parts of the system (e.g., native code) can hold references to the old class that need to be updated too.

Updating instances
For updating instances, a strategy is required to initialize the fields of the new instance. This technique uses a simple algorithm that matches fields if their name and type are the same. For the matching fields, the values are copied from the old instance to the new instance. All other fields are initialized with 0, null, or false.
The information is calculated once per class and temporarily attached to the class meta object. The modified garbage collector reads the information and performs the memory copy or clear operations for each instance.
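The field-matching rule above can be illustrated with plain reflection (this is only an illustration of the rule, not the VM internals; the V1/V2 classes are invented for the demo):

```java
import java.lang.reflect.Field;

// Mimic DCE's instance-update rule: copy a field's value when name and type
// match in old and new class versions; leave new fields at 0/null/false.
public class FieldMatcher {
    static class V1 { int count = 7; String name = "a"; }
    static class V2 { int count; String name; boolean flag; } // 'flag' is new

    public static <T> T migrate(Object old, T fresh) throws IllegalAccessException {
        for (Field nf : fresh.getClass().getDeclaredFields()) {
            try {
                Field of = old.getClass().getDeclaredField(nf.getName());
                if (of.getType().equals(nf.getType())) {
                    of.setAccessible(true);
                    nf.setAccessible(true);
                    nf.set(fresh, of.get(old)); // matching name and type: copy value
                }
            } catch (NoSuchFieldException e) {
                // field does not exist in the old class version: keep default
            }
        }
        return fresh;
    }

    public static void main(String[] args) throws Exception {
        V2 updated = migrate(new V1(), new V2());
        System.out.println(updated.count + " " + updated.name + " " + updated.flag);
        // prints "7 a false"
    }
}
```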

A step-by-step installation guide is out of scope for this document, but the URL below provides information about the installation process.

DCE VM installation


DCE and tools like JRebel provide the same functionality to developers, but they differ in how they provide it. While JRebel hooks into an existing VM, DCE is itself a VM, which makes it possible to perform many modifications at runtime while debugging an application.
At the end of the day, both increase developer productivity by saving a lot of time.

Java profiling with Eclipse TPTP

What is profiling?

Profiling is dynamic application analysis, i.e., analysing the behaviour of an application during its execution.

Some of the important information a profiler captures:

  • Memory usage and leaks in a Java application
  • Method call durations
  • Display of this information in graphs, reports, tree views, etc.

In this blog I will discuss Eclipse TPTP, one of the most popular tools for Java profiling. The main objective is to show how to configure Eclipse TPTP locally so you can profile during development.

Profiling is a preventive measure that helps ensure better performance of your code in the production or live environment.

What is Eclipse TPTP?

The Eclipse Test and Performance Tools Platform (TPTP) is an open source platform that helps developers measure the performance of their code.

How to configure?

  1. Download the package from https://eclipse.org/tptp/home/downloads. This package contains Eclipse along with its plug-ins and also the profiler. If you need only the profiler, do the following:

    a. Unzip the package

    b. Copy the features and plugins folders into a separate folder, say agentController


  2. Configure Agent Controller

Go to the Control Panel

Click on System and Security

Click on System


Click on Advanced System Settings

Click on Environment Variables

From the system variables, select “Path” and then click Edit

Add the JDK or JRE bin path (example: C:\Program Files\IBM\SDP_RSA803\jdk\bin\)

Open a DOS command prompt window.


Navigate to the Agent Controller bin directory.

For example, if you copied the agentController folder to the C drive:


Enter SetConfig and press the Enter key


You will get a message as shown below


Enter the path that you set in the environment variable
If you added C:\Program Files\IBM\SDP_RSA803\jdk\bin\ to the Path variable,
then enter C:\Program Files\IBM\SDP_RSA803\jdk\bin\javaw.exe and press Enter

You will get a message as shown


Press Enter
You will get the following message

Configuration complete

3. Verify configuration

Type ACServer as shown and press Enter


Go to Task Manager and verify that ACServer is running

Run profiler on WebSphere

Start your server in profile mode


In a few seconds the following window will appear


Click Next and select the options you want to analyze

Click Finish
Once the server is started, switch to the Profiling perspective


Run your application, then go to the RAD/Eclipse Profiling perspective and open the Execution tab

Drill down to see statistics for each method


Double-click a method to see details: the methods that call this method, the methods this method calls, the time taken by this method to execute, and the number of times it was called


Customize filter:
If you are interested in tracking only certain packages and methods, do it by setting a filter
Right-click on the server and select Profile


Click Next


Double-click on Java Profiling - JRE 1.5 or newer


Create a new filter by clicking Add in the top section
Create new exclusion rules in the lower part by clicking Add



Important statistics

  • Average Base Time: This is the average time that a method took to complete. So, on average, this is how long a single invocation of that method took to finish (as noted above, this excludes the time taken by child methods called by this method or, more specifically, excluding the time of unfiltered child methods)


  • Base Time: This is the total amount of time that a method took to complete. This is an amalgamation of all of the time that was spent in this method (excluding calls to other unfiltered methods.)


  • Cumulative CPU Time: Cumulative CPU time represents the amount of CPU time spent executing a specified method. However, the granularity of the data provided by the JVM in this regard is coarser than might be desirable. Consequently, CPU time may be reported as zero if the time is less than a single platform-specific unit as reported by the JVM. Also, CPU time does not take into account other types of performance bottlenecks, such as those involving communication and I/O access time. As a result, base time is often favored as a metric for performance bottleneck reduction.
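The wall-clock vs CPU-time distinction these statistics rest on can be probed directly in plain Java (a rough illustration using the standard ThreadMXBean, not TPTP itself; timings vary by machine):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Measure the same piece of work two ways: elapsed wall-clock time and
// CPU time actually consumed by the current thread.
public class TimeProbe {
    public static void main(String[] args) {
        ThreadMXBean tm = ManagementFactory.getThreadMXBean();
        long wall0 = System.nanoTime();
        long cpu0  = tm.getCurrentThreadCpuTime();

        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) sum += i;  // CPU-bound work

        long wallMs = (System.nanoTime() - wall0) / 1_000_000;
        long cpuMs  = (tm.getCurrentThreadCpuTime() - cpu0) / 1_000_000;
        System.out.println("wall=" + wallMs + "ms cpu=" + cpuMs + "ms sum=" + sum);
    }
}
```

For a purely CPU-bound loop like this the two numbers are close; for a method dominated by I/O or waiting, wall-clock (base) time stays high while CPU time stays near zero, which is exactly why base time is the more useful bottleneck metric.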

Apple Watch: Trends, Business Cases and More


  • After the tablet boom, wearable devices are the next big thing in the tech industry
  • Among wearable devices, wrist wear leads the market
  • The smartwatch is the most popular wrist-wear device
  • Major vendors like Samsung, LG, Motorola and, most recently, Apple have launched products
  • Fitbit retained the lead in the global wearable market in the second quarter of 2015
  • Apple managed to sell 3.6 million Apple Watches in its first quarter on the market
  • The most wanted feature in a smartwatch is activity tracking



Business Use Cases

Financial Sector

  • Deposit your cheques
  • Quick Balance
  • Review transaction history
  • Easy bill payments
  • Transfers funds
  • Trade stocks and options
  • Market Watch News Sharing
  • ATM Locator


Retail

  • Quick Search
  • Product information and reviews
  • Check out
  • Apple Pay
  • Glance to see the store hours
  • Locate Store


Insurance

  • Report Loss
  • Claim status check
  • Claim payment notification
  • Road side assistance call
  • Driving Score and Experience
  • Get Insurance Card
  • Locator

Health and Fitness – Connected Wellness

  • Monitor Health Statistics- Heart rate, Blood Pressure
  • Medication Reminder
  • Medication refill notifications
  • Track physical activities
  • Health Advisories
  • View test and lab results
  • Schedule appointments
  • Search hospital and Specialists

App Architecture

  • The app consists of two major parts: the WatchKit app and the WatchKit extension
  • The WatchKit app is a user-launchable app that gets deployed on the Apple Watch
  • The WatchKit app contains only the storyboards and resource files associated with your app’s user interface
  • The WatchKit app acts as the public face of your app, but it works in tandem with the WatchKit extension
  • The WatchKit extension contains the code for managing content, responding to user interactions, and updating your user interface
  • The WatchKit extension runs on the iPhone (in watchOS 2, the extension is also deployed on the watch)
  • The user can launch the Watch app from the Home screen, interact with a glance, or view notifications using custom UI
  • Each of these interactions launches the Watch app and the corresponding WatchKit extension
  • The Watch app and WatchKit extension pass information back and forth until the user stops interacting with your app, at which point iOS suspends the extension until the next user interaction
  • The WatchKit extension remains running only while the user is interacting with the app on Apple Watch
  • When the user exits the app explicitly or stops interacting with the Apple Watch, iOS deactivates the current interface controller and eventually suspends execution of your extension

Apple Push Notification service(APN)

  • Apple Push Notification service (APNs) propagates remote notifications to devices that have apps registered to receive them.
  • Each device establishes an accredited and encrypted IP connection with the service and receives notifications over this persistent connection.
  • Providers connect with APNs through a persistent and secure channel
  • Providers need SSL certificates from the Member Center
  • Each certificate is limited to a single app, identified by its bundle ID
  • Providers send remote notifications over an interface based on a streaming TCP socket design

 APN security

  • APNs uses two levels of trust for providers, devices, and their communications. These are known as connection trust and token trust.
  • Connection trust establishes certainty that the APNs connection is with an authorized provider with whom Apple has agreed to deliver notifications.
  • To deliver to the correct device, APNs uses token trust. A device token is an opaque identifier that APNs gives to a device when the device first connects with it. The device shares the device token with its provider

 APN app Registration

  • Apps must register to receive remote notifications.
  • The system receives the registration request from the app, connects with APNs, and forwards the request
  • APNs generates a device token and sends it back to the device
  • The device returns the token to the app
  • The app shares the token with the provider

How to create workflow with SharePoint designer

What is SharePoint Designer:

Microsoft SharePoint Designer (SPD), formerly known as Microsoft Office SharePoint Designer, is a specialized HTML editor and web design freeware for creating or modifying Microsoft SharePoint sites, workflows and web pages.

What it does:

  • Create sites & subsites
  • Create a list or library
  • Modify the site layouts with custom coding
  • Create workflows for sites, lists, and libraries


In this blog I will mainly focus on how to create workflows for sites, libraries and lists using SharePoint Designer.


SharePoint Designer 2013 is a free download. To download and install SharePoint Designer 2013 follow these steps:

  • Open your web browser and navigate to the Microsoft Download Center: http://www.microsoft.com/download.
  • Type SharePoint Designer 2013 in the search field.
  • Click the link for “SharePoint Designer 2013”.
  • Read the overview, system requirements, and installation instructions. Make sure your system is compatible.
  • Select your platform type: 64-bit (x64) or 32-bit (x86) as shown in the figure.
  • Follow the instructions to install SharePoint Designer 2013.

Once installed, open SharePoint Designer; it should look like this:

Workflow statement

If a document whose name contains ‘Asset’ is uploaded to the Shared Documents library, then initiate a review process and assign a ‘Task’ for review. Also move the asset to the ‘WorkFlow’ library.

Creating work flow

  1. Open SharePoint Designer
  2. Open the SharePoint site for which the workflow needs to be created
  3. Sync SharePoint Designer with the SharePoint site
    • Provide the URL just copied and open it
    • It will take you to the following page with site information
  4. Choose Lists and Libraries from the left panel; it will show the libraries available in that site
  5. Choose Workflows from the left-hand pane; by default it goes to list workflows, but this can be changed from the top pane options
  6. Go to the list workflow and choose the library on which the workflow will run, in this case ‘Shared Documents’. Provide a name and description for the workflow.
  7. Create conditions and corresponding actions to design the workflow
    • Copy the item to the WorkFlowDemo library if the name contains ‘Asset’
    • Assign a task to review the asset
    • Click on the to-do item to give the task a name and description
  8. Configure workflow properties
    1. Click on Workflows in the left-hand pane and the new workflow will be visible
    2. Click on the new workflow and the following page will be presented
    3. Choose ‘Start workflow automatically when an item is created’
  9. Save and publish the workflow from the top menu

    Run workflow

    1. Log in to your SharePoint site
    2. Go to the library on which the workflow is designed and add a document that contains ‘Asset’ in its name
    3. Verify the document has been copied to the other library (here ‘WorkFlowDemo’)
    4. Go to Tasks and verify that a task has been created


      SharePoint Designer has the capability to create complex workflows, task assignments and notification mechanisms. This document has shown how to create a simple workflow on a list using SharePoint Designer, but SharePoint Designer can also be used to create workflows on other items, like a task list, or on the entire site. It is a very powerful tool and can provide the capability to build business flows around document management.

Web application firewall

What is WAF?

A web application firewall (WAF) is an appliance, server plugin, or filter that applies a set of rules to an HTTP conversation. Generally, these rules cover common attacks such as cross-site scripting (XSS) and SQL injection. By customizing the rules to your application, many attacks can be identified and blocked. The effort to perform this customization can be significant and needs to be maintained as the application is modified.

Why WAF?

WAFs are designed to protect web applications/servers from web-based attacks that Network Firewall cannot prevent. They sit in-line and monitor traffic to and from web applications/servers. WAFs interrogate the behavior and logic of what is requested and returned. WAFs protect against web application threats like SQL injection, cross-site scripting, session hijacking, parameter or URL tampering and buffer overflows. They do so by analyzing the contents of each incoming and outgoing packet.

WAFs are typically deployed in some sort of proxy fashion just in front of the web applications, so they do not see all traffic on our networks. By monitoring the traffic before it reaches the web application, WAFs can analyze requests before passing them on. This is what gives them such an advantage.

WAFs not only detect attacks that are known to occur in web application environments, they also detect (and can prevent) new, unknown types of attacks. By watching for unusual or unexpected patterns in the traffic, they can alert on and/or defend against unknown attacks. For example, if a WAF detects that the application is returning much more data than expected, the WAF can block it and send an alert.

Security Model

A WAF typically follows either a positive or negative security model when it comes to developing security policies for your applications. A positive security model only allows traffic to pass which is known to be good; all other traffic is blocked. A negative security model allows all traffic and attempts to block that which is malicious. Some WAF implementations attempt to use both models, but generally products use one or the other. A WAF using a positive security model typically requires more configuration and tuning, while a WAF with a negative security model relies more on behavioural learning capabilities.
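A negative-security-model rule can be as simple as a blacklist of request patterns. The toy check below illustrates the idea (the patterns are deliberately naive examples; real rule sets such as the OWASP Core Rule Set are far more extensive and nuanced):

```java
import java.util.List;
import java.util.regex.Pattern;

// Negative security model in miniature: allow everything except inputs
// matching a known-bad signature.
public class WafRules {
    private static final List<Pattern> BLACKLIST = List.of(
        Pattern.compile("(?i)<script"),              // naive XSS signature
        Pattern.compile("(?i)('|%27)\\s*or\\s+1=1")  // naive SQL-injection signature
    );

    public static boolean isBlocked(String input) {
        return BLACKLIST.stream().anyMatch(p -> p.matcher(input).find());
    }
}
```

A positive-model WAF would invert this logic: define what a valid request looks like (allowed parameters, lengths, character sets) and reject everything else, which is why it needs much more per-application configuration.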

Why WAF in trusted domain?

  • By configuring more aggressive rules in the internal WAF, we can eliminate the burden of re-authentication and re-creating security patterns through the processing tree.
  • Provides a central WAF rules repository.
  • If the web server is under attack from the outside, it can further compromise internal machines. This is a quite common scenario we saw in incident response.
  • If users are allowed at some point to upload/modify content hosted on the web server: that content has to be secured/checked. For instance, malicious content may be inserted into the web server content (like links to exploitation codes, etc.), and then be transparently/automatically accessed by any clients browsing the web server.
  • Critical servers can be closely monitored when they are isolated behind an internal WAF. Any malicious activity would be much easier to detect.
  • Added security to the internal machines, connected through VPN. For example, a laptop pc from the airport accessing the Internet might VPN into our Enterprise as well.
  • Malicious Insider

    WAF Selection criteria:

    • Protection against OWASP top ten
    • Very few false positives (i.e., should NEVER disallow an authorized request)
    • Strength of default (out-of-the-box) defenses
    • Power and ease of learn mode
    • Types of vulnerabilities it can prevent
    • Detects disclosure and unauthorized content in outbound reply messages, such as credit-card and Social Security numbers
    • Both positive and negative security model support
    • Simplified and intuitive user interface
    • Cluster mode support
    • High performance (milliseconds latency)
    • Complete alerting, forensics, reporting capabilities
    • Web services\XML support
    • Brute force protection
    • Ability to actively handle (block and log), passively handle (log only), or bypass web traffic
    • Ability to keep individual users constrained to exactly what they have seen in the current session
    • Ability to be configured to prevent ANY specific problem (e.g., emergency patches)
    • Form factor: software vs. hardware (hardware generally preferred)


Web application and External threats


Web-based applications and services have changed the landscape of information delivery and exchange in today’s corporate, government, and educational arenas. Ease of access, increased availability of information, and the richness of web services have universally increased productivity and operational efficiencies. These increases have led to heavier reliance on web-based services and greater integration of internal information systems and data repositories with web-facing applications.

While motivations of attackers against a victim’s corporate and organizational assets remain the same (e.g., financial, intellectual property (IP), identity theft, services disruption, or denial of service), web applications enable a whole new class of vulnerabilities and exploit techniques such as SQL injection, cross-site scripting (XSS), and cross-site request forgery.

One technology that can help secure a web application infrastructure is a web application firewall. A web application firewall (WAF) is an appliance or server application that inspects HTTP/HTTPS conversations between a client browser and a web server at layer 7. The WAF can then enforce security policies based on a variety of criteria, including signatures of known attacks, protocol standards, and anomalous application traffic.

Web Application Security

Web application security is a branch of information security that deals specifically with the security of websites, web applications and web services. At a high level, web application security draws on the principles of application security but applies them specifically to Internet and Web systems.

Different aspects of web security

  • Authentication: Ensure that only authorized entities may consume a web service. Web services need to authenticate and authorize web service clients the same way web applications authorize users. A web service needs to make sure a client is authorized to perform a certain action (coarse-grained) on the requested data (fine-grained). Following authentication, the web service should check whether the requesting entity has the privileges to access the requested resource; this check should be performed on every request. Ensure access to administration and management functions within the web service application is limited to web service administrators. Ideally, any administrative capabilities would reside in an application completely separate from the web services being managed, thus completely separating normal users from these sensitive functions.
  • Non-repudiation : Prevent a web services consumer from denying having performed a particular transaction.
  • Confidentiality: Ensure that SOAP messages traversing networks are not viewed or modified by attackers. WS-Security and WS-Secure Conversation provide the confidentiality services necessary. Messages containing sensitive data must be encrypted using a strong encryption cipher. This could be transport encryption or message encryption. Messages containing sensitive data that must remain encrypted at rest after receipt must be encrypted with strong data encryption, not just transport encryption.
  • Message Integrity: This concerns data at rest; integrity of data in transit can easily be provided by SSL/TLS. When using public key cryptography, encryption guarantees confidentiality but not integrity, since the receiver’s public key is public. For the same reason, encryption does not ensure the identity of the sender. For XML data, use XML digital signatures to provide message integrity using the sender’s private key. This signature can be validated by the recipient using the sender’s digital certificate (public key).
  • Protection of resources: Ensure that individual Web services are adequately protected through appropriate identification, authentication, and access control mechanisms. There is a plethora of standards available for controlling access to Web services.
  • Negotiation of contracts: To truly meet the goals of SOA and automate business processes, Web services should be capable of negotiating business contracts as well as the QoP and QoS of the associated transactions. While this remains a hard problem, standards are emerging to address portions of contract negotiation—particularly in the QoP and QoS field.
  • Trust management: One of the underlying principles of security is ensuring that all entities involved in a transaction trust one another. To this end, Web services support a variety of trust models that can be used to enable Web services to trust the identities of entities within the SOA.
  • Security properties: All Web service security processes, tools, and techniques rely on secure implementation. A vulnerable Web service may allow attackers to bypass many—if not all—of the security mechanisms.
  • Transport Confidentiality : Transport confidentiality protects against eavesdropping and man-in-the-middle attacks against web service communications to/from the server. All communication with and between web services containing sensitive features, an authenticated session, or transfer of sensitive data must be encrypted using well configured TLS. This is recommended even if the messages themselves are encrypted because SSL/TLS provides numerous benefits beyond traffic confidentiality including integrity protection, replay defences, and server authentication.
  • Server Authentication: SSL/TLS must be used to authenticate the service provider to the service consumer. The service consumer should verify the server certificate is issued by a trusted provider, is not expired, is not revoked, matches the domain name of the service, and that the server has proven that it has the private key associated with the public key certificate (by properly signing something or successfully decrypting something encrypted with the associated public key).
  • Schema Validation: Schema validation enforces the constraints and syntax defined by the schema. Web services must validate SOAP payloads against their associated XML Schema Definition (XSD). The XSD defined for a SOAP web service should, at a minimum, define the maximum length and character set of every parameter allowed to pass into and out of the web service, and should define strong (ideally whitelist) validation patterns for all fixed-format parameters (e.g., zip codes, phone numbers, list values, etc.).
  • Output Encoding: Web services need to ensure that output sent to clients is encoded to be consumed as data and not as scripts. This is particularly important when web service clients use the output to render HTML pages, either directly or indirectly via AJAX objects.
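The output-encoding principle above can be sketched in a few lines with Python's standard library. The function names and sample payload are hypothetical; the point is that untrusted data is encoded so a browser treats it as data, not markup or script.

```python
import html
import json

def render_comment_html(untrusted):
    # html.escape converts <, >, &, and quotes into HTML entities
    return "<p>" + html.escape(untrusted) + "</p>"

def render_json(untrusted):
    # json.dumps escapes quotes and control characters for safe JSON embedding
    return json.dumps({"comment": untrusted})

payload = '<script>alert("xss")</script>'
print(render_comment_html(payload))
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

The same rule applies per output context: HTML body, HTML attribute, JavaScript, and URL contexts each need their own encoder.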

    Different Security Threats

    • Distributed Denial of Service (DDoS) – DoS / DDoS attacks have increased in popularity. They are easy to employ and highly effective. Often, the attacker has to do little to cause your website harm. The goal is to disrupt your business by taking your website off-line.
    • Volume Based Attacks – Overload your web server’s and application platform’s resources.
    • Protocol Based Attacks – The Internet is based on protocols; it’s how things get from point A to point B. This type of attack includes things like Ping of Death, SYN flood (SYNchronize and ACKnowledge messages), packet modifications and others.
    • Layer 7 Application Attack (HTTP Flood Attack) – An attacker makes use of standard GET/POST requests in an effort to overload your web server’s response ability, generating thousands of requests a second. This attack can occur over HTTP or HTTPS and is much easier to implement.
    • Simple Service Discovery Protocol (SSDP) Attack – It often targets the standard SSDP port (1900) and destination port 7 (echo). SSDP is typically used by plug-and-play devices.
    • User Datagram Protocol (UDP) Attack – Randomly floods various ports on your web server (also known as a Layer 3/4 attack), forcing the web server to respond.
    • Domain Name Server (DNS) Amplification Attack – It occurs at Layer 3/4, making use of publicly accessible DNS servers around the world to overwhelm your web server with DNS response traffic.
    • Backdoor Injections (SQL Injection Attacks) – Injection flaws, such as SQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can execute unintended commands or corrupt data.
    • Cross Site Scripting (XSS) – XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation or escaping. XSS allows attackers to execute scripts in the victim’s browser, which can hijack user sessions or redirect the user to malicious sites.
    • Broken Authentication – Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, or session tokens.
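As a minimal sketch of the standard defence against the injection flaws listed above, the example below contrasts string concatenation with a parameterized query, shown here with Python's built-in sqlite3 (the table and data are illustrative):

```python
import sqlite3

# Parameterized queries keep untrusted input as data, so hostile strings
# cannot change the structure of the SQL command.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "x' OR '1'='1"

# Vulnerable pattern (string concatenation) would return every row:
#   "SELECT role FROM users WHERE name = '" + attacker_input + "'"

# Safe pattern: the driver binds the value; the quote stays a literal character.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(rows)   # [] -- no user is literally named "x' OR '1'='1"
```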

Software Quality Metrics

Quality Attributes

Architecture/Design Attribute Impact
Abstraction A high value of this metric leads to more reusable components and lower development effort.
Coupling Strong coupling complicates a system, since a tightly coupled module is harder to understand; this increases maintenance and enhancement cost.
Inheritance A greater value indicates a more complex system and increases testing effort.
Cohesion Low cohesion increases complexity by grouping unrelated methods in the same class, and increases maintenance effort.
Implementation Maintainability A high value of this metric indicates faster enhancement and lower testing effort.
Reusability The more reusable the code, the lower the development effort, which saves time and money.
Coding Standards Following coding standards reduces security threats and resource usage, and enhances performance.

Quality Metrics and Explanation

Area Meaning and Measure Why Attribute
Abstractness It is calculated as the number of abstract classes (and interfaces) divided by the total number of types in a package. A high value of this metric leads to more reusable components; a low value means a concrete solution. Abstraction
Specialization Index (SI) It is measured by the number of methods overridden in the subclass. The higher the value of SI, the less reusable the class becomes.
Depth of Inheritance Tree It is the maximum length from the node to the root of the tree. The deeper a class is in the hierarchy, the greater the number of methods it is likely to inherit, making it more complex to test and maintain.
Number of Children It is the number of immediate subclasses subordinate to a class in the class hierarchy. If a class has a large number of children, it may be a case of misuse of sub-classing, and may require more testing.
Weighted Methods per Class This is measured by assigning a weight to each method depending on its complexity and summing the weights for all methods of the class. Classes with a larger weight are likely to be more application specific, thus limiting the possibility of reuse.
Nested Block Depth Measured by counting the cascaded inner blocks. More nested blocks lead to worse readability and more complex solutions.
Lack of Cohesion It is defined as the number of method pairs that do not share common attributes (defined at the class level) minus the number of method pairs that do. Low cohesion increases complexity by grouping unrelated methods in the same class, and increases maintenance effort. Cohesion
Number of Operations Added by Subclass Measured by the number of new methods and attributes added to the subclass. When the value of NOA increases, the class may contain unrelated functionalities.
Afferent Coupling The number of classes outside a package that depend on classes inside the package. A high value of this metric indicates that a change may have a ripple effect throughout the application and will require more testing effort. Coupling
Response for a Class The response set of a class is the set of methods that can potentially be executed in response to a message received by an object of that class. If a large number of methods can be invoked in response to a message, testing and debugging of the class become more complicated.
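The Lack of Cohesion definition above can be sketched directly: count the method pairs with and without shared attributes. The class, method, and attribute names below are hypothetical.

```python
from itertools import combinations

def lcom(methods_attrs):
    """Lack of Cohesion: (pairs sharing no attribute) - (pairs sharing one),
    floored at zero as in the Chidamber-Kemerer definition.

    methods_attrs maps each method name to the set of class attributes it uses.
    """
    p = q = 0
    for (m1, a1), (m2, a2) in combinations(methods_attrs.items(), 2):
        if a1 & a2:
            q += 1   # pair shares at least one attribute
        else:
            p += 1   # pair shares nothing
    return max(p - q, 0)

# Hypothetical class: two methods share 'balance'; the third uses only 'logger'.
usage = {
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "log":      {"logger"},
}
print(lcom(usage))   # 2 disjoint pairs - 1 sharing pair = 1
```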

Different Tool comparison

Parameters SONAR Metrics CKJM SDMetrics
(SONAR, Metrics and CKJM are static code analysis tools; SDMetrics is a UML analysis tool)
Number of Children No Yes Yes Yes
Number of Operation added by Subclass No Yes Yes Yes
Specialization Index No Yes Yes No
Response for a Class No No Yes No
Weighted Methods per Class No Yes Yes No
Efferent Coupling No Yes Yes Yes
Nested Block Depth No Yes No No
Depth of Inheritance Tree No Yes Yes Yes
Lack of Cohesion No Yes Yes Yes
Afferent Coupling No Yes Yes Yes
Abstractness No Yes Yes Yes
Complexity Yes Yes No Yes
NCSS Method Count Yes Yes No No
Code Duplication Yes No No No
Documentation Yes No No No
Java Coding Standards Yes No No No
JUnit Coverage Yes No No No

Web Application Optimization techniques

What is Web Application Optimization

A web application comprises various modules and individual components, each with a functionality of its own. These modules and components process information according to their code and provide output to other components. The inter-connection between these components establishes the overall functionality and makes up a web application.
Web application optimization deals with fine-tuning these components, individually and as a whole system, to process data faster, or at least to make it appear faster. Optimizing a web application enhances the user experience, giving users a reason to revisit the application.

Why Optimize?

1. Reduced response time
2. Less data to transfer
3. Lower load on the server

Optimization at different areas

Database Optimization
It is one of the sub-categories of application-layer optimization, in which all the database-related elements are tuned. This decreases the time spent working on the data and provides faster data processing to the user. Techniques like indexing, query optimization and query caching are used to boost the performance of the application.

Application Server Optimization
The application server is the server from which the application and services are hosted and made available to users. If access and request handling by this component become faster, the application will work faster. Code caching and code refactoring are examples of application server optimization.

Presentation Layer Optimization
This layer ensures that all the data sent to the user is in the correct format and is minimal. Techniques such as cache control, which governs the behavior of the browser cache and proxy caches, are used to speed up data formatting and encapsulation for delivery.

Different Optimization Techniques
Of the three areas discussed above, only presentation-layer optimization techniques are discussed in this article.

Browser Caching
Most web pages include resources that do not change frequently, such as CSS files, image files, JavaScript files, and so on. These resources take time to download over the network, which increases the time it takes to load a web page. HTTP caching allows these resources to be saved, or cached, by a browser. Once a resource is cached, a browser can refer to the locally cached copy instead of downloading it again on subsequent visits to the web page. Caching is thus a double win: it reduces round-trip time by eliminating numerous HTTP requests for the required resources, and it substantially reduces the total payload size of the responses.

1. Set Expires to a minimum of one month, and preferably up to one year, in the future. Do not set it to more than one year in the future, as that violates the RFC guidelines. Setting caching aggressively does not “pollute” browser caches: as far as we know, all browsers clear their caches according to a Least Recently Used algorithm; no browser waits until resources expire before purging them.

2. Set the Last-Modified date to the last time the resource was changed. If the Last-Modified date is sufficiently far in the past, chances are the browser won’t re-fetch it.
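A minimal sketch of generating the headers described above, assuming the resource's modification time and desired lifetime are known (the function and variable names are illustrative):

```python
from email.utils import formatdate
import time

ONE_MONTH = 30 * 24 * 3600
ONE_YEAR = 365 * 24 * 3600

def cache_headers(last_modified_epoch, ttl_seconds):
    """Build Expires/Last-Modified/Cache-Control headers for a static resource."""
    # Clamp the lifetime to [1 month, 1 year], per the guidance above.
    ttl = min(max(ttl_seconds, ONE_MONTH), ONE_YEAR)
    return {
        "Expires": formatdate(time.time() + ttl, usegmt=True),
        "Last-Modified": formatdate(last_modified_epoch, usegmt=True),
        "Cache-Control": "public, max-age=%d" % ttl,
    }

headers = cache_headers(last_modified_epoch=time.time() - 86400,
                        ttl_seconds=180 * 24 * 3600)
print(headers["Cache-Control"])   # public, max-age=15552000
```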

Minimize HTTP Requests
80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.
One way to reduce the number of components in the page is to simplify the page’s design. But is there a way to build pages with richer content while also achieving fast response times? Here are some techniques for reducing the number of HTTP requests, while still supporting rich page designs.

1. Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single style sheet. Combining files is more challenging when the scripts and style sheets vary from page to page, but making this part of your release process improves response times.
2. CSS Sprites are the preferred method for reducing the number of image requests. Combine your background images into a single image and use the CSS background-image and background-position properties to display the desired image segment.
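The combined-files technique can be sketched as a simple build step that concatenates per-page scripts into one bundle, so the browser issues a single HTTP request instead of several (the file names are illustrative; a real release pipeline would also minify the result):

```python
from pathlib import Path
import tempfile

def combine(sources, bundle):
    """Concatenate the source files into one bundle file, with marker comments."""
    parts = []
    for src in sources:
        parts.append("/* --- %s --- */\n" % src.name + src.read_text())
    bundle.write_text("\n".join(parts))

# Example: three page scripts become one bundle.js
tmp = Path(tempfile.mkdtemp())
for name, body in [("a.js", "var a=1;"), ("b.js", "var b=2;"), ("c.js", "var c=3;")]:
    (tmp / name).write_text(body)

combine([tmp / "a.js", tmp / "b.js", tmp / "c.js"], tmp / "bundle.js")
print((tmp / "bundle.js").read_text().count("var"))   # 3 -- one request instead of three
```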

Minimize redirects
Sometimes it’s necessary for your application to redirect the browser from one URL to another. Whatever the reason, redirects trigger an additional HTTP request-response cycle and add round-trip-time latency. It’s important to minimize the number of redirects issued by your application. The best way to do this is to restrict the use of redirects to only those cases where it’s absolutely technically necessary, and to find other solutions where it’s not.

1. Never reference URLs in your pages that are known to redirect to other URLs. The application needs to have a way of updating URL references whenever resources change their location.

2. Never require more than one redirect to get to a given resource. For instance, if C is the target page, and there are two different start points, A and B, both A and B should redirect directly to C; A should never redirect intermediately to B.

3. If a redirect is unavoidable, prefer server-side methods over client-side methods. Browsers are able to handle HTTP redirects more efficiently than meta and JavaScript redirects. For example, JS redirects can add parse latency in the browser, while 301 or 302 redirects can be processed immediately, before the browser parses the HTML document.
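Rule 2 can be sketched as a build-time pass over a table of known redirects, resolving each source straight to its final target so no request ever needs more than one hop (the URLs are illustrative):

```python
def flatten_redirects(redirects):
    """Resolve each source URL to its final target, collapsing redirect chains.

    The `seen` set guards against accidental redirect loops.
    """
    flat = {}
    for src in redirects:
        target, seen = src, set()
        while target in redirects and target not in seen:
            seen.add(target)
            target = redirects[target]
        flat[src] = target
    return flat

# A -> B -> C becomes A -> C and B -> C: one hop each.
chain = {"/old-home": "/home", "/home": "/index.html"}
print(flatten_redirects(chain))
# {'/old-home': '/index.html', '/home': '/index.html'}
```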

Data Compression
In short, content travels from the server side to the client side (and vice versa) whenever an HTTP request is made. The time it takes to transfer an HTTP request and response across the network can be significantly reduced by data compression. It’s true that the end-user’s bandwidth, Internet service provider, proximity to peering exchange points, etc. are beyond the control of the development team, but compression is a variable the team does control: it reduces response times by reducing the size of the HTTP response.

1. Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you’re likely to see is deflate, but it’s less effective and less popular.

2. Gzipping generally reduces the response size by about 70%. Approximately 90% of today’s Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module configuring gzip depends on your version: Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate.

3. There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.
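A quick sketch of the payload saving gzip provides on repetitive markup, using Python's standard gzip module (the sample response is synthetic, so the exact ratio will vary with real content):

```python
import gzip

# A synthetic, highly repetitive HTML fragment -- the kind of structured,
# text-based response that compresses very well.
html_response = ("<div class='row'><span class='cell'>item</span></div>\n" * 200).encode()
compressed = gzip.compress(html_response)

# Print original vs. compressed sizes; the compressed payload is far smaller.
print(len(html_response), len(compressed))
```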

Off-Heap caching of static data
Generally, data that does not change frequently and is used extensively is kept in a cache to speed up application performance and reduce calls to the external storage system. Caching files in memory, however, reduces the amount of memory the system can allocate to running threads. If the cache is large and all data is cached in-memory, too much memory is consumed by cache data; the system is then forced to keep other meta information on disk and may need to swap memory to execute the program, which can degrade performance.
The on-heap store refers to objects that are present in the Java heap (and subject to garbage collection). The off-heap store, on the other hand, keeps serialized objects outside the heap (and not subject to garbage collection).

1. Ehcache is an open source, standards-based cache for boosting performance and simplifying scalability. It’s the most widely-used Java-based cache because it’s robust, proven, and full-featured. Ehcache scales from in-process, with one or more nodes, all the way to mixed in-process/out-of-process configurations with terabyte-sized caches.
2. BigMemory permits caches to use an additional type of memory store outside the object heap, called the “off-heap store.” It’s available for both distributed and standalone use cases. Only Serializable cache keys and values can be placed in the store, similar to DiskStore. Serialization and de-serialization take place on putting and getting from the store. The theoretical difference in the de/serialization overhead disappears due to two effects.
The MemoryStore holds the hottest subset of data from the off-heap store, already in de-serialized form.
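The put/get serialization behaviour described above can be mimicked in a few lines: values are kept only as serialized bytes and deserialized on every read. This is a rough Python analogue for illustration; Ehcache and BigMemory are Java products with their own APIs.

```python
import pickle

class SerializedStore:
    """Toy analogue of an off-heap store: values live as serialized bytes."""

    def __init__(self):
        self._data = {}          # key -> pickled bytes

    def put(self, key, value):
        self._data[key] = pickle.dumps(value)   # serialize on put

    def get(self, key):
        return pickle.loads(self._data[key])    # deserialize on get

store = SerializedStore()
store.put("user:1", {"name": "alice", "visits": 3})
print(store.get("user:1"))   # {'name': 'alice', 'visits': 3}
```

Note that each get returns a fresh deserialized copy, mirroring the per-access serialization cost that a hot in-memory (on-heap) tier avoids.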

JavaScript Optimizations
Here are some guidelines for improving the impact that JavaScript files have on your site’s performance:

1. Merge .js files. As per the basic rules, a page should make as few requests as possible, which means having as few .js files as possible. Files can be merged depending on when their functions are called: one file for functionality that’s needed as soon as the page loads, and another for functionality that can wait until after the page has loaded.

2. Minify or obfuscate scripts. Minifying means removing everything that’s not necessary — such as comments and whitespace. Obfuscating goes one step further and involves renaming and rearranging functions and variables so that their names are shorter, making the script very difficult to read. Obfuscation is often used as a way of keeping JavaScript source a secret, although if your script is available on the Web, it can never be 100% secret.
Changing the code to merge and minify should become an extra, separate step in the site development process. During development, use as many .js files as required; when the site is ready to go live, substitute the “normal” scripts with the merged and minified versions.

3. Place scripts at the bottom of the page. The third rule of thumb regarding JavaScript optimization is that scripts should be placed at the bottom of the page, as close to the closing </body> tag as possible. The reason is that, due to the nature of scripts, browsers block all downloads when they encounter a <script> tag; until a script is downloaded and parsed, no other downloads are initiated. Placing scripts at the bottom avoids this blocking effect. Another reason to have as few <script> tags as possible is that the browser initiates its JavaScript parsing engine for every script it encounters. This can be expensive, so parsing should ideally occur only once per page.

4. Remove duplicates. Another guideline regarding JavaScript is to avoid including the same script twice. A duplicate script causes the browser’s parsing engine to be started twice and possibly (in some IE versions) even requests the file a second time. Duplicate scripts can also be an issue when using third-party libraries.
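A naive sketch of the minification step described in rule 2: strip comments and whitespace. Real minifiers such as UglifyJS or the Closure Compiler parse the source properly and can also rename identifiers (obfuscation); joining lines without separators, as done here, is not safe for arbitrary JavaScript.

```python
import re

def minify_js(src):
    """Crudely minify JavaScript: drop comments and surrounding whitespace."""
    src = re.sub(r"/\*.*?\*/", "", src, flags=re.S)   # /* block */ comments
    src = re.sub(r"^\s*//.*$", "", src, flags=re.M)   # whole-line // comments
    lines = [line.strip() for line in src.splitlines()]
    return "".join(line for line in lines if line)

source = """
/* page setup */
function init() {
    // called on load
    var x = 1;
    return x;
}
"""
print(minify_js(source))   # function init() {var x = 1;return x;}
```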