What is Web Application Optimization
A web application comprises various modules and individual components, each with a functionality of its own. These modules and components process a piece of information according to their code and pass the output on to other components. This inter-connection between the various components establishes the overall functionality and makes up the web application.
Web application optimization deals with the fine-tuning of these components, both individually and as a whole system, so that data is processed faster, or at least appears to be. Optimizing a web application enhances the user experience, giving users a reason to revisit the application for their need or purpose. Its main benefits are:
1. Reduced response time
2. Less data transferred
3. Less load on the server
Optimization in Different Areas
Database Optimization
Database optimization is a sub-category of application layer optimization in which all the database-related elements are tuned. This decreases the time spent working on the data and provides faster data processing to the user. Techniques such as indexing, query optimization and query caching are used to boost the performance of the application.
Application Server Optimization
The application server is the location, or server, from which the application and its services are hosted and made available to users. If access and request handling in this component become faster, the whole application works faster. Code caching and code refactoring are examples of application server optimization techniques.
Presentation Layer Optimization
This layer ensures that all data sent to the user is in the correct format and is minimal. Techniques such as cache control, which governs the behavior of the browser cache and proxy caches, are used to speed up data formatting and encapsulation for delivery.
Different Optimization Techniques
Among the three areas discussed above, only the presentation-layer optimization techniques are discussed in this article.
Add Expires Headers
1. Set Expires to a minimum of one month in the future, and preferably up to one year. Do not set it to more than one year in the future, as that violates the RFC guidelines. Setting caching aggressively does not “pollute” browser caches: as far as we know, all browsers clear their caches according to a Least Recently Used algorithm; no browser waits until resources expire before purging them.
2. Set the Last-Modified date to the last time the resource was changed. If the Last-Modified date is sufficiently far in the past, chances are the browser won’t re-fetch it.
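With Apache’s mod_expires, for example, far-future Expires headers can be set along these lines (the content types and lifetimes here are illustrative):

```
# Requires mod_expires to be enabled
ExpiresActive On
# Long-lived static resources: one year, the recommended maximum
ExpiresByType image/png "access plus 1 year"
ExpiresByType text/css  "access plus 1 year"
# More volatile resources can use the one-month minimum suggested above
ExpiresByType text/html "access plus 1 month"
```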
Minimize HTTP Requests
As a rule of thumb, around 80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.
One way to reduce the number of components in the page is to simplify the page’s design. But is there a way to build pages with richer content while also achieving fast response times? Here are some techniques for reducing the number of HTTP requests, while still supporting rich page designs.
1. Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single style sheet. Combining files is more challenging when the scripts and style sheets vary from page to page, but making this part of your release process improves response times.
2. CSS Sprites are the preferred method for reducing the number of image requests. Combine your background images into a single image and use the CSS background-image and background-position properties to display the desired image segment.
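As a sketch of the sprite technique, suppose two 16×16 icons are packed side by side into a single, hypothetical sprites.png; each icon then becomes a CSS class that shifts the background (class names and offsets are illustrative):

```
/* All icons live in one combined image; one HTTP request serves them all */
.icon {
  background-image: url("sprites.png"); /* hypothetical combined image */
  background-repeat: no-repeat;
  width: 16px;
  height: 16px;
}
/* background-position selects which 16x16 slice is shown */
.icon-home   { background-position: 0 0; }
.icon-search { background-position: -16px 0; }
```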
Minimize Redirects
Sometimes it’s necessary for your application to redirect the browser from one URL to another. Whatever the reason, each redirect triggers an additional HTTP request-response cycle and adds round-trip latency. It’s important to minimize the number of redirects issued by your application. The best way to do this is to restrict redirects to those cases where they are absolutely technically necessary, and to find other solutions everywhere else.
1. Never reference URLs in your pages that are known to redirect to other URLs. The application needs to have a way of updating URL references whenever resources change their location.
2. Never require more than one redirect to get to a given resource. For instance, if C is the target page, and there are two different start points, A and B, both A and B should redirect directly to C; A should never redirect intermediately to B.
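In Apache, for instance, rule 2 means pointing old URLs straight at their final destination with a single permanent redirect (the paths and domain here are illustrative):

```
# mod_alias: A redirects directly to C; never chain A -> B -> C
Redirect permanent /old-page https://www.example.com/final-page
```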
Compress Components
In short, content travels from the server side to the client side (and vice versa) whenever an HTTP request is made. The time it takes to transfer an HTTP request and response across the network can be significantly reduced by compressing the data. It’s true that the end-user’s bandwidth speed, Internet service provider, proximity to peering exchange points, etc. are beyond the control of the development team, but other variables still affect response times. Compression reduces response times by reducing the size of the HTTP response.
1. Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you’re likely to see is deflate, but it’s less effective and less popular.
2. Gzipping generally reduces the response size by about 70%. Approximately 90% of today’s Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module configuring gzip depends on your version: Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate.
3. There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.
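On Apache 2.x, for example, gzip compression via mod_deflate can be enabled along these lines (the list of MIME types is illustrative):

```
# mod_deflate: compress text-based responses before sending them
AddOutputFilterByType DEFLATE text/html text/css application/javascript
```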
Off-Heap Caching of Static Data
Generally, data that does not change frequently but is used extensively is kept in a cache to speed up the application and to reduce calls to the external storage system. However, caching files in memory reduces the memory available to allocate for the running threads. If the cache is large and all data is cached in-memory, too much memory is consumed by cache data; the system may be forced to keep other meta information on disk or to swap memory in order to execute the program, which can degrade performance.
The on-heap store refers to objects kept in the Java heap (and therefore subject to GC). The off-heap store, on the other hand, keeps serialized objects outside the heap (and therefore not subject to GC).
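The distinction can be illustrated with plain JDK direct buffers, which allocate memory outside the Java heap. This is only a minimal sketch of the idea, not how any particular cache library is used:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class OffHeapDemo {
    public static void main(String[] args) {
        // A direct buffer lives outside the Java heap, so its contents
        // are not scanned or moved by the garbage collector.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(64);

        // "put": serialize the value into off-heap memory
        byte[] value = "cached-value".getBytes(StandardCharsets.UTF_8);
        offHeap.put(value);
        offHeap.flip();

        // "get": copy the bytes back on-heap and de-serialize on read
        byte[] back = new byte[offHeap.remaining()];
        offHeap.get(back);
        System.out.println(new String(back, StandardCharsets.UTF_8));
    }
}
```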
1. Ehcache is an open source, standards-based cache for boosting performance and simplifying scalability. It’s the most widely-used Java-based cache because it’s robust, proven, and full-featured. Ehcache scales from in-process, with one or more nodes, all the way to mixed in-process/out-of-process configurations with terabyte-sized caches.
2. BigMemory permits caches to use an additional type of memory store outside the object heap, called the “off-heap store.” It’s available for both distributed and standalone use cases. As with the DiskStore, only Serializable cache keys and values can be placed in the store, and serialization and de-serialization take place on every put and get. In practice, much of this de/serialization overhead disappears because the MemoryStore holds the hottest subset of data from the off-heap store, already in de-serialized form.
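As a sketch, an Ehcache 2.x cache can be pointed at the off-heap store with a configuration along these lines (the cache name and sizes are illustrative, and the exact attributes depend on your Ehcache version):

```
<!-- ehcache.xml: illustrative cache backed by the off-heap store -->
<cache name="staticData"
       maxEntriesLocalHeap="10000"
       overflowToOffHeap="true"
       maxBytesLocalOffHeap="2g"/>
```

The JVM must also be started with a large enough -XX:MaxDirectMemorySize for the off-heap store to allocate its memory.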
Merge and Minify JavaScript
Changing the code in order to merge and minify should become an extra, separate step in the process of developing the site. During development, it is better to use as many .js files as required, and then, when the site is ready to go live, substitute the “normal” scripts with the merged and minified version.
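The merge step can be sketched as follows (file names and contents are illustrative; the minify step would then run an external minifier of your choice over bundle.js):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class MergeScripts {
    public static void main(String[] args) throws IOException {
        // Illustrative stand-ins for the site's individual .js files
        Files.writeString(Path.of("widgets.js"), "function widgets(){}\n");
        Files.writeString(Path.of("forms.js"), "function forms(){}\n");

        // Merge: concatenate every source file into a single bundle,
        // in the same order the pages would have loaded them
        StringBuilder merged = new StringBuilder();
        for (Path src : List.of(Path.of("widgets.js"), Path.of("forms.js"))) {
            merged.append(Files.readString(src));
        }
        Files.writeString(Path.of("bundle.js"), merged.toString());
        System.out.println(Files.readString(Path.of("bundle.js")));
    }
}
```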