Friday, December 24, 2010

Loading User Control Dynamically in ASP.NET


Hi Guys,
             
                 Recently I got a requirement to load a user control dynamically into a page. When I drag and drop the user control onto the design surface and run the app, it works fine, but when I load the user control dynamically (programmatically), I am unable to find the control on postback. So I started digging into the functionality and finally fixed the issue. Here is the solution.
Step#1              
                             Write a method that takes the user control name as a parameter, and call it wherever it is required. Add a PlaceHolder control to your page and name it "plchUserCtrlContainer".
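For reference, the container on the page can be as simple as the following PlaceHolder markup (the ID must match the one used in the code below):

<asp:PlaceHolder ID="plchUserCtrlContainer" runat="server"></asp:PlaceHolder>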

private void LoadUserControl(string urcName)
{
    // Map the user control name to its physical path and make sure the file exists.
    string path = Server.MapPath("~/Controls/" + urcName + ".ascx");
    if (System.IO.File.Exists(path))
    {
        // Remember which control is loaded so it can be re-created on postback.
        Session["FormName"] = urcName;
        Control ucMainControl = LoadControl("~/Controls/" + urcName + ".ascx");
        if (ucMainControl != null)
        {
            plchUserCtrlContainer.Controls.Clear();
            ucMainControl.ID = Session["FormName"].ToString();
            plchUserCtrlContainer.Controls.Add(ucMainControl);
        }
    }
}



Step#2       
        Override the page's OnInit method as shown below.



override protected void OnInit(EventArgs e)
{
    base.OnInit(e);

    // Re-create the dynamic control on every request (including postbacks)
    // so that it can be found again after the postback.
    string formName = Request["FormName"] ?? (string)Session["FormName"];
    if (!string.IsNullOrEmpty(formName))
    {
        Session["FormName"] = formName;
        Control ucMainControl = LoadControl("~/Controls/" + formName + ".ascx");
        plchUserCtrlContainer.Controls.Clear();
        ucMainControl.ID = formName;
        plchUserCtrlContainer.Controls.Add(ucMainControl);
    }
}



Step#3       
         You can then find the control using the following line of code:




protected void btnsave_Click(object sender, EventArgs e)
{

Control assControl = (Control)FindControl(Session["FormName"].ToString());
}
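If you need to work with the loaded control's members rather than the base Control type, you can cast the result. The control name and method below are hypothetical, just to illustrate the idea:

// Hypothetical example: CustomerForm.ascx exposes a public SaveData() method.
var form = FindControl(Session["FormName"].ToString()) as CustomerForm;
if (form != null)
{
    form.SaveData();
}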

Happy coding …..

Monday, November 15, 2010

Entity Framework vs. Enterprise Library

 

                           .NET Framework 4.0 brings huge improvements to the Entity Framework. I have listed some of them below, compared against Enterprise Library 5.0.

Each point below compares Entity Framework 4.0 with Enterprise Library 5.0.

1. Compiled, cached queries
Entity Framework 4.0: A compiled query is simply a query whose parsed expression tree the framework keeps in memory, so it does not need to be regenerated the next time you run it. The first execution pays the cost of building the tree; every subsequent execution skips that costly step (see the sketch after this list).
Enterprise Library 5.0: No support. Use stored procedures instead of ad-hoc queries to improve performance.

2. Model-First
Entity Framework 4.0: Create your Entity Data Model first and generate the DDL that creates the database from it.
Enterprise Library 5.0: Database-first only: create your database first and then implement your entities by hand.

3. Two-way development
Entity Framework 4.0: Development is possible in both directions (entities to tables and tables to entities).
Enterprise Library 5.0: No support.

4. Data-driven web applications
Entity Framework 4.0: Use MVC with the EntityDataSource control and the Entity Framework to quickly create data-driven web applications.
Enterprise Library 5.0: Business entities need to be implemented manually (takes more time).

5. LINQ
Entity Framework 4.0: LINQ support.
Enterprise Library 5.0: LINQ support.

6. Mapping multiple tables to one entity
Entity Framework 4.0: Map multiple related tables to a single entity and reduce code complexity.
Enterprise Library 5.0: No support.

7. Simplified application maintenance
Entity Framework 4.0: Because entities are used in place of traditional data access code, changes to the underlying database schema do not always require changes to the application code, making it easier to move to a more powerful edition of SQL Server or to replace the underlying database.

8. Complex types
Entity Framework 4.0: Automatically creates complex types for functions, whereas in Enterprise Library we have to create a custom type.
Enterprise Library 5.0: Need to implement manually.

9. Business entities
Entity Framework 4.0: No need to write any code for business entities.
Enterprise Library 5.0: With a hand-written DAL framework it is very hard to come back and change things later, no matter how well isolated your architecture is, because you will have to redo a lot of code.

10. Return types
Entity Framework 4.0: Returns strongly typed objects.
Enterprise Library 5.0: Returns a DataSet/DataTable, which is heavy and still needs to be converted to strong types.

11. Recommended development strategy
Entity Framework 4.0: Data access pieces simply need to be put away in a dedicated layer so that swapping them out and replacing them becomes much easier and makes your application more maintainable. This is simply applying good architectural design.

12. Performance
Entity Framework 4.0: A little slower compared to Enterprise Library, because it generates ad-hoc queries by default instead of mapping to stored procedures; it can be tuned to improve performance (stored procedure mapping is supported in 4.0).

13. Code generation
Entity Framework 4.0: Custom code generation leverages T4 (Text Template Transformation Toolkit), an easy, flexible and powerful code generation tool fully integrated into Visual Studio, to customize the generation of POCO or entity classes.
Enterprise Library 5.0: Requires third-party tools to generate business entities.

14. Maintainability
Entity Framework 4.0: Easily maintainable and changeable.
Enterprise Library 5.0: Changes take more time.

15. Model-driven development
Entity Framework 4.0: Separates the logical storage model of the data from the way you model data within the application; the Entity Framework provides the mapping between the application data model and the relational database model.
Enterprise Library 5.0: No support for MDD.

16. POCO support
Entity Framework 4.0: Supports POCO (plain old CLR objects) templates.
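To make the compiled-query point concrete, here is a minimal sketch of CompiledQuery usage in EF 4.0. The context and entity names (NorthwindEntities, Product) are hypothetical stand-ins for whatever your designer generates:

using System;
using System.Data.Objects;
using System.Linq;

public static class ProductQueries
{
    // Compiled once; the parsed expression tree is kept in memory and reused on every call.
    private static readonly Func<NorthwindEntities, decimal, IQueryable<Product>> ProductsAbovePrice =
        CompiledQuery.Compile((NorthwindEntities ctx, decimal minPrice) =>
            ctx.Products.Where(p => p.UnitPrice >= minPrice));

    public static void PrintExpensiveProducts()
    {
        using (var ctx = new NorthwindEntities())
        {
            // The first call pays the cost of building the query; later calls reuse it.
            foreach (var product in ProductsAbovePrice(ctx, 25m))
            {
                Console.WriteLine(product.ProductName);
            }
        }
    }
}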

 

You can find more information on the Entity Framework at the following links:

http://msdn.microsoft.com/en-us/magazine/cc188702.aspx

http://msdn.microsoft.com/te-in/magazine/cc163766%28en-us%29.aspx

http://channel9.msdn.com/Blogs/wriju/Using-Stored-Procedure-in-ADONET-Entity-Framework-40

http://msdn.microsoft.com/en-us/data/ff191186.aspx

http://msdn.microsoft.com/en-us/data/ef.aspx

http://channel9.msdn.com/Blogs/matthijs/C-40-and-beyond-by-Anders-Hejlsberg

http://msdn.microsoft.com/en-us/library/bb896279.aspx

http://jalpesh.blogspot.com/2010/08/entity-framework-40-bind-stored.html

Recommended Entity Framework books:

1.Entity Framework 4.0 Recipes: A Problem-Solution Approach- By Larry Tenny, Zeeshan Hirani

2.Pro Entity Framework 4.0- By Jim Wightman, Scott Klein

Internationalization and best practices for globalization vs. localization


                             

                             Nowadays clients want internationalization in almost every application, but many people don't know the better practices for implementing globalization vs. localization. I have listed some of them below; apply them according to your project requirements. You can find the basic tutorial on MSDN here.

1. Labels are heavy:                              Using a Label control for every magic string becomes very heavy, because for every Label the ASP.NET engine needs to generate a span tag.

Recommended way: A Literal control is best, because it is lightweight and does not generate any span tags.
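For comparison, here is how the two controls render (the IDs and text are just examples):

<%-- Renders only the text: Welcome --%>
<asp:Literal ID="litWelcome" runat="server" Text="Welcome" />

<%-- Renders <span id="lblWelcome">Welcome</span> around the text --%>
<asp:Label ID="lblWelcome" runat="server" Text="Welcome" />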

2. Localization can slow the response:

                         For every resource file, the ASP.NET engine compiles and creates a separate DLL at runtime.

Recommended way: Don't use too many local resource files. Instead of localization, it is better to use globalization (wherever that is sufficient).
 
3. Implicit declaration vs. explicit declaration

                         If a value is used in more than one place, use an explicit declaration with global resources.
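For instance, an explicit localization expression that pulls a value from a global resource file looks like this (SharedResources.resx in App_GlobalResources and the WelcomeText key are hypothetical names):

<asp:Literal ID="litWelcome" runat="server" Text="<%$ Resources:SharedResources, WelcomeText %>" />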
 

4. Assembly vs. satellite assembly

                             When you make a change to a default resource file, either local or global, ASP.NET recompiles the resources and restarts the ASP.NET application, which can affect the overall performance of your site. If you add satellite resource files, the resources are not recompiled, but the ASP.NET application still restarts. No special mechanism is required: just follow the naming standards, and at runtime the CLR creates the satellite assemblies automatically.
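As an example of the naming standard (assuming a page named Default.aspx), the culture-specific files simply add the culture name before the .resx extension:

App_LocalResources\Default.aspx.resx        (default / neutral culture)
App_LocalResources\Default.aspx.fr.resx     (French)
App_LocalResources\Default.aspx.fr-CA.resx  (French - Canada)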

Tuesday, October 26, 2010

Get product by category using commerce server

                         Recently we hit a problem in our application: some products' breadcrumbs were not displaying properly. I investigated and found the cause.
Problem:       When a product is associated with two categories, it is always fetched from its primary parent category.

Example:
             When a product is associated with category A and category B, it is always fetched from category A even when we request it as a category B product.
                    If you don't have much knowledge of the Commerce Server API, you can get some background from here.....
                             
  Runtime Commerce Server APIs (those that are used to retrieve info for use in an e-commerce website)
    * ProductCatalog: Getting products from a catalog along with its child products
    * CatalogContext.GetCatalogs: Getting a set of catalogs from the current context
    * CatalogContext.GetCategory: Getting the details of a category including child categories
    * CatalogContext.GetProduct: Getting the details of a product and product family including the variants
    * CatalogSearch: Searching the catalog with the basic properties specified
   
Here is the code for fetching a product based on product id and category:
/// <summary>
       /// Method: GetProductByCategory
       /// Description: This method is used to get the product information based on productId and categoryName.
       /// </summary>
       /// <returns>DataTable</returns>
       public static DataTable  GetProductByCategory(string currentCatalog, string categoryName, string productId, string language)
       {
           ProductCatalog virtualCatalog = CommerceContext.Current.CatalogSystem.GetCatalog(currentCatalog);
           Category virtualCatalogCategory = virtualCatalog.GetCategory(categoryName);
           DataSet dsProducts =  virtualCatalogCategory.GetProducts();  
           DataView dvFilteredProducts = dsProducts.Tables[0].DefaultView;
           dvFilteredProducts.RowFilter = "ProductId='" + productId + "'";
           return dvFilteredProducts.ToTable();
       }
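A hedged usage sketch; the catalog, category, product id, and column name below are placeholders for whatever exists in your own catalog:

DataTable product = GetProductByCategory("RetailCatalog", "CategoryB", "PROD-1001", "en-US");
if (product.Rows.Count > 0)
{
    // "DisplayName" is an assumed column name; use whichever catalog property you need.
    string displayName = product.Rows[0]["DisplayName"].ToString();
}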

Friday, October 15, 2010

Getting the browser's height and width using JavaScript (works in IE and Firefox)

                                            Getting the browser's height and width is quite useful in many scenarios. Recently I got a requirement in one of my projects to set the background image based on the browser width and height. But the problem is that there are so many different browsers out there, and you don't know which one the end user will be using. Another issue is that the laptops and desktops available in the market come with many different screen resolutions.

                                        So most probably you are fine tuning your application based on the server or the PC or the laptop that you are using to develop. It will look perfectly fine until one day you decide to demo to your colleagues using a different laptop. All the layouts and alignments will be wrong. This happens if you hard code the height and width of the objects that you require in your site.

                                                  The best way for such applications is to get the height and the width of the client dynamically (using JavaScript) and then setting the height and the width of the objects dynamically. You can use the following JavaScript example to get the height and the width of the client.

                                             In my project I needed to set the right-side background image based on the browser width and height, so I have done this. Please modify the code according to your requirement.

<script type="text/javascript" language="javascript">
       window.onresize = SetGal;

       function SetGal() {
           var myWidth;
           var myHeight;

           if (typeof (window.innerWidth) == 'number') {
               // Non-IE browsers
               myWidth = window.innerWidth;
               myHeight = window.innerHeight;
           }
           else if (document.documentElement && (document.documentElement.clientWidth || document.documentElement.clientHeight)) {
               // IE 6+ in standards-compliant mode
               myWidth = document.documentElement.clientWidth;
               myHeight = document.documentElement.clientHeight;
           }
           else if (document.body && (document.body.clientWidth || document.body.clientHeight)) {
               // IE in quirks mode
               myWidth = document.body.clientWidth - 80;
               myHeight = document.body.clientHeight;
           }

           // This is the element I am sizing dynamically based on the browser width and height.
           document.getElementById("gal_bg_2").style.height = myHeight + 'px';
           document.getElementById("gal_bg_2").style.width = myWidth + 'px';
       }
   </script>
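A minimal sketch of the markup the script above expects: the element whose size is adjusted, plus an initial call so the size is also set on first load (the image path is just an example):

<body onload="SetGal();">
    <div id="gal_bg_2" style="background: url('images/right-bg.png') no-repeat;"></div>
</body>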

Wednesday, October 6, 2010

Prevent browser caching of web pages in asp.net which works in all browsers (IE/Firefox..)

In this article I am going to explain how to prevent browser caching of web pages in ASP.NET. It is one of the biggest issues every developer faces.

      Why browser caching?
                           To speed up the user experience on the web, most browsers implement a technology called caching. Caching allows information such as web pages, images, and so on to be saved on a user's computer. If the user requests a previously visited web page, the browser can access the information more quickly by recalling it from the cache rather than making another request to the site itself.
                         On one side this is an advantage, but when you display sensitive information it becomes a big drawback. Recently we found a problem in our current project: after a user logs in, performs some operations, and signs out, clicking the Back button still displays the logged-in page. Hmmm..... We tried different ways to handle the issue, but we kept running into problems with Firefox.

                        So I decided to write the logic in the master page's Load event, and I added some logic to the logout page. Here is the code.

Place this code in the master page's Load event:
HttpContext.Current.Response.Cache.SetAllowResponseInBrowserHistory(false);  
            HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.NoCache);  
            HttpContext.Current.Response.Cache.SetNoStore();
            Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
            Response.Cache.SetValidUntilExpires(true);
 




         In the logout page's Load event, add this code:
  


Response.AddHeader("Pragma", "no-cache");
            Response.CacheControl = "no-cache";
            Response.Cache.SetAllowResponseInBrowserHistory(false);
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.Cache.SetNoStore();
            Response.Expires = -1;
            Session.Abandon();
            ClientScript.RegisterClientScriptBlock(this.GetType(),"signout", "DisableHistory()", true );


  
Write this code in the logout page's markup:






<script type="text/javascript">
       function DisableHistory() {
           // Push the browser forward so the Back button cannot return to the cached page.
           window.history.forward(1);
       }
       function RedirectToHome() {
           setTimeout("window.location = 'Index.aspx'", 0);
       }
   </script>
 

Call the RedirectToHome method in the body onload of the logout page:




<body onload ="RedirectToHome();">
 
Run the application. Have fun…
 
 
 

Thursday, September 23, 2010

Retain TreeView Navigation state after postback in asp.net

                                                 I read an article on the Microsoft site about how to persist TreeView navigation state across pages, but it did not fit my application. The logic is very heavy: the application slows down (and sometimes stops responding) because the tree view state is kept in session, which is a burden on the web server, and the tree view's view state grows on every postback. Hmmm....
               
                                                I'm trying to keep my master page as "light" as possible without using a lot of code, because I don't want to slow down the entire site. And the tree view posts back every time you click a node.

                                                I'm sure it can be simpler. Another idea was to put the tree view in an iframe, similar to the MSDN web site, but that creates some issues with design and SEO.
                   
                                        So I started to refactor the code and architecture as per my project requirements.
               
                   
Here are the advantages of the new code.
               
New Improvements


1.    No more round trips between client and server when the selected node changes.
2.    No state maintained on the web server.
3.    No more loops for saving and restoring view state in session.
4.    Tree view works without view state (big advantage).
5.    More than six methods/events removed.
6.    Client-side redirection implemented instead of server-side.


Note: In my application I created a user control for the left navigation, and it contains the tree view control.

Left navigation tree view markup:

 
 <asp:TreeView ID="TreeviewNavigation" runat="server" NodeWrap="true" EnableViewState="false"
            ExpandDepth="0" ShowLines="true" HoverStyle="border=solid 1px;color=black;background:white;font-size=16;font-weight:bold">
            <SelectedNodeStyle CssClass="SelectedNode"></SelectedNodeStyle>
        </asp:TreeView>





Code-behind logic:

       protected void PagePreRender(object sender, EventArgs e)
        {
            try
            {
                FillLeftNavigation();
                ExpandTreeView(TreeviewNavigation.Nodes);
            }
            catch (Exception ex)
            {
                // Exception handling logic
            }
        }

        public void FillLeftNavigation()
        {
            TreeviewNavigation.ShowExpandCollapse = true;
            // Set the default state of all nodes.
            TreeviewNavigation.CollapseAll();
        }

        private void ExpandTreeView(TreeNodeCollection nodes)
        {
            foreach (TreeNode node in nodes)
            {
                // The selected node value is passed through the query string.
                string reqNodeValue = HttpContext.Current.Request["ReqVal"];

                // Compare the value with the tree view node; if it matches, select it and expand its parents.
                if (node.Value.ToString() == reqNodeValue)
                {
                    node.Selected = true;
                    ExpandParent(node.Parent);
                }

                if (node.ChildNodes.Count > 0)
                {
                    ExpandTreeView(node.ChildNodes);
                }
            }
        }

        private void ExpandParent(TreeNode node)
        {
            if (node != null)
            {
                node.Expand();
                ExpandParent(node.Parent);
            }
        }

TreeView node without firing the __doPostBack script

                                        
                                              I have a tree view, and in the Page_Load event I populate it completely. I do not have any server-side event for the tree view, but still, when the user clicks a node, the page posts back. How can I disable this annoying autopostback? There is no AutoPostBack="false" property for an asp:TreeView. I tried different ways to handle it with no luck, but there is a simple way. Here is how to remove the __doPostBack client-side script.

                                              Use the NavigateUrl property of the tree view nodes to remove the __doPostBack. A sample sketch is given below; specify a valid URL based on your requirement.
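As an illustration (the page name here is hypothetical), setting NavigateUrl when the nodes are created makes each node render as a plain hyperlink, so no __doPostBack call is emitted for it. The ReqVal query string parameter matches the one read by ExpandTreeView above:

TreeNode node = new TreeNode("Laptops");
node.Value = "Laptops";
// Products.aspx is a hypothetical target page; any valid URL works here.
node.NavigateUrl = "~/Products.aspx?ReqVal=" + Server.UrlEncode(node.Value);
TreeviewNavigation.Nodes.Add(node);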

Tuesday, September 21, 2010

Data at the root level is invalid. Line 1, position 1



Problem:

The problem is that strings are stored internally as UTF-16 in .NET; however, the encoding specified in the XML document header may be different, e.g.:


<?xml version="1.0" encoding="utf-8"?>



              Each Unicode character in a string is defined by a Unicode scalar value, also called a Unicode code point or the ordinal (numeric) value of the Unicode character. Each code point is encoded using UTF-16 encoding, and the numeric value of each element of the encoding is represented by a Char object.

This means that when you pass XmlDocument.LoadXml() a string with an XML header, the header must say the encoding is UTF-16. Otherwise, the actual underlying encoding won't match the encoding reported in the header, and an XmlException will be thrown.


Solution:

The solution is to make sure that the encoding of whatever you pass to the Load or LoadXml method matches what the XML header says it is. In the example above, either change the XML header to state UTF-16, or keep the input encoded as UTF-8 and use one of the XmlDocument.Load methods instead of the XmlDocument.LoadXml method.

XmlDocument xmlDocument = new XmlDocument();
xmlDocument.Load(ConfigurationManager.AppSettings["CatalogXMlRootPath"] + Catalog + ".xml");
return xmlDocument;
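If the XML arrives as raw bytes rather than as a file on disk, the same idea applies: load from a stream so the reader detects the encoding from the bytes themselves and it cannot disagree with the header. A minimal sketch (the file path is hypothetical):

byte[] rawXml = System.IO.File.ReadAllBytes(@"C:\Catalogs\Books.xml");

XmlDocument doc = new XmlDocument();
using (System.IO.MemoryStream stream = new System.IO.MemoryStream(rawXml))
{
    // Load(Stream) sniffs the encoding from the byte stream and the XML declaration.
    doc.Load(stream);
}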


Check list:

  • Check whether the given XML file has any unclosed tags.
  • Check whether the XML file starts with an empty line.
  • Have you given enough permission on the XML folder?
  • Use XmlDocument.Load instead of the LoadXml method.

Monday, September 13, 2010

Showing a default image when an image is not found

                      
 Problem:

         Recently we found a problem in our application: when an image is not available, Firefox does not show anything, and IE just shows a red cross mark, which does not look good from the client's perspective.
           
                                 So what I want to do is have a "default" image that is displayed to anyone who requests an image that is no longer in a specific directory (basically any 404 request in that directory).

Solution:
                                Don't write any server-side logic to read an image file from an external web server; you may or may not have permission to read it. Just use the JavaScript "onerror" event (sample code below); it will solve your problem.

                   
                   
Note:
              It works with any web server; there is no need to bother about the external web server's permissions and IP address. Let me know if you need any help or clarification.


Example:


 <asp:Image ID="imgHeaderLogo" runat="server" onerror="this.src='http://www.google.com.sg/images/default.png';" />



Explanation:

 In this.src you can specify a default image with the required dimensions, based on your requirement.

Thursday, August 26, 2010

Move viewstate to bottom of the page

                                     Recently I hit a problem in my application: the index page had about 440,000 characters (nearly nine pages) of view state. Hmmm..... I would like to move all the .NET __VIEWSTATE stuff to the bottom of the page.

      

                                        The reason is that the page would be much more search-engine friendly and would render faster in the browser. So I implemented this by overriding the "RenderControl" method in the master page to render the view state at the bottom of the page. Here is the code.



/// <summary>
/// Moves the __VIEWSTATE hidden field to the bottom of the page.
/// </summary>
public override void RenderControl(HtmlTextWriter writer)
{
    using (System.IO.StringWriter stringWriter = new System.IO.StringWriter())
    {
        using (HtmlTextWriter htmlWriter = new HtmlTextWriter(stringWriter))
        {
            base.Render(htmlWriter);
            string html = stringWriter.ToString();

            // Find the __VIEWSTATE hidden input in the rendered markup.
            int beginPoint = html.IndexOf("<input type=\"hidden\" name=\"__VIEWSTATE\"");
            if (beginPoint >= 0)
            {
                int endPoint = html.IndexOf("/>", beginPoint) + 2;
                string viewstateInput = html.Substring(beginPoint, endPoint - beginPoint);
                html = html.Remove(beginPoint, endPoint - beginPoint);

                // Re-insert the hidden field just before the closing </form> tag.
                int formEndStart = html.IndexOf("</form>");
                if (formEndStart >= 0)
                {
                    html = html.Insert(formEndStart, viewstateInput);
                }
            }

            writer.Write(html);
        }
    }
}





Copy this code and paste it into the master page, then run the application. You will now see __VIEWSTATE rendered at the bottom of the page.

Thursday, August 5, 2010

Position UpdateProgress at the mouse click in ASP.NET (works in all browsers)

I got a new requirement: position the UpdateProgress control on the screen where the user clicks with the mouse. I googled and found some code which did not work across browsers, so I implemented it to work with all browsers.


Copy the code below and paste it into a new .js file named "AjaxProgressHandler.js".

Click here to download the AjaxCustomProgress sample project.
 
function close_popup(pop_id)
 {
    var popup = pop_id;
    document.getElementById(popup).style.display = 'none';
}
function openSpecXY(pop_id, posX, posY)
 {
    var popup = pop_id;
    document.getElementById(popup).style.left = posX + 'px';
    document.getElementById(popup).style.top = posY + 'px';
    document.getElementById(popup).style.display = 'block';
}

function showhide(id)
 {
    if (document.getElementById) {
        obj = document.getElementById(id);
        if (obj.style.display == "none") {
            obj.style.display = "";
        }
        else {
            obj.style.display = "none";
        }
    }
}
// Detect if the browser is IE or not.
// If it is not IE, we assume that the browser is NS.
var IE = document.all ? true : false
// If NS -- that is, !IE -- then set up for mouse capture
if (!IE) document.captureEvents(Event.MOUSEMOVE)
// Set-up to use getMouseXY function onMouseMove
document.onclick = getMouseXY;
// Temporary variables to hold mouse x-y pos.s
var mousePosX = 0
var mousePosY = 0
var lastPopUp;
// Main function to retrieve mouse x-y pos.s

function getMouseXY(e) {
    try {
        if (IE) { // grab the x-y pos.s if browser is IE
            mousePosX = event.clientX + getScrollXY()[0]
            mousePosY = event.clientY + getScrollXY()[1]
        } else {  // grab the x-y pos.s if browser is NS
            mousePosX = e.pageX
            mousePosY = e.pageY
        }
        // catch possible negative values in NS4
        if (mousePosX < 0) { mousePosX = 0 }
        if (mousePosY < 0) { mousePosY = 0 }
    } catch (e) {
    }
}
function getScrollXY() {
    var scrOfX = 0, scrOfY = 0;
    if (typeof (window.pageYOffset) == 'number') {
        //Netscape compliant
        scrOfY = window.pageYOffset;
        scrOfX = window.pageXOffset;
    } else if (document.body && (document.body.scrollLeft || document.body.scrollTop)) {
        //DOM compliant
        scrOfY = document.body.scrollTop;
        scrOfX = document.body.scrollLeft;
    } else if (document.documentElement && (document.documentElement.scrollLeft || document.documentElement.scrollTop)) {
        //IE6 standards compliant mode
        scrOfY = document.documentElement.scrollTop;
        scrOfX = document.documentElement.scrollLeft;
    }
    return [scrOfX, scrOfY];
}
function open_popupRelative(popUpDivID, divApproxWidth, divApproxHeight, closeLastPopUp) {
    //alert(getScrollXY()[1]);alert("open_popupRelative is" + popUpDivID);
    if (!divApproxWidth) {
        divApproxWidth = 0;
    }
    if (!divApproxHeight) {
        divApproxHeight = 0;
    }
    if (closeLastPopUp) {
        if (lastPopUp) {
            //alert(lastPopUp);
            close_popup(lastPopUp);
        }
    }
    var popup = popUpDivID;
    if (popup != "flexInner" && popup != "productlist") {
        lastPopUp = popup;
    }
    var divElement = document.getElementById(popup);
    divElement.style.left = mousePosX - divApproxWidth + 'px';
    divElement.style.top = mousePosY - divApproxHeight + 'px';
    //alert('x:'+mousePosX+'::y:'+mousePosY+'divX:'+divApproxWidth+'divY:'+divApproxHeight);
    divElement.style.display = 'block';
}
if (document.images) {
    img1 = new Image();
    img2 = new Image();
    img1.src = "4-0.gif";
    img2.src = "4-0.gif";
}
function cancelMyAjaxPostBack() {
    try {
        Sys.WebForms.PageRequestManager.getInstance().abortPostBack();
    } catch (e) { }
}
function pageLoad() {
    Sys.WebForms.PageRequestManager.getInstance().add_initializeRequest(showMyLoading);
    Sys.WebForms.PageRequestManager.getInstance().add_endRequest(stopMyLoading);
}
function showMyLoading(sender, args) {
    open_popupRelative('ajax_req_process_div', 0, 0, 1);
}
function stopMyLoading(sender, args) {
    close_popup('ajax_req_process_div');
    if (args.get_error() != undefined) {
        args.set_errorHandled(true);
    }
}
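For reference, here is a minimal sketch of the page markup this script assumes: a ScriptManager (so that pageLoad is called by ASP.NET AJAX), a reference to the .js file, and a hidden div whose id matches the one passed to open_popupRelative/close_popup. The image and styling are just examples:

<script type="text/javascript" src="AjaxProgressHandler.js"></script>
<asp:ScriptManager ID="ScriptManager1" runat="server" />

<div id="ajax_req_process_div" style="display: none; position: absolute;">
    <img src="4-0.gif" alt="Loading..." />
    <a href="javascript:void(0);" onclick="cancelMyAjaxPostBack();">Cancel</a>
</div>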







Execute the application. Have fun!
 


Tuesday, August 3, 2010

Tips to improve Performance of Web Application

Hi all ,

For the last few days I have been working on improving the performance of an application. I have worked with the YSlow and Fiddler performance analysis tools and collected some of the important points below. They will be useful for every developer.


1. Make fewer HTTP requests

Decreasing the number of components on a page reduces the number of HTTP requests required to render the page, resulting in faster page loads. Some ways to reduce the number of components include: combine files, combine multiple scripts into one script, combine multiple CSS files into one style sheet, and use CSS Sprites and image maps.

2. Content Delivery Network (CDN)

User proximity to web servers impacts response times. Deploying content across multiple geographically dispersed servers helps users perceive that pages are loading faster.

3. Add Expires headers

Web pages are becoming increasingly complex with more scripts, style sheets, images, and Flash on them. A first-time visit to a page may require several HTTP requests to load all the components. By using Expires headers these components become cacheable, which avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often associated with images, but they can and should be used on all page components including scripts, style sheets, and Flash.
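As a sketch of one way to do this on IIS 7 in integrated mode (the seven-day max age is just an example; tune it to how often your static content changes):

<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
  </staticContent>
</system.webServer>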

4. Compress components with gzip

Compression reduces response times by reducing the size of the HTTP response. Gzip is the most popular and effective compression method currently available and generally reduces the response size by about 70%. Approximately 90% of today's Internet traffic travels through browsers that claim to support gzip.
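On IIS 7 this can be switched on from web.config; a minimal sketch (static compression is usually safe, while dynamic compression costs some CPU):

<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>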

5. Put CSS at top

Moving style sheets to the document HEAD element helps pages appear to load quicker since this allows pages to render progressively.

6. Put JavaScript at bottom

JavaScript scripts block parallel downloads; that is, when a script is downloading, the browser will not start any other downloads. To help the page load faster, move scripts to the bottom of the page if they are deferrable.
7. Avoid CSS expressions

CSS expressions (supported in IE beginning with Version 5) are a powerful, and dangerous, way to dynamically set CSS properties. These expressions are evaluated frequently: when the page is rendered and resized, when the page is scrolled, and even when the user moves the mouse over the page. These frequent evaluations degrade the user experience.

8. Make JavaScript and CSS external

Using external JavaScript and CSS files generally produces faster pages because the files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded each time the HTML document is requested. This reduces the number of HTTP requests but increases the HTML document size. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the HTML document size is reduced without increasing the number of HTTP requests.

9. Reduce DNS lookups

The Domain Name System (DNS) maps hostnames to IP addresses, just like phonebooks map people's names to their phone numbers. When you type URL www.yahoo.com into the browser, the browser contacts a DNS resolver that returns the server's IP address. DNS has a cost; typically it takes 20 to 120 milliseconds for it to look up the IP address for a hostname. The browser cannot download anything from the host until the lookup completes.

10. Minify JavaScript and CSS

Minification removes unnecessary characters from a file to reduce its size, thereby improving load times. When a file is minified, comments and unneeded white space characters (space, newline, and tab) are removed. This improves response time since the size of the download files is reduced.

11. Avoid URL redirects

URL redirects are made using HTTP status codes 301 and 302. They tell the browser to go to another location. Inserting a redirect between the user and the final HTML document delays everything on the page since nothing on the page can be rendered and no components can be downloaded until the HTML document arrives.

12. Remove duplicate JavaScript and CSS

Duplicate JavaScript and CSS files hurt performance by creating unnecessary HTTP requests (IE only) and wasted JavaScript execution (IE and Firefox). In IE, if an external script is included twice and is not cacheable, it generates two HTTP requests during page loading. Even if the script is cacheable, extra HTTP requests occur when the user reloads the page. In both IE and Firefox, duplicate JavaScript scripts cause wasted time evaluating the same scripts more than once. This redundant script execution happens regardless of whether the script is cacheable.

13. Configure entity tags (ETags)

Entity tags (ETags) are a mechanism web servers and the browser use to determine whether a component in the browser's cache matches one on the origin server. Since ETags are typically constructed using attributes that make them unique to a specific server hosting a site, the tags will not match when a browser gets the original component from one server and later tries to validate that component on a different server.

14. Make AJAX cacheable

One of AJAX's benefits is it provides instantaneous feedback to the user because it requests information asynchronously from the backend web server. However, using AJAX does not guarantee the user will not wait for the asynchronous JavaScript and XML responses to return. Optimizing AJAX responses is important to improve performance, and making the responses cacheable is the best way to optimize them.

15. GET for AJAX requests

When using the XMLHttpRequest object, the browser implements POST in two steps: (1) send the headers, and (2) send the data. It is better to use GET instead of POST since GET sends the headers and the data together (unless there are many cookies). IE's maximum URL length is 2 KB, so if you are sending more than this amount of data you may not be able to use GET.
16. Reduce the number of DOM elements



A complex page means more bytes to download, and it also means slower DOM access in JavaScript. Reduce the number of DOM elements on the page to improve performance.

17. Avoid HTTP 404 (Not Found) error

Making an HTTP request and receiving a 404 (Not Found) error is expensive and degrades the user experience. Some sites have helpful 404 messages (for example, "Did you mean ...?"), which may assist the user, but server resources are still wasted.

18. Reduce cookie size

HTTP cookies are used for authentication, personalization, and other purposes. Cookie information is exchanged in the HTTP headers between web servers and the browser, so keeping the cookie size small minimizes the impact on response time.

19. Use cookie-free domains

When the browser requests a static image and sends cookies with the request, the server ignores the cookies. These cookies are unnecessary network traffic. To workaround this problem, make sure that static components are requested with cookie-free requests by creating a subdomain and hosting them there.




20. Avoid AlphaImageLoader filter

The IE-proprietary AlphaImageLoader filter attempts to fix a problem with semi-transparent true color PNG files in IE versions less than Version 7. However, this filter blocks rendering and freezes the browser while the image is being downloaded. Additionally, it increases memory consumption. The problem is further multiplied because it is applied per element, not per image.

21. Do not scale images in HTML

Web page designers sometimes set image dimensions by using the width and height attributes of the HTML image element. Avoid doing this since it can result in images being larger than needed. For example, if your page requires image myimg.jpg which has dimensions 240x720 but displays it with dimensions 120x360 using the width and height attributes, then the browser will download an image that is larger than necessary.

22. Make favicon small and cacheable

A favicon is an icon associated with a web page; this icon resides in the favicon.ico file in the server's root. Since the browser requests this file, it needs to be present; if it is missing, the browser returns a 404 error (see "Avoid HTTP 404 (Not Found) error" above). Since favicon.ico resides in the server's root, each time the browser requests this file, the cookies for the server's root are sent. Making the favicon small and reducing the cookie size for the server's root cookies improves performance for retrieving the favicon. Making favicon.ico cacheable avoids frequent requests for it.