Wednesday, March 28, 2007

10 Tips for Writing High-Performance Web Applications

Writing a Web application with ASP.NET is unbelievably easy. So easy, many developers don't take the time to structure their applications for great performance. In this article, I'm going to present 10 tips for writing high-performance Web apps. I'm not limiting my comments to ASP.NET applications because they are just one subset of Web applications. This article won't be the definitive guide for performance-tuning Web applications—an entire book could easily be devoted to that. Instead, think of this as a good place to start.
Before becoming a workaholic, I used to do a lot of rock climbing. Prior to any big climb, I'd review the route in the guidebook and read the recommendations made by people who had visited the site before. But, no matter how good the guidebook, you need actual rock climbing experience before attempting a particularly challenging climb. Similarly, you can only learn how to write high-performance Web applications when you're faced with either fixing performance problems or running a high-throughput site.
My personal experience comes from having been an infrastructure Program Manager on the ASP.NET team at Microsoft, running and managing www.asp.net, and helping architect Community Server, which is the next version of several well-known ASP.NET applications (ASP.NET Forums, .Text, and nGallery combined into one platform). I'm sure that some of the tips that have helped me will help you as well.
You should think about the separation of your application into logical tiers. You might have heard of the term 3-tier (or n-tier) physical architecture. These are usually prescribed architecture patterns that physically divide functionality across processes and/or hardware. As the system needs to scale, more hardware can easily be added. There is, however, a performance hit associated with process and machine hopping, thus it should be avoided. So, whenever possible, run the ASP.NET pages and their associated components together in the same application.
Because of the separation of code and the boundaries between tiers, using Web services or remoting will decrease performance by 20 percent or more.
The data tier is a bit of a different beast since it is usually better to have dedicated hardware for your database. However, the cost of process hopping to the database is still high, thus performance on the data tier is the first place to look when optimizing your code.
Before diving in to fix performance problems in your applications, make sure you profile your applications to see exactly where the problems lie. Key performance counters (such as the one that indicates the percentage of time spent performing garbage collections) are also very useful for finding out where applications are spending the majority of their time. Yet the places where time is spent are often quite unintuitive.
There are two types of performance improvements described in this article: large optimizations, such as using the ASP.NET Cache, and tiny optimizations that repeat themselves. These tiny optimizations are sometimes the most interesting. You make a small change to code that gets called thousands and thousands of times. With a big optimization, you might see overall performance take a large jump. With a small one, you might shave a few milliseconds on a given request, but when compounded across the total requests per day, it can result in an enormous improvement.

Performance on the Data Tier
When it comes to performance-tuning an application, there is a single litmus test you can use to prioritize work: does the code access the database? If so, how often? Note that the same test could be applied for code that uses Web services or remoting, too, but I'm not covering those in this article.
If you have a database request required in a particular code path and you see other areas such as string manipulations that you want to optimize first, stop and perform your litmus test. Unless you have an egregious performance problem, your time would be better utilized trying to optimize the time spent in and connected to the database, the amount of data returned, and how often you make round-trips to and from the database.
With that general information established, let's look at ten tips that can help your application perform better. I'll begin with the changes that can make the biggest difference.

Tip 1—Return Multiple Resultsets
Review your database code to see if you have request paths that go to the database more than once. Each of those round-trips decreases the number of requests per second your application can serve. By returning multiple resultsets in a single database request, you can cut the total time spent communicating with the database. You'll be making your system more scalable, too, as you'll cut down on the work the database server is doing managing requests.
While you can return multiple resultsets using dynamic SQL, I prefer to use stored procedures. It's arguable whether business logic should reside in a stored procedure, but I think that if logic in a stored procedure can constrain the data returned (reduce the size of the dataset, time spent on the network, and not having to filter the data in the logic tier), it's a good thing.
Using a SqlCommand instance and its ExecuteReader method to populate strongly typed business classes, you can move the resultset pointer forward by calling NextResult. Figure 1 shows a sample conversation populating several ArrayLists with typed classes. Returning only the data you need from the database will additionally decrease memory allocations on your server.
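A sketch of that pattern follows; the stored procedure name, connection string, and column names are hypothetical placeholders, but the NextResult call is the key move: it advances the reader to the next resultset returned by the same round-trip.

```csharp
using System;
using System.Collections;
using System.Data;
using System.Data.SqlClient;

public class SupplierData
{
    // Hypothetical example: one stored procedure returning two resultsets.
    public static void LoadSuppliersAndProducts(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("getSuppliersAndProducts", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                ArrayList suppliers = new ArrayList();
                while (reader.Read())
                    suppliers.Add(reader["SupplierName"]);

                // Advance to the second resultset: same round-trip, no extra request
                reader.NextResult();
                ArrayList products = new ArrayList();
                while (reader.Read())
                    products.Add(reader["ProductName"]);
            }
        }
    }
}
```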


Tip 2—Paged Data Access


The ASP.NET DataGrid exposes a wonderful capability: data paging support. When paging is enabled in the DataGrid, a fixed number of records is shown at a time, and a paging UI is rendered at the bottom of the grid for navigating backwards and forwards through the records.
There's one slight wrinkle. Paging with the DataGrid requires all of the data to be bound to the grid. For example, your data layer will need to return all of the data and then the DataGrid will filter all the displayed records based on the current page. If 100,000 records are returned when you're paging through the DataGrid, 99,975 records would be discarded on each request (assuming a page size of 25). As the number of records grows, the performance of the application will suffer as more and more data must be sent on each request.
One good approach to writing better paging code is to use stored procedures. Figure 2 shows a sample stored procedure that pages through the Orders table in the Northwind database. In a nutshell, all you're doing here is passing in the page index and the page size. The appropriate resultset is calculated and then returned.
In Community Server, we wrote a paging server control to do all the data paging. You'll see that I am using the ideas discussed in Tip 1, returning two resultsets from one stored procedure: the total number of records and the requested data.
The total number of records returned can vary depending on the query being executed. For example, a WHERE clause can be used to constrain the data returned. The total number of records to be returned must be known in order to calculate the total pages to be displayed in the paging UI. For example, if there are 1,000,000 total records and a WHERE clause is used that filters this to 1,000 records, the paging logic needs to be aware of the total number of records to properly render the paging UI.
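A stored procedure along these lines might look like the following sketch. This is one possible shape of the approach, not the exact Figure 2 listing; the procedure name, sort order, and the use of a keyed temp table are assumptions (SQL Server 2000 era, before ROW_NUMBER was available). @PageIndex is zero-based.

```sql
CREATE PROCEDURE GetOrdersPaged
    @PageIndex INT,
    @PageSize  INT
AS
BEGIN
    SET NOCOUNT ON

    -- Materialize the keys in display order with a sequential index
    CREATE TABLE #PageIndexTable
    (
        IndexId INT IDENTITY(1,1) NOT NULL,
        OrderID INT
    )
    INSERT INTO #PageIndexTable (OrderID)
    SELECT OrderID FROM Orders ORDER BY OrderDate DESC

    -- Resultset 1: total records, which drives the paging UI
    SELECT COUNT(*) AS TotalRecords FROM #PageIndexTable

    -- Resultset 2: only the requested page of rows
    SELECT o.*
    FROM Orders o
        JOIN #PageIndexTable p ON o.OrderID = p.OrderID
    WHERE p.IndexId >  @PageIndex * @PageSize
      AND p.IndexId <= (@PageIndex + 1) * @PageSize
    ORDER BY p.IndexId
END
```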


Tip 3—Connection Pooling


Setting up the TCP connection between your Web application and SQL Server™ can be an expensive operation. Developers at Microsoft have been able to take advantage of connection pooling for some time now, allowing them to reuse connections to the database. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool where it remains connected to the database, as opposed to completely tearing down that TCP connection.
Of course you need to watch out for leaking connections. Always close your connections when you're finished with them. I repeat: no matter what anyone says about garbage collection within the Microsoft® .NET Framework, always call Close or Dispose explicitly on your connection when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you at a predetermined time. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when the garbage collection on the object will actually happen.
To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It's okay to open and close the connection multiple times on each request if you have to (optimally you apply Tip 1) rather than keeping the connection open and passing it around through different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example customizing the connection string based on the logged-in user, you won't get the same optimization value provided by connection pooling. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective. The .NET CLR data performance counters can be very useful when attempting to track down any performance issues that are related to connection pooling.
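A minimal sketch of the open-late, close-early rule; the connection string is a placeholder, and the point is that "using" guarantees Dispose (and thus Close) runs even when an exception is thrown, returning the connection to the pool.

```csharp
using System.Data.SqlClient;

public class OrderData
{
    // Must be identical across calls for requests to share one pool
    private const string ConnString =
        "server=(local);database=Northwind;Integrated Security=SSPI";

    public static int CountOrders()
    {
        using (SqlConnection conn = new SqlConnection(ConnString))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();   // drawn from the pool; no new TCP setup in steady state
            return (int)cmd.ExecuteScalar();
        }                  // Dispose returns the connection to the pool here
    }
}
```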
Whenever your application is connecting to a resource, such as a database, running in another process, you should optimize by focusing on the time spent connecting to the resource, the time spent sending or retrieving data, and the number of round-trips. Optimizing any kind of process hop in your application is the first place to start to achieve better performance.
The application tier contains the logic that connects to your data layer and transforms data into meaningful class instances and business processes. For example, in Community Server, this is where you populate a Forums or Threads collection, and apply business rules such as permissions; most importantly it is where the Caching logic is performed.


Tip 4—ASP.NET Cache API

One of the very first things you should do before writing a line of application code is architect the application tier to maximize and exploit the ASP.NET Cache feature.
If your components are running within an ASP.NET application, you simply need to include a reference to System.Web.dll in your application project. When you need access to the Cache, use the HttpRuntime.Cache property (the same object is also accessible through Page.Cache and HttpContext.Cache).
There are several rules for caching data. First, if data can be used more than once it's a good candidate for caching. Second, if data is general rather than specific to a given request or user, it's a great candidate for the cache. If the data is user- or request-specific, but is long lived, it can still be cached, but may not be used as frequently. Third, an often overlooked rule is that sometimes you can cache too much. Generally on an x86 machine, you want to run a process with no higher than 800MB of private bytes in order to reduce the chance of an out-of-memory error. Therefore, caching should be bounded. In other words, you may be able to reuse a result of a computation, but if that computation takes 10 parameters, you might attempt to cache on 10 permutations, which will likely get you into trouble. One of the most common support calls for ASP.NET is out-of-memory errors caused by overcaching, especially of large datasets.
Figure 3 ASP.NET Cache
There are several great features of the Cache that you need to know. The first is that the Cache implements a least-recently-used algorithm, allowing ASP.NET to force a Cache purge—automatically removing unused items from the Cache—if memory is running low. Secondly, the Cache supports expiration dependencies that can force invalidation. These include time, key, and file. Time is often used, but with ASP.NET 2.0 a new and more powerful invalidation type is being introduced: database cache invalidation. This refers to the automatic removal of entries in the cache when data in the database changes. For more information on database cache invalidation, see Dino Esposito's Cutting Edge column in the July 2004 issue of MSDN® Magazine. For a look at the architecture of the cache, see Figure 3.
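A cache-aside sketch of the pattern: check the Cache first, fall back to the database, then insert with a sliding expiration so unused entries age out on their own. GetSuppliersFromDatabase is a hypothetical data-access call.

```csharp
using System;
using System.Collections;
using System.Web;
using System.Web.Caching;

public class SupplierCache
{
    public static ArrayList GetSuppliers()
    {
        ArrayList suppliers = HttpRuntime.Cache["Suppliers"] as ArrayList;
        if (suppliers == null)
        {
            suppliers = GetSuppliersFromDatabase();
            HttpRuntime.Cache.Insert("Suppliers", suppliers,
                null,                          // no file or key dependency
                Cache.NoAbsoluteExpiration,    // no fixed expiry time
                TimeSpan.FromMinutes(5));      // evict after 5 idle minutes
        }
        return suppliers;
    }

    // Placeholder for the real data-tier call
    private static ArrayList GetSuppliersFromDatabase()
    {
        return new ArrayList();
    }
}
```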


Tip 5—Per-Request Caching

Earlier in the article, I mentioned that small improvements to frequently traversed code paths can lead to big, overall performance gains. One of my absolute favorites of these is something I've termed per-request caching.
Whereas the Cache API is designed to cache data for a long period or until some condition is met, per-request caching simply means caching the data for the duration of the request. A particular code path is accessed frequently on each request but the data only needs to be fetched, applied, modified, or updated once. This sounds fairly theoretical, so let's consider a concrete example.
In the Forums application of Community Server, each server control used on a page requires personalization data to determine which skin to use, the style sheet to use, as well as other personalization data. Some of this data can be cached for a long period of time, but some data, such as the skin to use for the controls, is fetched once on each request and reused multiple times during the execution of the request.
To accomplish per-request caching, use the ASP.NET HttpContext. An instance of HttpContext is created with every request and is accessible anywhere during that request from the HttpContext.Current property. The HttpContext class has a special Items collection property; objects and data added to this Items collection are cached only for the duration of the request. Just as you can use the Cache to store frequently accessed data, you can use HttpContext.Items to store data that you'll use only on a per-request basis. The logic behind this is simple: data is added to the HttpContext.Items collection when it doesn't exist, and on subsequent lookups the data found in HttpContext.Items is simply returned.
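The pattern reduces to a few lines; in this sketch, GetSkinFromDatabase stands in for any expensive per-request lookup, and the names are hypothetical.

```csharp
using System.Web;

public class Personalization
{
    // Per-request caching: the first call in a request pays for the lookup;
    // every later call during the same request reads HttpContext.Items.
    public static string GetCurrentSkin()
    {
        HttpContext context = HttpContext.Current;
        string skin = context.Items["Skin"] as string;
        if (skin == null)
        {
            skin = GetSkinFromDatabase(context.User.Identity.Name);
            context.Items["Skin"] = skin;   // discarded when the request ends
        }
        return skin;
    }

    // Placeholder for the real personalization query
    private static string GetSkinFromDatabase(string userName)
    {
        return "default";
    }
}
```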


Tip 6—Background Processing

The path through your code should be as fast as possible, right? There may be times when you find yourself performing expensive tasks on each request or once every n requests. Sending out e-mails or parsing and validation of incoming data are just a few examples.
When tearing apart ASP.NET Forums 1.0 and rebuilding what became Community Server, we found that the code path for adding a new post was pretty slow. Each time a post was added, the application first needed to ensure that there were no duplicate posts, then it had to parse the post using a "badword" filter, parse the post for emoticons, tokenize and index the post, add the post to the moderation queue when required, validate attachments, and finally, once posted, send e-mail notifications out to any subscribers. Clearly, that's a lot of work.
It turns out that most of the time was spent in the indexing logic and sending e-mails. Indexing a post was a time-consuming operation, and it turned out that the built-in System.Web.Mail functionality would connect to an SMTP server and send the e-mails serially. As the number of subscribers to a particular post or topic area increased, it would take longer and longer to perform the AddPost function.
Indexing and sending e-mail didn't need to happen on each request. Ideally, we wanted to batch this work together and index 25 posts at a time or send all the e-mails every five minutes. We decided to use the same code I had used to prototype database cache invalidation for what eventually got baked into Visual Studio® 2005.
The Timer class, found in the System.Threading namespace, is a wonderfully useful, but less well-known class in the .NET Framework, at least for Web developers. Once created, the Timer will invoke the specified callback on a thread from the ThreadPool at a configurable interval. This means you can set up code to execute without an incoming request to your ASP.NET application, an ideal situation for background processing. You can do work such as indexing or sending e-mail in this background process too.
There are a couple of problems with this technique, though. If your application domain unloads, the timer instance will stop firing its events. In addition, since the CLR has a hard gate on the number of threads per process, you can get into a situation on a heavily loaded server where timers may not have threads to complete on and can be somewhat delayed. ASP.NET tries to minimize the chances of this happening by reserving a certain number of free threads in the process and only using a portion of the total threads for request processing. However, if you have lots of asynchronous work, this can be an issue.
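A minimal sketch of the pattern (SendQueuedEmails is a hypothetical batch job). Note the Timer is kept in a static field so it isn't garbage collected while the application domain is alive; it would typically be started once, for example from Application_Start.

```csharp
using System;
using System.Threading;

public class BackgroundWork
{
    private static Timer emailTimer;

    public static void Start()
    {
        emailTimer = new Timer(
            delegate(object state) { SendQueuedEmails(); },
            null,
            TimeSpan.FromMinutes(5),    // first callback after 5 minutes
            TimeSpan.FromMinutes(5));   // then every 5 minutes thereafter
    }

    // Placeholder: drain the queued notification e-mails in one batch
    private static void SendQueuedEmails()
    {
    }
}
```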
There is not enough room to go into the code here, but you can download a digestible sample at www.rob-howard.net. Just grab the slides and demos from the Blackbelt TechEd 2004 presentation.


Tip 7—Page Output Caching and Proxy Servers

ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content that they generate. If you have an ASP.NET page that generates output, whether HTML, XML, images, or any other data, and you run this code on each request and it generates the same output, you have a great candidate for page output caching.
By simply adding this line to the top of your page:

<%@ OutputCache Duration="60" VaryByParam="none" %>

you can effectively generate the output for this page once and reuse it multiple times for up to 60 seconds, at which point the page will re-execute and the output will once again be added to the ASP.NET Cache. This behavior can also be accomplished using some lower-level programmatic APIs. There are several configurable settings for output caching, such as the VaryByParam attribute just shown. VaryByParam happens to be required, but it allows you to specify the HTTP GET or HTTP POST parameters used to vary the cache entries. For example, default.aspx?Report=1 or default.aspx?Report=2 could be output-cached separately by simply setting VaryByParam="Report". Additional parameters can be named in a semicolon-separated list.
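The lower-level programmatic route runs through the HttpCachePolicy API; this fragment, placed in a page's code (for example in Page_Load), is roughly equivalent to the directive, with "Report" as the assumed vary-by parameter.

```csharp
// Equivalent output-cache policy set in code rather than in the directive
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
Response.Cache.SetValidUntilExpires(true);
Response.Cache.VaryByParams["Report"] = true;
```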
Many people don't realize that when the Output Cache is used, the ASP.NET page also generates a set of HTTP headers that downstream caching servers, such as those used by the Microsoft Internet Security and Acceleration Server or by Akamai. When HTTP Cache headers are set, the documents can be cached on these network resources, and client requests can be satisfied without having to go back to the origin server.
Using page output caching, then, does not make your application more efficient, but it can potentially reduce the load on your server as downstream caching technology caches documents. Of course, this can only be anonymous content; once it's downstream, you won't see the requests anymore and can't perform authentication to prevent access to it.


Tip 8—Run IIS 6.0 (If Only for Kernel Caching)

If you're not running IIS 6.0 (Windows Server™ 2003), you're missing out on some great performance enhancements in the Microsoft Web server. In Tip 7, I talked about output caching. In IIS 5.0, a request comes through IIS and then to ASP.NET. When caching is involved, an HttpModule in ASP.NET receives the request, and returns the contents from the Cache.
If you're using IIS 6.0, there is a nice little feature called kernel caching that doesn't require any code changes to ASP.NET. When a request is output-cached by ASP.NET, the IIS kernel cache receives a copy of the cached data. When a request comes from the network driver, a kernel-level driver (no context switch to user mode) receives the request, and if cached, flushes the cached data to the response, and completes execution. This means that when you use kernel-mode caching with IIS and ASP.NET output caching, you'll see unbelievable performance results. At one point during the Visual Studio 2005 development of ASP.NET, I was the program manager responsible for ASP.NET performance. The developers did the magic, but I saw all the reports on a daily basis. The kernel mode caching results were always the most interesting. The common characteristic was network saturation by requests/responses and IIS running at about five percent CPU utilization. It was amazing! There are certainly other reasons for using IIS 6.0, but kernel mode caching is an obvious one.


Tip 9—Use Gzip Compression

While not necessarily a server performance tip (since you might see CPU utilization go up), using gzip compression can decrease the number of bytes sent by your server. This gives the perception of faster pages and also cuts down on bandwidth usage. Depending on the data sent, how well it can be compressed, and whether the client browsers support it (IIS will only send gzip compressed content to clients that support gzip compression, such as Internet Explorer 6.0 and Firefox), your server can serve more requests per second. In fact, just about any time you can decrease the amount of data returned, you will increase requests per second.
The good news is that gzip compression is built into IIS 6.0 and is much better than the gzip compression used in IIS 5.0. Unfortunately, when attempting to turn on gzip compression in IIS 6.0, you may not be able to locate the setting on the properties dialog in IIS. The IIS team built awesome gzip capabilities into the server, but neglected to include an administrative UI for enabling it. To enable gzip compression, you have to spelunk into the innards of the XML configuration settings of IIS 6.0 (which isn't for the faint of heart). By the way, the credit goes to Scott Forsyth of OrcsWeb who helped me figure this out for the www.asp.net servers hosted by OrcsWeb.
Rather than include the procedure in this article, just read the article by Brad Wilson at IIS6 Compression. There's also a Knowledge Base article on enabling compression for ASPX, available at Enable ASPX Compression in IIS. It should be noted, however, that dynamic compression and kernel caching are mutually exclusive on IIS 6.0 due to some implementation details.


Tip 10—Server Control View State

View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls. View state is a very powerful capability since it allows state to be persisted with the client and it requires no cookies or server memory to save this state. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page that is being displayed when paging through data.
There are a number of drawbacks to the use of view state, however. First of all, it increases the total payload of the page both when served and when requested. There is also an additional overhead incurred when serializing or deserializing view state data that is posted back to the server. Lastly, view state increases the memory allocations on the server.
Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed. View state is enabled by default, but if you don't need it, you can turn it off at the control or page level. Within a control, you simply set the EnableViewState property to false; you can also set it globally within the page using this directive: <%@ Page EnableViewState="false" %>
If you are not doing postbacks in a page or are always regenerating the controls on a page on each request, you should disable view state at the page level.


Conclusion
I've offered you some tips that I've found useful for writing high-performance ASP.NET applications. As I mentioned at the beginning of this article, this is more a preliminary guide than the last word on ASP.NET performance. (More information on improving the performance of ASP.NET apps can be found at Improving ASP.NET Performance.) Only through your own experience can you find the best way to solve your unique performance problems. However, during your journey, these tips should provide you with good guidance. In software development, there are very few absolutes; every application is unique.
See the sidebar "Common Performance Myths".

Monday, March 26, 2007

A tip to make your ASP.NET application run faster!

Don’t run production ASP.NET Applications with debug=”true” enabled
One of the things you want to avoid when deploying an ASP.NET application into production is to accidentally (or deliberately) leave the debug="true" switch on within the application's web.config file.
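For reference, the switch in question is the debug attribute of the <compilation> element; a production web.config should look something like this minimal sketch:

```xml
<configuration>
    <system.web>
        <!-- debug defaults to false; never ship with debug="true" -->
        <compilation debug="false" />
    </system.web>
</configuration>
```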

Doing so causes a number of non-optimal things to happen including:

1) The compilation of ASP.NET pages takes longer (since some batch optimizations are disabled)
2) Code can execute slower (since some additional debug paths are enabled)
3) Much more memory is used within the application at runtime
4) Scripts and images downloaded from the WebResource.axd handler are not cached

This last point is particularly important, since it means that all client JavaScript libraries and static images that are deployed via WebResource.axd will be continually downloaded by clients on each page view request and not cached locally within the browser. This can slow down the user experience quite a bit for things like Atlas, controls like TreeView/Menu/Validators, and any other third-party control or custom code that deploys client resources. Note that the reason why these resources are not cached when debug is set to true is so that developers don't have to continually flush their browser cache and restart it every time they make a change to a resource handler (our assumption is that when you have debug=true set you are in active development on your site).

When <compilation debug="false"/> is set, the WebResource.axd handler will automatically set a long cache policy on resources retrieved via it, so that each resource is only downloaded once to the client and cached there forever (it will also be cached on any intermediate proxy servers). If you have Atlas installed for your application, it will also automatically compress the content from the WebResource.axd handler for you when debug is set to false, reducing the size of any client-script JavaScript library or static resource (and not requiring you to write any custom code or configure anything within IIS to get it).

What about binaries compiled with debug symbols?

One scenario that several people find very useful is to compile/pre-compile an application or associated class libraries with debug symbols so that more detailed stack trace and line error messages can be retrieved from it when errors occur.

The good news is that you can do this without having to have the debug="true" switch enabled in production. Specifically, you can use either a web deployment project or a web application project to pre-compile the code for your site with debug symbols, and then change the debug switch to false right before you deploy the application on the server.

The debug symbols and metadata in the compiled assemblies will increase the memory footprint of the application, but this can sometimes be an ok trade-off for more detailed error messages.

The <deployment retail="true"/> Switch in Machine.config

If you are a server administrator and want to ensure that no one accidentally deploys an ASP.NET application in production with the debug="true" switch enabled within the application's web.config file, one trick you can use with ASP.NET V2.0 is to take advantage of the <deployment> section within your machine.config file.

Specifically, by setting this within your machine.config file:

<configuration>
    <system.web>
        <deployment retail="true"/>
    </system.web>
</configuration>

You will disable the debug="true" switch, disable the ability to output trace output in a page, and turn off the ability to show detailed error messages remotely. Note that these last two items are security best practices you really want to follow (otherwise hackers can learn a lot more about the internals of your application than you want them to).

Setting retail="true" is probably a best practice that any company with formal production servers should follow, to ensure that an application always runs with the best possible performance and no security information leakages. There isn't a ton of documentation on this switch, but you can learn a little more about it here.

Hope this helps,

Scott

Thursday, March 22, 2007

Some techniques to improve your image!

Improve Your Image(s)

Master Image Processing and Management

By Steve C. Orr


A picture is worth a thousand words — and in some cases, it's worth quite a few dollars too. Content is king on the Internet. Scattered throughout company hard drives everywhere are marketing materials, scanned documentation, artwork, charts containing sensitive data, and other valuable images that can do wonders in the right hands — or horrors in the wrong hands. Consolidating these materials into one central system is a common optimization of corporate dollars these days, and these systems usually must provide some way to get at files from across the Internet. Security is rightly a top concern in most document management systems.


In some basic cases you can configure IIS to manage the files and their permissions for you, but often a more customized system is necessary. As you’re probably aware, a standard Image control is defined with the following ASPX code:


<asp:Image ID="Image1" Runat="server" ImageUrl="SomeImage.jpg" />


When the page is output to the browser, the resulting HTML will consist of a standard <img> tag similar to this:


<img ID="Image1" src="SomeImage.jpg" />


A key point here is that the image is not really part of the page from the server’s point of view. Therefore, you can’t really do any custom image processing (such as cropping, resizing, or adding annotations) within the page itself. Rather, the image file name is all that’s written to the page (inside the image tag). As the browser interprets the HTML, it downloads the image from the Web server as a completely separate request.


Now consider the following code:


<asp:Image ID="Image1" Runat="server" ImageUrl="GenImage1.aspx" />


This Image control declaration illustrates that, instead of pointing directly to an image file, you can point an Image control toward a separate ASP.NET page where you can do any fancy dynamic image processing that is needed.


In this example, GenImage1.aspx doesn’t contain any HTML because its sole purpose is to output an image for inclusion in another page. The only code in the Page_Load event calls the procedure listed in Figure 1.


DisplayImage(New Bitmap("C:\PrivateDir\TopSecret.jpg"))

Private Sub DisplayImage(ByVal bmp As Bitmap)
    With HttpContext.Current
        'Clear any existing page content
        .Response.Clear()

        'Set the content type
        .Response.ContentType = "image/jpeg"

        'Output the image to the OutputStream object
        bmp.Save(.Response.OutputStream, Imaging.ImageFormat.Jpeg)

        'Ensure the image is the only thing that is output
        .Response.End()
    End With
End Sub

Figure 1: ASPX pages don’t have to output HTML. This example outputs an image, so that image controls on other pages can reference this page instead of pointing directly to a static image file.


You might choose to add authentication code to a page such as GenImage1 to ensure only proper individuals see the image. You’re also likely to sprinkle in some code to make this simple example more versatile by accepting an image as a url parameter or some other mechanism to serve out a variety of image files instead of a single hard-coded one.


SECURITY ALERT: For an ASP.NET application to effectively manage files, it must have permission to access these files. By default, ASP.NET runs under a user account (intuitively) named ASPNET. This user account has very limited permissions. It will not be able to interact with most of the server’s file system by default, and it won’t have access to any network shares, either. Therefore, you’ll want to give the ASPNET user account the folder permissions it needs, or have ASP.NET use a different user account that does have the necessary permissions.

You can adjust the user account from within IIS, or you can configure impersonation in the web.config or machine.config file. For initial experimentation and debugging, I’d suggest having ASP.NET run under your own user account, since you know which files you have permission to access:


<!-- Web.config file. -->

<identity impersonate="true"/>

<identity impersonate="true" userName="Redmond\BillG" password="Melinda"/>



If the images aren’t stored in a file system, but instead are stored in a SQL Server database, then the code behind for GenImage1.aspx might look more like that shown in Figure 2.


Dim dr As System.Data.SqlClient.SqlDataReader
cmdGetFile.Parameters("@File_ID").Value = Request("AttachmentID").ToString
dbConn.Open()
dr = cmdGetFile.ExecuteReader
If dr.Read Then
    Response.Clear()
    Response.ContentType = dr("ContentType").ToString
    Response.OutputStream.Write(CType(dr("FileData"), Byte()), _
        0, CInt(dr("FileSize")))
    Response.AddHeader("Content-Disposition", _
        "inline;filename=" + dr("FileName").ToString())
End If

Figure 2: You can grab the image data from a database and write the raw file data directly into the Output Stream just before it’s sent to the browser.


This technique shows how you can dump a file directly from a database into the Response.OutputStream. ADO.NET is used to extract the binary data from a SQL Server image field, the data is then converted into a byte array, and, finally, it’s written to the output stream along with a descriptive header to help the browser better interpret the resulting file. For more details on this technique, see Easy Uploads.


Custom Image Generation

By using the functionality included in the System.Drawing namespace, your image manipulation capabilities are limitless. As if that weren’t enough power for a single developer to wield, there are also dozens of third-party components available under such categories as charting, reporting, and image processing libraries. Additionally, you can build your own image processing object models either from scratch or by building on existing technologies. Hopefully by now you’re beginning to realize the full power that can really lie behind the seemingly humble image control.


The previous techniques are great for distributing pre-existing images, but if you need to dynamically create an image from scratch (or modify an existing image on the fly), then the System.Drawing namespace will become quite familiar to you. Using the classes within this namespace you could create dynamic charts, graphs, or other useful output. However, that’s soooo boring! The next example will focus on less tangible corporate enhancements, such as improved morale.


Smiles can be infectious, and the next example will generate as many as you’d like. Call the subroutine shown in Figure 3 to create a randomly generated smiley face.


Private Sub DrawSmiley(ByVal g As Graphics, _
        ByVal Width As Integer, ByVal Height As Integer, _
        ByVal rand As Random)

    Dim SmileyWidth As Integer = rand.Next(Width / 2)
    Dim SmileyHeight As Integer = rand.Next(Height / 2)

    'Draw the head (a big circle)
    Dim x As Integer = rand.Next(Width - SmileyWidth)
    Dim y As Integer = rand.Next(Height - SmileyHeight)
    Dim PenWidth As Integer = rand.Next(5)
    Dim RandomColor As Color = Color.FromArgb(rand.Next(255), _
        rand.Next(255), rand.Next(255))
    Dim Pen As New Pen(RandomColor, PenWidth)
    g.DrawEllipse(Pen, x, y, SmileyWidth, SmileyHeight)

    'Draw the Nose (in the center of the head)
    Dim NoseRect As System.Drawing.RectangleF
    NoseRect.Width = CInt(SmileyWidth / 50)
    NoseRect.Height = CInt(SmileyHeight / 50)
    NoseRect.X = CInt(x + (SmileyWidth / 2) - (NoseRect.Width / 2))
    NoseRect.Y = CInt(y + (SmileyHeight / 2) - (NoseRect.Height / 2))
    g.DrawEllipse(Pen, NoseRect)
    g.FillEllipse(Brushes.Green, NoseRect)

    'Draw the Left Eye
    Dim EyeRect As System.Drawing.RectangleF
    EyeRect.Width = CInt(SmileyWidth / 30)
    EyeRect.Height = CInt(SmileyHeight / 30)
    EyeRect.X = CInt(x + (SmileyWidth / 2) - _
        (EyeRect.Width / 2) - (SmileyWidth / 4))
    EyeRect.Y = CInt(y + (SmileyHeight / 3) - (EyeRect.Height / 2))
    g.DrawEllipse(New Pen(Color.Blue, PenWidth), EyeRect)
    g.FillEllipse(Brushes.Blue, EyeRect)

    'Draw the Right Eye
    EyeRect.Width = CInt(SmileyWidth / 30)
    EyeRect.Height = CInt(SmileyHeight / 30)
    EyeRect.X = CInt(x + (SmileyWidth / 2) - _
        (EyeRect.Width / 2) + (SmileyWidth / 4))
    EyeRect.Y = CInt(y + (SmileyHeight / 3) - (EyeRect.Height / 2))
    g.DrawEllipse(New Pen(Color.Blue, PenWidth), EyeRect)
    g.FillEllipse(Brushes.Blue, EyeRect)

    'Draw the smile
    Dim points(2) As System.Drawing.PointF
    points(0) = New System.Drawing.PointF(CInt(x + (SmileyWidth / 2) - _
        (EyeRect.Width / 2) - (SmileyWidth / 4)), y + (SmileyHeight / 2))
    points(1) = New System.Drawing.PointF(CInt(x + (SmileyWidth / 2)), _
        y + (SmileyHeight / 2) + (SmileyHeight / 4))
    points(2) = New System.Drawing.PointF(CInt(x + (SmileyWidth / 2) - _
        (EyeRect.Width / 2) + (SmileyWidth / 4)), y + (SmileyHeight / 2))
    g.DrawCurve(Pen, points, 1)
End Sub

Figure 3: By using the classes within the System.Drawing namespace, nearly any illustration imaginable can be generated at run time, including a bunch of smiley faces.


The first parameter is a Graphics object, which is the canvas on which this masterpiece will be painted. The height and width of the canvas are also passed along, to help ensure no smileys get abruptly cut off at the edges of the canvas. Finally, a Random object is passed along, which will be used to mix things up a bit.


Using the Random object, a random height and width are generated for the current smiley face, and the head is drawn within this bounding rectangle. A pen of random thickness and color is created; this pen is used to draw most features of the face. The DrawEllipse method draws the outline of an ellipse, and FillEllipse fills one with color; the two are used in concert to produce solid circles. Three smaller circles are then drawn within the head to represent the nose and two eyes. Finally, the smile is drawn by passing an array of points to the DrawCurve method of the Graphics object. All of the mathematical formulas throughout the example simply calculate the position and size of each facial feature.


The final piece of this image generation puzzle is the code that will fill the Page_Load event of GenImage1.aspx and call the DrawSmiley routine. This Page_Load code is listed in Figure 4.


Dim g As Graphics
Dim rand As New Random   'random number generator
Dim bmp As Bitmap        'to hold the picture
Dim Width As Integer = 200    'image width
Dim Height As Integer = 200   'image height
Dim NumberOfSmileys As Integer = 3

'Grab parameters from the querystring (if any)
If Not IsNothing(Request.QueryString("NumSmileys")) Then
    NumberOfSmileys = Int32.Parse(Request.QueryString("NumSmileys"))
End If
If Not IsNothing(Request.QueryString("Width")) Then
    Width = CType(Request.QueryString("Width"), Integer)
End If
If Not IsNothing(Request.QueryString("Height")) Then
    Height = CType(Request.QueryString("Height"), Integer)
End If

'Create a new bitmap of the specified size
bmp = New Bitmap(Width, Height, _
    Drawing.Imaging.PixelFormat.Format16bppRgb565)

'Get the underlying Graphics object
g = Graphics.FromImage(bmp)

'Specify a white background
g.FillRectangle(Brushes.White, g.ClipBounds)

'Smooth out curves
g.SmoothingMode = Drawing2D.SmoothingMode.AntiAlias

'Generate random smileys
For i As Integer = 1 To NumberOfSmileys
    DrawSmiley(g, Width, Height, rand)
Next

DisplayImage(bmp)

Figure 4: This code goes in the Page_Load event of GenImage1.aspx, which can be referenced by the ImageURL property of a standard image control placed on any other page.


First, a few variables are declared with some default values specifying the size of the image and the number of smiley faces that will be drawn. Then the querystring is examined for optional parameters, which will replace the defaults. A blank bitmap is then created with a white background. Antialiasing is turned on to create smoother looking curves for rounded shapes, such as circles and smiles.


The main loop is then entered, iterating once for each smiley face to be drawn by calling the DrawSmiley subroutine mentioned earlier. Finally, the completed image is output by the DisplayImage subroutine in Figure 1.


To see the code in action, create a new WebForm and drop an Image control onto it. Then simply set the ImageURL property of that Image control to point to the GenImage1.aspx page. The result will look a lot like Figure 5.



Figure 5: The humble Image control can turn into a powerful tool once you’ve mastered the art of creating dynamic, configurable images at run time.


Conclusion

You should now have enough knowledge to manage and manipulate images in all kinds of complex ways. The graphical possibilities are endless with these tools at your disposal. You can expand on these ideas in all kinds of ways. For example, you could create image buttons and other graphical page elements on demand to keep your Web site feeling constantly fresh and new. Look for a future article about manipulating existing images at run time, covering resizing, optimizing, cropping, rotating, adding borders, and altering colors and brightness.


The techniques outlined in this article are the foundation for virtually every modern third-party graphing component available on the market today. You could also create your own, if so inclined. Let your imagination wander and let me know what kinds of image creation tools you produce as a result.


The sample code in this article is available for download.

This article was originally published in ASP.NET Pro Magazine.


Check online/offline status in Yahoo Messenger!

This article will show you how to know if you are being marked Permanently Offline by a buddy. I have tested this method using Yahoo Messenger 7.0.0.437.

You can tell that a buddy is online and has marked you permanently offline only if he has signed into Yahoo in Available mode, and not if he has signed in as Invisible.


Here is how to proceed:

Visit any of the following links, replacing [username] with the required Yahoo ID:


  • http://mail.opi.yahoo.com/online?u=[username]&m=g&t=0

    This will show a yellow smiley if the person is online and a gray one if the person is offline or invisible.
  • http://mail.opi.yahoo.com/online?u=[username]&m=g&t=1

    This will show up as a button with “Online Now” or “Not Online”.
  • http://mail.opi.yahoo.com/online?u=[username]&m=g&t=2

    This will show an image (125×25) with “I am Online send me a message” or “Not Online right now”.
  • http://mail.opi.yahoo.com/online?u=[username]&m=a&t=0

    This shows a text with “[username] is ONLINE” or “NOT ONLINE”.
  • http://mail.opi.yahoo.com/online?u=[username]&m=a&t=1

    This shows “00” if the person is offline and “01” if online.

All the above links can be used as an online status generator (with some PHP, of course), and you can also write a small script to find out whether a particular person is hiding from you ;)
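If you'd rather build these URLs in code, here is a minimal sketch in Python. The helper name is my own invention; only the URL format comes from the list above, and the service itself may of course no longer respond.

```python
# Hypothetical helper that builds the Yahoo presence-check URLs listed above.
# Only the URL format is taken from this article; whether the endpoint still
# answers is a separate question.
def yahoo_status_url(username, mode="g", t=0):
    return "http://mail.opi.yahoo.com/online?u=%s&m=%s&t=%d" % (username, mode, t)

print(yahoo_status_url("xyz"))          # smiley-image variant (m=g, t=0)
print(yahoo_status_url("xyz", "a", 1))  # the "00"/"01" text variant
```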

Finally, this method may sometimes return wrong results (if there is a problem with the Yahoo server). If someone appears online to you but shows as offline according to this method, it means he has signed in as Invisible and is appearing online only to you (maybe because you are someone special ;) ).


Note: It always shows incorrect Yahoo IDs as offline, so check the Yahoo ID carefully!


Example: If your Yahoo ID is xyz, try this method using the URL http://mail.opi.yahoo.com/online?u=xyz&m=g&t=0


You can find out your invisible buddies by using BuddySpy (freeware)


Lastly, this article is for educational purposes only, and I am not responsible if anyone’s privacy is disturbed by this method.

Some basic technique with XML Processing!

Introduction

Based on a section of easy-to-read XML source data, I'll show you how to select and locate XML nodes and navigate through them using XPathNavigator and XPathNodeIterator. I'll also provide a few straightforward XPath expression samples that you can follow without difficulty. The last part contains sample code to update, insert, and remove XML nodes.


Some Concepts

  • XML - Extensible Markup Language, describes data structures in text format using your own vocabulary; it does not use predefined tags, so the meaning of the tags is not fixed in advance.
  • XSL - Extensible Stylesheet Language, is designed for expressing stylesheets for XML documents. XSL is to XML as CSS is to HTML.
  • XML Transformation - is a user-defined algorithm that transforms a given XML document to another format, such as XML, HTML, XHTML. The algorithm is described by XSL.
  • XSLT - is designed for use as part of XSL, transforming an XML document into another XML document, or another type of document that is recognized by a browser, like HTML or XHTML. XSLT uses XPath.
  • XPath - is a set of syntax rules for defining parts of an XML document.

To keep this article simple and clear, I'll break it down into two parts and leave XSL and XSLT for my next article.


Using the code

Here is the source XML data:


<?xml version="1.0" encoding="ISO-8859-1"?>
<catalog>
  <cd country="USA">
    <title>Empire Burlesque</title>
    <artist>Bob Dylan</artist>
    <price>10.90</price>
  </cd>
  <cd country="UK">
    <title>Hide your heart</title>
    <artist>Bonnie Tyler</artist>
    <price>10.0</price>
  </cd>
  <cd country="USA">
    <title>Greatest Hits</title>
    <artist>Dolly Parton</artist>
    <price>9.90</price>
  </cd>
</catalog>

If you want to select all of the price elements, here is the code:


using System.Xml;
using System.Xml.XPath;
....
string fileName = "data.xml";
XPathDocument doc = new XPathDocument(fileName);
XPathNavigator nav = doc.CreateNavigator();

// Compile a standard XPath expression
XPathExpression expr;
expr = nav.Compile("/catalog/cd/price");
XPathNodeIterator iterator = nav.Select(expr);

// Iterate on the node set
listBox1.Items.Clear();
try
{
    while (iterator.MoveNext())
    {
        XPathNavigator nav2 = iterator.Current.Clone();
        listBox1.Items.Add("price: " + nav2.Value);
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}

In the above code, we used "/catalog/cd/price" to select all the price elements. If you just want to select all the cd elements with price greater than 10.0, you can use "/catalog/cd[price>10.0]". Here are some more examples of XPath expressions:


/catalog - selects the root element
/catalog/cd - selects all the cd elements of the catalog element
/catalog/cd/price - selects all the price elements of all the cd elements of the catalog element
/catalog/cd[price>10.0] - selects all the cd elements with price greater than 10.0
a path starting with a slash (/) - represents an absolute path to an element
a path starting with two slashes (//) - selects all elements in the document that satisfy the criteria
//cd - selects all cd elements in the document
/catalog/cd/title, /catalog/cd/artist - selects all the title and artist elements of the cd elements of catalog
//title, //artist - selects all the title and artist elements in the document
/catalog/cd/* - selects all the child elements of all cd elements of the catalog element
/catalog/*/price - selects all the price elements that are grandchildren of catalog
/*/*/price - selects all price elements which have two ancestors
//* - selects all elements in the document
/catalog/cd[1] - selects the first cd child of catalog
/catalog/cd[last()] - selects the last cd child of catalog
/catalog/cd[price] - selects all the cd elements that have a price child
/catalog/cd[price=10.90] - selects the cd elements with a price of 10.90
/catalog/cd[price=10.90]/price - selects all price elements with the price of 10.90
//@country - selects all "country" attributes
//cd[@country] - selects cd elements which have a "country" attribute
//cd[@*] - selects cd elements which have any attribute
//cd[@country='UK'] - selects cd elements with the "country" attribute equal to 'UK'
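As an aside, Python's standard library supports a subset of these expressions through xml.etree.ElementTree (equality predicates and attribute tests work, but comparisons such as price>10.0 do not). A small sketch against the same catalog data:

```python
import xml.etree.ElementTree as ET

# The catalog data from above, trimmed to the essentials.
xml_data = """<catalog>
  <cd country="USA"><title>Empire Burlesque</title><price>10.90</price></cd>
  <cd country="UK"><title>Hide your heart</title><price>10.0</price></cd>
  <cd country="USA"><title>Greatest Hits</title><price>9.90</price></cd>
</catalog>"""

root = ET.fromstring(xml_data)  # root is the <catalog> element itself

# "./cd/price" - all price elements (paths are relative to the root element)
prices = [p.text for p in root.findall("./cd/price")]

# "cd[@country='UK']" - cd elements whose country attribute equals 'UK'
uk_cds = root.findall("cd[@country='UK']")

# "cd[price='10.90']" - cd elements with a price child whose text is '10.90'
exact = root.findall("cd[price='10.90']")

print(prices)       # ['10.90', '10.0', '9.90']
print(len(uk_cds))  # 1
print(len(exact))   # 1
```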

To update a cd node, first locate the node to be updated with SelectSingleNode, then create a new cd element. After setting the InnerXml of the new node, call the ReplaceChild method of XmlElement to update the document. The code is as follows:


XmlTextReader reader = new XmlTextReader(FILE_NAME);
XmlDocument doc = new XmlDocument();
doc.Load(reader);
reader.Close();

// Select the cd node with the matching title
XmlNode oldCd;
XmlElement root = doc.DocumentElement;
oldCd = root.SelectSingleNode("/catalog/cd[title='" + oldTitle + "']");

XmlElement newCd = doc.CreateElement("cd");
newCd.SetAttribute("country", country.Text);
newCd.InnerXml = "<title>" + this.comboBox1.Text + "</title>" +
                 "<artist>" + artist.Text + "</artist>" +
                 "<price>" + price.Text + "</price>";

root.ReplaceChild(newCd, oldCd);

// Save the output to a file
doc.Save(FILE_NAME);

Similarly, use InsertAfter and RemoveChild to insert and remove a node, check it out in the demo. When you run the application, make sure that "data.xml" is in the same directory as the EXE file.


Points of Interest

To recap: XmlDocument is an in-memory (cached) tree representation of an XML document, and it is somewhat resource-intensive. If you have a large XML document and limited memory, use XmlReader and XmlWriter instead for better performance.
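The same tree-versus-stream trade-off exists in other XML APIs. As an illustrative aside, Python's streaming counterpart is iterparse, which walks the document without ever holding the whole tree:

```python
import io
import xml.etree.ElementTree as ET

xml_data = "<catalog><cd><price>10.90</price></cd><cd><price>9.90</price></cd></catalog>"

# Stream the document instead of loading it all at once; clearing each
# element after use keeps memory flat even for very large files.
prices = []
for event, elem in ET.iterparse(io.StringIO(xml_data), events=("end",)):
    if elem.tag == "price":
        prices.append(elem.text)
        elem.clear()

print(prices)  # ['10.90', '9.90']
```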

All things about Robots.txt!

The importance of robots.txt
Although the robots.txt file is a very important file if you want to have a good ranking on search engines, many Web sites don't offer this file.
If your Web site doesn't have a robots.txt file yet, read on to learn how to create one. If you already have a robots.txt file, read our tips to make sure that it doesn't contain errors.
What is robots.txt?
When a search engine crawler comes to your site, it will look for a special file called robots.txt. This file tells the search engine spider which Web pages of your site should be indexed and which should be ignored.
The robots.txt file is a simple text file (no HTML) that must be placed in your root directory, for example:
http://www.yourwebsite.com/robots.txt
How do I create a robots.txt file?
As mentioned above, the robots.txt file is a simple text file; open a plain text editor to create it. The content of a robots.txt file consists of so-called "records".
A record contains the information for a specific search engine. Each record consists of two fields: a User-agent line and one or more Disallow lines. Here's an example:
User-agent: googlebot
Disallow: /cgi-bin/

This robots.txt file would allow "googlebot", the search engine spider of Google, to retrieve every page from your site except for files from the "cgi-bin" directory. All files in the "cgi-bin" directory will be ignored by googlebot.
The Disallow command works as a prefix match. If you enter

User-agent: googlebot
Disallow: /support

both "/support-desk/index.html" and "/support/index.html", as well as all other files whose paths begin with "/support", would not be indexed by search engines.
If you leave the Disallow line blank, you're telling the search engine that all files may be indexed. In any case, you must enter a Disallow line for every User-agent record.
If you want to give all search engine spiders the same rights, use the following robots.txt content:
User-agent: *
Disallow: /cgi-bin/
Where can I find user agent names?
You can find user agent names in your log files by checking for requests to robots.txt. Most often, all search engine spiders should be given the same rights; in that case, use "User-agent: *" as mentioned above.
Things you should avoid
If you don't format your robots.txt file properly, some or all files of your Web site might not get indexed by search engines. To avoid this, do the following:
Don't use comments in the robots.txt file. Although comments are allowed, they might confuse some search engine spiders: "Disallow: support # Don't index the support directory" might be misinterpreted as "Disallow: support#Don't index the support directory".
Don't use white space at the beginning of a line. For example, don't write

    User-agent: *
    Disallow: /support

but

User-agent: *
Disallow: /support
Don't change the order of the commands. If your robots.txt file should work, don't mix it up. Don't write

Disallow: /support
User-agent: *

but

User-agent: *
Disallow: /support
Don't use more than one directory in a Disallow line. Do not use the following:

User-agent: *
Disallow: /support /cgi-bin/ /images/

Search engine spiders cannot understand that format. The correct syntax is:

User-agent: *
Disallow: /support
Disallow: /cgi-bin/
Disallow: /images/
Be sure to use the right case. The file names on your server are case sensitive. If the name of your directory is "Support", don't write "support" in the robots.txt file.
Don't list all files. If you want a search engine spider to ignore all files in a special directory, you don't have to list them all. For example:

User-agent: *
Disallow: /support/orders.html
Disallow: /support/technical.html
Disallow: /support/helpdesk.html
Disallow: /support/index.html

You can replace this with:

User-agent: *
Disallow: /support
There is no "Allow" command. Don't use an "Allow" command in your robots.txt file. Only mention files and directories that you don't want indexed. All other files will be indexed automatically if they are linked on your site.
Tips and tricks:
1. How to allow all search engine spiders to index all files
Use the following content for your robots.txt file if you want to allow all search engine spiders to index all files of your Web site:
User-agent: *
Disallow:
2. How to disallow all spiders to index any file
If you don't want search engines to index any file of your Web site, use the following:
User-agent: *
Disallow: /
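If you want to sanity-check a robots.txt file before publishing it, Python's standard library ships a parser for exactly these rules. A minimal sketch (note how Disallow matches by prefix, as described above):

```python
import urllib.robotparser

# Rules equivalent to the examples above
rules = [
    "User-agent: *",
    "Disallow: /support",
    "Disallow: /cgi-bin/",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

# Disallow is a prefix match, so /support-desk/ is blocked as well
print(rp.can_fetch("googlebot", "/support/index.html"))       # False
print(rp.can_fetch("googlebot", "/support-desk/index.html"))  # False
print(rp.can_fetch("googlebot", "/products/index.html"))      # True
```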
3. Where to find more complex examples
If you want to see more complex examples of robots.txt files, view the robots.txt files of big Web sites:
http://www.cnn.com/robots.txt
http://www.nytimes.com/robots.txt
http://www.spiegel.com/robots.txt
http://www.ebay.com/robots.txt
Your Web site should have a proper robots.txt file if you want good rankings on search engines. Only if search engines know what to do with your pages can they give you a good ranking.
The Search Engine Facts newsletter is free. Please recommend it by mailing this issue to someone you know.
If you want to publish one of the above articles on your Web site, you're allowed to do so! However, you must not change the contents in any way. Also, you must keep all links and you must add the following two sentences with a link to www.Axandra.com: "Copyright by Axandra.com. Internet marketing and search engine ranking software."