Wednesday, April 25, 2007
Microsoft responds to latest EU charges
Mon Apr 23, 4:50 PM ET
BRUSSELS, Belgium - Microsoft responded Monday to European Union allegations that it is overcharging rivals for information that would make their products work better with Windows. The software maker also repeated its request for more guidance on what regulators consider to be an acceptable price.
To level the software industry's playing field, EU officials want Microsoft's competitors to have access at a "reasonable cost" to material that would help their programs interoperate with Windows-based servers. Regulators have called the current prices excessive and Microsoft's information insufficient.
The EU had given Microsoft until Monday night to respond on the fees it charges competitors to share interoperability information, and threatened fines of up to $4 million a day. It said it will consider the company's reply before deciding whether to impose a daily penalty.
Microsoft declined to provide details about the company's response.
Microsoft previously said the prices it charges are fair and that the EU has failed to provide clear guidance.
Also Monday, Microsoft declined the opportunity to have a hearing with the Commission on the EU's Statement of Objections.
"We need greater clarity on what prices the (European) Commission wants us to charge, and we believe that is more likely to come from a constructive conversation than from a formal hearing," said Brad Smith, Microsoft's general counsel.
Microsoft's license program sets a maximum 5.95 percent royalty rate for products that use its server protocols and the company has said it believes the prices reflect the code's value. It claims the Commission wants it to license the technology to competitors for free.
On March 1, the EU's executive arm said there was "no significant innovation" in the requested information Microsoft had to provide rivals — and therefore Microsoft did not have the right to charge high fees for licenses.
Microsoft has complained that the treatment it receives from the 27-nation EU is unmatched around the world and hurts Europe's efforts to become a thriving high-tech economy.
In a landmark 2004 ruling, EU regulators found the company broke competition laws and abused its dominant market position.
Besides a record $674 million fine it imposed at the time of the ruling, the EU levied a $380 million fine last summer, saying Microsoft did not supply — as demanded — complete interoperability documentation.
In the meantime, Microsoft has reached licensing agreements with several of the companies that originally took issue with the software maker's practices and pricing, including Sun Microsystems Inc. and Novell Inc.
Microsoft has appealed the 2004 ruling and a court decision is expected by September.
Shares of Microsoft fell 24 cents to close at $28.78 on the Nasdaq Stock Market.
(AP)
Sunday, April 22, 2007
ASP.NET 2.0 Caching Features
Thiru Thangarathinam
In ASP.NET 2.0, caching has been improved in a couple of notable ways. Probably the most interesting is the introduction of database-triggered cache invalidation. In ASP.NET 1.x, you can invalidate a cached item based on certain pre-defined conditions, such as a change in an XML file or a change in another cached item. However, the ASP.NET 1.x Cache API does not allow you to invalidate an item in the cache when data in a SQL Server database changes - a very common capability that most applications require. ASP.NET 2.0 addresses this by providing database-triggered cache invalidation to ensure that the items in the cache are kept up to date with the changes in the database. You can accomplish this using any one of the following methods.
Declarative Output caching - This is similar to declarative output caching in ASP.NET 1.x, wherein you configure caching by specifying the OutputCache directive and its related attributes.
Programmatic Output caching - In this method, you will use the SqlCacheDependency object programmatically to specify the items to be cached and set their attributes.
Cache API - In this option, you will use the static methods of the Cache class such as Insert, Remove, Add and so on to add or remove items from the ASP.NET cache, while still using the SqlCacheDependency object to trigger the cache invalidation.
Another important caching feature in ASP.NET 2.0 is the ability to create custom cache dependencies, which is not possible with the ASP.NET 1.x Cache API. Since CacheDependency is a sealed class in ASP.NET 1.x, you can't inherit from and extend it. In ASP.NET 2.0, this is no longer the case: you can inherit from the CacheDependency class and create your own custom cache dependencies. This opens up a world of opportunities to roll your own cache dependencies for a particular class of applications. For example, you can create a StockPriceCacheDependency class that automatically invalidates the cached data when a stock price changes.
SQL Server-Based Cache Invalidation Mechanism
The SQL Server-based cache invalidation mechanism works with SQL Server 7.0 and above. However, with SQL Server 7.0 and 2000, only table-level cache invalidation is supported. This means that the cached items will be automatically invalidated any time the data in the table changes. The next release of SQL Server (code-named Yukon) will also feature a row-level cache invalidation mechanism, providing a finer level of granularity over the cached data.
In SQL Server 7.0 and SQL Server 2000, table-level cache invalidation is supported using a polling system. Through this system, the ASP.NET process polls the database (a pull model) every so many seconds to check which tables have changed since it last checked. Even though the pull model works for most cases, it is not an efficient approach. This will be enhanced in Yukon, which will actually notify ASP.NET (a push model) whenever a particular row of data has been modified. Yukon accomplishes this using a feature called Notification Delivery Services (which uses port 80) that interacts directly with HTTP.SYS of IIS 6.0 to notify the Web server of updates to specific rows. For the purposes of this article, you will consider SQL Server 7.0 and 2000 and understand how to configure caching for those versions.
Before you can establish a cache dependency with SQL Server 7.0 or SQL Server 2000, you need to perform the following steps.
You must first enable the database for change notifications, using either the aspnet_regsqlcache utility or the EnableNotifications method of the SqlCacheDependencyAdmin class.
You also need to perform a one-time setup of the tables you want to monitor, using either the aspnet_regsqlcache utility or the EnableTableForNotifications method.
After you have completed the above steps, ASP.NET can start invalidating the data in the cache when the SQL Server data changes, which is accomplished through a polling mechanism. Note that with Yukon, these steps are not required. Before looking at the three different ways to enable SQL Server-based caching, you should understand the steps that are required for the caching to work.
First and foremost, you need to ensure your web.config file contains the appropriate cache-related settings. The web.config file should contain a caching section as shown below (the connection string details here are placeholders; adjust them for your server).

<configuration>
  <connectionStrings>
    <add name="Northwind"
         connectionString="server=localhost;integrated security=true;database=Northwind" />
  </connectionStrings>
  <system.web>
    <caching>
      <sqlCacheDependency enabled="true" pollTime="1000">
        <databases>
          <add name="Northwind" connectionStringName="Northwind" />
        </databases>
      </sqlCacheDependency>
    </caching>
  </system.web>
</configuration>
In the above configuration entries, you specify the name of the database in which you want to enable the cache notification mechanism using the name attribute of the add element, and point it at the corresponding connection string through the connectionStringName attribute.
The next step is to enable the specific tables in the Northwind database for notification. You can do this in either of two ways:
Using the aspnet_regsqlcache utility. You will see an example of this shortly.
Using the EnableTableForNotifications method of the SqlCacheDependencyAdmin class.
Once you configure the table to send notifications, any time data in the table changes, it notifies ASP.NET to invalidate the specific item in the cache. For the purposes of this article, consider the aspnet_regsqlcache utility to configure the tables. Basically this utility creates an extra table named AspNet_SqlCacheTablesForChangeNotification that is used to keep track of the changes to all the monitored tables in the database. It also creates a number of triggers and stored procedures to enable this capability. To run the aspnet_regsqlcache utility, open up the Visual Studio .NET command prompt and enter the command shown in the following screenshot.
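The screenshot of the command is not reproduced here; a representative invocation (the server name, credentials, and database name are assumptions for illustration) would look something like this:

```shell
aspnet_regsqlcache -S localhost -U sa -P password -d Northwind -t Categories -et
```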
In the above command:
-S - the name of the server
-U - the user ID used to connect to the SQL Server
-P - the password used to connect to the SQL Server
-d - the name of the database
-t - the table to configure
-et - enables the table for SQL Server database triggered invalidation
As mentioned before, you need to follow the above steps only when you use SQL Server 7.0 or SQL Server 2000. If you are using the next version of SQL Server (code-named Yukon), these configuration steps are not necessary. Moreover, the cache invalidation mechanism works through a highly efficient notification model, wherein the Notification Delivery Services component of SQL Server directly notifies IIS over TCP port 80 when the data in SQL Server changes.
Now that you understand the steps required to enable this, take a look at the different ways of caching ASP.NET pages so that they take advantage of the SQL Server trigger-based cache invalidation mechanism.
Declaratively enabling caching using OutputCache directive
In this section, you will see how to enable output caching declaratively using the OutputCache directive. Here's the complete code of the ASP.NET page.
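The page markup for this example did not survive formatting; the sketch below is a minimal reconstruction, assuming the "Northwind" database entry registered in web.config, a Categories table enabled for notifications, and a GridView named gridCategories. The SqlDependency attribute of the OutputCache directive, in database:table form, is what ties the cached output to the table:

```aspx
<%@ Page Language="C#" %>
<%@ OutputCache Duration="3600" VaryByParam="none"
    SqlDependency="Northwind:Categories" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>
<%@ Import Namespace="System.Configuration" %>
<script runat="server">
    void Page_Load(object sender, EventArgs e)
    {
        SqlConnection conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["Northwind"].ConnectionString);
        SqlDataAdapter adapter = new SqlDataAdapter(
            "Select * from Categories", conn);
        DataSet categories = new DataSet();
        adapter.Fill(categories);
        gridCategories.DataSource = categories;
        gridCategories.DataBind();
        Response.Write("Page created on : " + DateTime.Now.ToString());
    }
</script>
<html>
<body>
    <form id="form1" runat="server">
        <asp:GridView id="gridCategories" runat="server" />
    </form>
</body>
</html>
```

With this in place, the cached output is served until the data in the Categories table changes, at which point the page is regenerated on the next request.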
Programmatically enabling caching using the SqlCacheDependency class

void Page_Load(object sender, System.EventArgs e)
{
    SqlConnection conn = new SqlConnection(
        ConfigurationManager.ConnectionStrings["Northwind"].ConnectionString);
    SqlDataAdapter adapter = new SqlDataAdapter(
        "Select * from Categories", conn);
    DataSet categories = new DataSet();
    adapter.Fill(categories);
    SqlCacheDependency dependency =
        new SqlCacheDependency("Northwind", "Categories");
    Response.AddCacheDependency(dependency);
    Response.Cache.SetValidUntilExpires(true);
    Response.Cache.SetExpires(DateTime.Now.AddMinutes(60));
    Response.Cache.SetCacheability(HttpCacheability.Public);
    gridCategories.DataSource = categories;
    gridCategories.DataBind();
    Response.Write("Page created on : " + DateTime.Now.ToString());
}
The above code is similar to the previous example except that the caching is performed programmatically. In the Page_Load event, you create an instance of the SqlConnection object, passing in the connection string as an argument. To retrieve the connection string from the web.config file, you use the ConnectionStrings collection of the ConfigurationManager class, which exposes the connection strings specified in the connectionStrings section of the web.config file. You then create an instance of the SqlDataAdapter object, passing in the SQL to be executed and the SqlConnection object as its arguments. After that, you create a DataSet object and fill it by invoking the Fill method of the SqlDataAdapter. Then you create an instance of the SqlCacheDependency class, passing in the database and the table to be monitored as its arguments, and add it to the Response object using the AddCacheDependency method. After that, you set various attributes of the Response.Cache object. Finally, you bind the returned data to the GridView control. Navigating to the above page produces output similar to the previous example. To test the page, change the data in the Categories table and watch the date and time displayed on the page change.
Cache API Example
So far, you have seen how to use the output caching declaratively and programmatically to enable caching on ASP.NET pages. In this section, you will see how to use the Cache API to accomplish the same functionality. As in ASP.NET 1.x, the Cache API is very powerful in that it not only provides complete control over how items are cached but also enables the execution of some code when an item is removed or invalidated from the cache. The following code shows an example of using Cache API to control caching for an ASP.NET page.
void Page_Load(object sender, System.EventArgs e)
{
    DataSet categories;
    categories = (DataSet)Cache["Categories"];
    if (categories == null)
    {
        SqlConnection conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["Northwind"].ConnectionString);
        SqlDataAdapter adapter = new SqlDataAdapter(
            "Select * from Categories", conn);
        categories = new DataSet();
        adapter.Fill(categories);
        SqlCacheDependency dependency =
            new SqlCacheDependency("Northwind", "Categories");
        Cache.Insert("Categories", categories, dependency);
        Response.Write("Categories retrieved from the database");
    }
    else
    {
        Response.Write("Categories retrieved from the Cache");
    }
    gridCategories.DataSource = categories;
    gridCategories.DataBind();
}
The above code is very similar to the previous example, except that in this case the Insert method of the Cache class is used to add items to the cache. You start by checking whether the Categories dataset is already in the cache. If it is not, you create an instance of the SqlConnection object, passing in the connection string retrieved from the web.config file. Then you create an instance of the SqlDataAdapter object, passing in the SQL statement to be executed and the previously created SqlConnection object as its arguments, and execute the query using the Fill method. After that, you create a SqlCacheDependency object, supplying the database name (which corresponds to the database name specified in the web.config file) and the table name as its arguments. Then you insert the categories dataset into the cache using the Insert method of the Cache object, also specifying the SqlCacheDependency object so that the dataset can be invalidated when the data in the Categories table changes. Finally, you bind the categories dataset to the GridView control.
When you first navigate to the above page in a browser, the page reports that the categories information was retrieved from the database. If you refresh the browser, the page reports that the categories information was retrieved from the cache. To test that the SQL Server-based cache invalidation mechanism works, modify the data in the Categories table and navigate to the page again; you will once more see the message stating that the categories were retrieved from the database.
Creating Custom Cache Dependencies
So far, you have used the built-in SqlCacheDependency class for invalidating the cached item when the data in the SQL Server database changes. Even though this approach is very useful, there are times you might want to create your own custom cache dependency. For example, when the stock price changes, you might want to invalidate an item in the cache. ASP.NET 2.0 Cache API enables these types of scenarios by providing the ability to create custom cache dependency classes that are inherited from the CacheDependency class.
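As a sketch of what such a dependency might look like - the class, the polling interval, and the pricing lookup are hypothetical, while the CacheDependency base class, its NotifyDependencyChanged method, and DependencyDispose are the real ASP.NET 2.0 extension points:

```csharp
// Hypothetical custom dependency: invalidates the dependent cached item
// when a (simulated) stock price changes.
public class StockPriceCacheDependency : System.Web.Caching.CacheDependency
{
    private System.Threading.Timer timer;
    private decimal lastPrice;
    private string symbol;

    public StockPriceCacheDependency(string symbol, int pollMilliseconds)
    {
        this.symbol = symbol;
        lastPrice = GetCurrentPrice(symbol);
        timer = new System.Threading.Timer(CheckPrice, null,
                                           pollMilliseconds, pollMilliseconds);
    }

    private void CheckPrice(object state)
    {
        decimal current = GetCurrentPrice(symbol);
        if (current != lastPrice)
        {
            // Signals ASP.NET to remove the dependent item from the cache.
            NotifyDependencyChanged(this, EventArgs.Empty);
        }
    }

    protected override void DependencyDispose()
    {
        if (timer != null) timer.Dispose();
        base.DependencyDispose();
    }

    private decimal GetCurrentPrice(string symbol)
    {
        // Placeholder: look up the real quote from your pricing source here.
        return 0m;
    }
}
```

You would then pass an instance of this class as the dependency argument to Cache.Insert, just as the earlier examples passed a SqlCacheDependency.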
Conclusion
The Cache API introduced with ASP.NET 1.0 was a powerful feature that could be immensely useful in increasing the performance of a Web application. The Cache API in ASP.NET 2.0 builds on the foundation provided by ASP.NET 1.0 and makes it easy and seamless to build high-performance ASP.NET applications. Being able to invalidate a cached item when the data in the database changes is a capability that can go a long way toward changing the way ASP.NET applications are built and deployed. Furthermore, the ability to create custom cache dependencies (when one of the built-in cache dependency classes does not suit your needs) enables a whole range of impressive caching scenarios for developers to take advantage of.
About the Author
Thiru has many years of experience in architecting, designing, developing and implementing applications using Object Oriented Application development methodologies. He also possesses a thorough understanding of software life cycle (design, development and testing).
He is an expert with ASP.NET, .NET Framework, Visual C#.NET, Visual Basic.NET, ADO.NET, XML Web Services and .NET Remoting and holds MCAD for .NET, MCSD and MCP certifications.
Thiru has authored numerous books and articles. He can be reached at thiruthangarathinam@yahoo.com.
Copyright 2005 Jupitermedia Corp. All Rights Reserved.
http://www.internet.com
Wednesday, April 4, 2007
Tip/Trick: Url Rewriting with ASP.NET
People often ask me for guidance on how they can dynamically "re-write" URLs and/or have the ability to publish cleaner URL end-points within their ASP.NET web applications. This blog post summarizes a few approaches you can take to cleanly map or rewrite URLs with ASP.NET, and have the option to structure the URLs of your application however you want.
Why does URL mapping and rewriting matter?
The most common scenarios where developers want greater flexibility with URLs are:
1) Handling cases where you want to restructure the pages within your web application, and you want to ensure that people who have bookmarked old URLs don't break when you move pages around. Url-rewriting enables you to transparently forward requests to the new page location without breaking browsers.
2) Improving the search relevancy of pages on your site with search engines like Google, Yahoo and Live. Specifically, URL rewriting can often make it easier to embed common keywords into the URLs of the pages on your sites, which can increase the chance of someone clicking your link. Moving from querystring arguments to fully qualified URLs can also in some cases increase your priority in search engine results. Using techniques that force referring links to use the same case and URL entry point (for example: weblogs.asp.net/scottgu instead of weblogs.asp.net/scottgu/default.aspx) can also avoid diluting your pagerank across multiple URLs, and increase your search results.
In a world where search engines increasingly drive traffic to sites, extracting any little improvement in your page ranking can yield very good ROI to your business. Increasingly this is driving developers to use URL-Rewriting and other SEO (search engine optimization) techniques to optimize sites (note that SEO is a fast moving space, and the recommendations for increasing your search relevancy evolve monthly). For a list of some good search engine optimization suggestions, I'd recommend reading the SSW Rules to Better Google Rankings, as well as MarketPosition's article on how URLs can affect top search engine ranking.
Sample URL Rewriting Scenario
For the purpose of this blog post, I'm going to assume we are building a set of e-commerce catalog pages within an application, and that the products are organized by categories (for example: books, videos, CDs, DVDs, etc).
Let's assume that we initially have a page called "Products.aspx" that takes a category name as a querystring argument, and filters the products accordingly. The corresponding URLs to this Products.aspx page look like this:
http://www.store.com/products.aspx?category=books
http://www.store.com/products.aspx?category=DVDs
http://www.store.com/products.aspx?category=CDs
Rather than use a querystring to expose each category, we want to modify the application so that each product category looks like a unique URL to a search engine, and has the category keyword embedded in the actual URL (and not as a querystring argument). We'll spend the rest of this blog post going over 4 different approaches that we could take to achieve this.
Approach 1: Use Request.PathInfo Parameters Instead of QueryStrings
The first approach I'm going to demonstrate doesn't use Url-Rewriting at all, and instead uses a little-known feature of ASP.NET - the Request.PathInfo property. To help explain the usefulness of this property, consider the below URL scenario for our e-commerce store:
http://www.store.com/products.aspx/Books
http://www.store.com/products.aspx/DVDs
http://www.store.com/products.aspx/CDs
One thing you'll notice with the above URLs is that they no longer have Querystring values - instead the category parameter value is appended on to the URL as a trailing /param value after the Products.aspx page handler name. An automated search engine crawler will then interpret these URLs as three different URLs, and not as one URL with three different input values (search engines ignore the filename extension and just treat it as another character within the URL).
You might wonder how you handle this appended parameter scenario within ASP.NET. The good news is that it is pretty simple. Simply use the Request.PathInfo property, which will return the content immediately following the products.aspx portion of the URL. So for the above URLs, Request.PathInfo would return "/Books", "/DVDs", and "/CDs" (in case you are wondering, the Request.Path property would return "/products.aspx").
You could then easily write a function to retrieve the category like so (the below function strips out the leading slash and returns just "Books", "DVDs" or "CDs"):
Function GetCategory() As String
    If (Request.PathInfo.Length = 0) Then
        Return ""
    Else
        Return Request.PathInfo.Substring(1)
    End If
End Function
Sample Download: A sample application that I've built that shows using this technique can be downloaded here. What is nice about this sample and technique is that no server configuration changes are required in order to deploy an ASP.NET application using this approach. It will also work fine in a shared hosting environment.
Approach 2: Using an HttpModule to Perform URL Rewriting
An alternative approach to the above Request.PathInfo technique would be to take advantage of the HttpContext.RewritePath() method that ASP.NET provides. This method allows a developer to dynamically rewrite the processing path of an incoming URL, and for ASP.NET to then continue executing the request using the newly re-written path.
For example, we could choose to expose the following URLs to the public:
http://www.store.com/products/Books.aspx
http://www.store.com/products/DVDs.aspx
http://www.store.com/products/CDs.aspx
This looks to the outside world like there are three separate pages on the site (and will look great to a search crawler). By using the HttpContext.RewritePath() method we can dynamically re-write the incoming URLs when they first reach the server to instead call a single Products.aspx page that takes the category name as a Querystring or PathInfo parameter. For example, we could use an Application_BeginRequest event in Global.asax like so to do this:
void Application_BeginRequest(object sender, EventArgs e)
{
    string fullOriginalPath = Request.Url.ToString();

    if (fullOriginalPath.Contains("/Products/Books.aspx"))
    {
        Context.RewritePath("/Products.aspx?Category=Books");
    }
    else if (fullOriginalPath.Contains("/Products/DVDs.aspx"))
    {
        Context.RewritePath("/Products.aspx?Category=DVDs");
    }
}
The downside of manually writing code like that above is that it can be tedious and error-prone. Rather than do it yourself, I'd recommend using one of the already-built HttpModules available on the web for free to perform this work for you. Here are a few free ones that you can download and use today:
UrlRewriter.net
UrlRewriting.net
These modules allow you to declaratively express matching rules within your application's web.config file. For example, to use the UrlRewriter.Net module within your application's web.config file to map the above URLs to a single Products.aspx page, we could simply add this web.config file to our application (no code is required):
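The configuration listing is missing from this copy of the post; a sketch along the following lines, based on the UrlRewriter.Net (Intelligencia.UrlRewriter) module's configuration conventions, maps the three category URLs to Products.aspx (verify the section and type names against the module's own documentation):

```xml
<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="rewriter"
             type="Intelligencia.UrlRewriter.Configuration.RewriterConfigurationSectionHandler, Intelligencia.UrlRewriter" />
  </configSections>
  <system.web>
    <httpModules>
      <!-- Registers the rewriting module in the ASP.NET pipeline -->
      <add name="UrlRewriter"
           type="Intelligencia.UrlRewriter.RewriterHttpModule, Intelligencia.UrlRewriter" />
    </httpModules>
  </system.web>
  <rewriter>
    <rewrite url="~/products/Books.aspx" to="~/products.aspx?category=Books" />
    <rewrite url="~/products/DVDs.aspx" to="~/products.aspx?category=DVDs" />
    <rewrite url="~/products/CDs.aspx" to="~/products.aspx?category=CDs" />
  </rewriter>
</configuration>
```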
The HttpModule URL rewriters above also add support for regular expression and URL pattern matching (to avoid you having to hard-code every URL in your web.config file). So instead of hard-coding the category list, you could re-write the rules like below to dynamically pull the category from the URL for any "/products/[category].aspx" combination:
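The pattern-based rule listing is also missing here; with UrlRewriter.Net, a single regular-expression rule along these lines (a sketch using the module's $1 capture-group syntax) covers any "/products/[category].aspx" URL:

```xml
<rewriter>
  <!-- Captures the category segment and passes it as a querystring value -->
  <rewrite url="~/products/(.+)\.aspx" to="~/products.aspx?category=$1" />
</rewriter>
```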
This makes your code much cleaner and super extensible.
Sample Download: A sample application that I've built that shows using this technique with the UrlRewriter.Net module can be downloaded here.
What is nice about this sample and technique is that no server configuration changes are required in order to deploy an ASP.NET application using this approach. It will also work fine in a medium trust shared hosting environment (just ftp/xcopy to the remote server and you are good to go - no installation required).
Approach 3: Using an HttpModule to Perform Extension-Less URL Rewriting with IIS7
The above HttpModule approach works great for scenarios where the URL you are re-writing has a .aspx extension, or another file extension that is configured to be processed by ASP.NET. When you do this no custom server configuration is required - you can just copy your web application up to a remote server and it will work fine.
There are times, though, when you want the URL to re-write to either have a non-ASP.NET file extension (for example: .jpg, .gif, or .htm) or no file-extension at all. For example, we might want to expose these URLs as our public catalog pages (note they have no .aspx extension):
http://www.store.com/products/Books
http://www.store.com/products/DVDs
http://www.store.com/products/CDs
With IIS5 and IIS6, processing the above URLs using ASP.NET is not super easy. IIS 5/6 makes it hard to perform URL rewriting on these types of URLs within ISAPI Extensions (which is how ASP.NET is implemented). Instead you need to perform the rewriting earlier in the IIS request pipeline using an ISAPI Filter. I'll show how to do this on IIS5/6 in the Approach 4 section below.
The good news, though, is that IIS 7.0 makes handling these types of scenarios super easy. You can now have an HttpModule execute anywhere within the IIS request pipeline - which means you can use the URLRewriter module above to process and rewrite extension-less URLs (or even URLs with a .asp, .php, or .jsp extension). Below is how you would configure this with IIS7:
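The configuration listing did not survive in this copy; assuming the UrlRewriter.Net module from Approach 2, registering it in IIS7's integrated pipeline would look roughly like this:

```xml
<configuration>
  <system.webServer>
    <!-- runAllManagedModulesForAllRequests="true" makes managed modules
         run for every request, including extension-less URLs -->
    <modules runAllManagedModulesForAllRequests="true">
      <add name="UrlRewriter"
           type="Intelligencia.UrlRewriter.RewriterHttpModule, Intelligencia.UrlRewriter" />
    </modules>
  </system.webServer>
</configuration>
```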
Note the "runAllManagedModulesForAllRequests" attribute that is set to true on the <modules> element - this tells IIS7 to run managed modules (like the URL rewriter) for every request, not just those with an .aspx extension. There are two nice things about this approach:
1) It will work on any IIS 7.0 machine. You don't need an administrator to enable anything on the remote host. It will also work in medium trust shared hosting scenarios.
2) Because I've configured the UrlRewriter in both the <system.web> httpModules section and the <system.webServer> modules section of web.config, the same application will run without changes on IIS5/6 (for .aspx URLs) as well as on IIS7 (including extension-less URLs).
IIS 7.0 server will ship later this year as part of Windows Longhorn Server, and will support a go-live license with the Beta3 release in a few weeks. Because of all the new hosting features that have been added to IIS7, we expect hosters to start aggressively offering IIS7 accounts relatively quickly - which means you should be able to start to take advantage of the above extension-less rewriting support soon. We'll also be shipping a Microsoft supported URL-Rewriting module in the IIS7 RTM timeframe that will be available for free as well that you'll be able to use on IIS7, and which will provide nice support for advanced re-writing scenarios for all content on your web-server.
Sample Download: A sample application that I've built that shows using this extension-less URL technique with IIS7 and the UrlRewriter.Net module can be downloaded here.
Approach 4: ISAPIRewrite to enable Extension-less URL Rewriting for IIS5 and IIS6
If you don't want to wait for IIS 7.0 in order to take advantage of extension-less URL rewriting, then your best bet is to use an ISAPI Filter in order to re-write URLs. There are two ISAPI Filter solutions that I'm aware of that you might want to check out:
Helicon Tech's ISAPI Rewrite: They provide an ISAPI Rewrite full product version for $99 (with 30 day free trial), as well as a ISAPI Rewrite lite edition that is free.
Ionic's ISAPI Rewrite: This is a free download (both source and binary available)
I actually don't have any first-hand experience using either of the above solutions - although I've heard good things about them. Scott Hanselman and Jeff Atwood recently both wrote up great blog posts about their experiences using them, and also provided some samples of how to configure the rules for them. The rules for Helicon Tech's ISAPI Rewrite use the same syntax as Apache's mod_rewrite. For example (taken from Jeff's blog post):
[ISAPI_Rewrite]

# fix missing slash on folders
# note, this assumes we have no folders with periods!
RewriteCond Host: (.*)
RewriteRule ([^.?]+[^.?/]) http\://$1$2/ [RP]

# remove index pages from URLs
RewriteRule (.*)/default.htm$ $1/ [I,RP]
RewriteRule (.*)/default.aspx$ $1/ [I,RP]
RewriteRule (.*)/index.htm$ $1/ [I,RP]
RewriteRule (.*)/index.html$ $1/ [I,RP]

# force proper www. prefix on all requests
RewriteCond %HTTP_HOST ^test\.com [I]
RewriteRule ^/(.*) http://www.test.com/$1 [RP]

# only allow whitelisted referers to hotlink images
RewriteCond Referer: (?!http://(?:www\.good\.com|www\.better\.com)).+
RewriteRule .*\.(?:gif|jpg|jpeg|png) /images/block.jpg [I,O]
Definitely check out Scott's post and Jeff's post to learn more about these ISAPI modules, and what you can do with them.
Note: One downside to using an ISAPI filter is that shared hosting environments typically won't allow you to install this component, so you'll need either a virtual dedicated hosting server or a dedicated hosting server to use them. But if you do have a hosting plan that allows you to install an ISAPI filter, it will provide maximum flexibility on IIS5/6 - and tide you over until IIS7 ships.
Handling ASP.NET PostBacks with URL Rewriting
One gotcha that people often run into when using ASP.NET and Url-Rewriting has to do with handling postback scenarios. Specifically, when you place a <form runat="server"> control on a page, ASP.NET renders the form's action attribute using the re-written (internal) URL rather than the clean public URL, which means a postback ends up exposing the internal URL in the browser's address bar. The usual fix is to adapt the form's rendering so that the action attribute points back at the URL the browser originally requested.
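One common workaround for the postback gotcha is a control adapter that intercepts the form's rendering and rewrites the action attribute back to the originally requested URL. The sketch below uses illustrative class names, and the adapter must additionally be registered against HtmlForm in a .browser file to take effect:

```csharp
using System;
using System.Web;
using System.Web.UI;

// Wraps the form's output so the rendered action attribute can be replaced.
public class FormRewriterControlAdapter : System.Web.UI.Adapters.ControlAdapter
{
    protected override void Render(HtmlTextWriter writer)
    {
        base.Render(new RewriteFormHtmlTextWriter(writer));
    }
}

public class RewriteFormHtmlTextWriter : HtmlTextWriter
{
    public RewriteFormHtmlTextWriter(HtmlTextWriter writer)
        : base(writer)
    {
        InnerWriter = writer.InnerWriter;
    }

    public override void WriteAttribute(string name, string value, bool fEncode)
    {
        // When rendering the form's "action" attribute, substitute the URL
        // the browser originally requested (preserved in Request.RawUrl).
        if (name == "action")
        {
            HttpContext context = HttpContext.Current;
            if (context.Items["ActionAlreadyWritten"] == null)
            {
                value = context.Request.RawUrl;
                context.Items["ActionAlreadyWritten"] = true;
            }
        }
        base.WriteAttribute(name, value, fEncode);
    }
}
```

With this in place, postbacks go to the clean public URL, and the module re-writes them to the internal page just like any other request.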
Wednesday, March 28, 2007
10 Tips for Writing High-Performance Web Applications
Writing a Web application with ASP.NET is unbelievably easy. So easy, many developers don't take the time to structure their applications for great performance. In this article, I'm going to present 10 tips for writing high-performance Web apps. I'm not limiting my comments to ASP.NET applications because they are just one subset of Web applications. This article won't be the definitive guide for performance-tuning Web applications—an entire book could easily be devoted to that. Instead, think of this as a good place to start.
Before becoming a workaholic, I used to do a lot of rock climbing. Prior to any big climb, I'd review the route in the guidebook and read the recommendations made by people who had visited the site before. But, no matter how good the guidebook, you need actual rock climbing experience before attempting a particularly challenging climb. Similarly, you can only learn how to write high-performance Web applications when you're faced with either fixing performance problems or running a high-throughput site.
My personal experience comes from having been an infrastructure Program Manager on the ASP.NET team at Microsoft, running and managing www.asp.net, and helping architect Community Server, which is the next version of several well-known ASP.NET applications (ASP.NET Forums, .Text, and nGallery combined into one platform). I'm sure that some of the tips that have helped me will help you as well.
You should think about the separation of your application into logical tiers. You might have heard of the term 3-tier (or n-tier) physical architecture. These are usually prescribed architecture patterns that physically divide functionality across processes and/or hardware. As the system needs to scale, more hardware can easily be added. There is, however, a performance hit associated with process and machine hopping, thus it should be avoided. So, whenever possible, run the ASP.NET pages and their associated components together in the same application.
Because of the separation of code and the boundaries between tiers, using Web services or remoting will decrease performance by 20 percent or more.
The data tier is a bit of a different beast since it is usually better to have dedicated hardware for your database. However, the cost of process hopping to the database is still high, thus performance on the data tier is the first place to look when optimizing your code.
Before diving in to fix performance problems in your applications, make sure you profile your applications to see exactly where the problems lie. Key performance counters (such as the one that indicates the percentage of time spent performing garbage collections) are also very useful for finding out where applications are spending the majority of their time. Yet the places where time is spent are often quite unintuitive.
There are two types of performance improvements described in this article: large optimizations, such as using the ASP.NET Cache, and tiny optimizations that repeat themselves. These tiny optimizations are sometimes the most interesting. You make a small change to code that gets called thousands and thousands of times. With a big optimization, you might see overall performance take a large jump. With a small one, you might shave a few milliseconds on a given request, but when compounded across the total requests per day, it can result in an enormous improvement.
Performance on the Data Tier
When it comes to performance-tuning an application, there is a single litmus test you can use to prioritize work: does the code access the database? If so, how often? Note that the same test could be applied for code that uses Web services or remoting, too, but I'm not covering those in this article.
If you have a database request required in a particular code path and you see other areas such as string manipulations that you want to optimize first, stop and perform your litmus test. Unless you have an egregious performance problem, your time would be better utilized trying to optimize the time spent in and connected to the database, the amount of data returned, and how often you make round-trips to and from the database.
With that general information established, let's look at ten tips that can help your application perform better. I'll begin with the changes that can make the biggest difference.
Tip 1—Return Multiple Resultsets
Review your database code to see if you have request paths that go to the database more than once. Each of those round-trips decreases the number of requests per second your application can serve. By returning multiple resultsets in a single database request, you can cut the total time spent communicating with the database. You'll be making your system more scalable, too, as you'll cut down on the work the database server is doing managing requests.
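The round-trip arithmetic behind this tip can be sketched in a few lines of Python. The latency and query-time numbers below are invented purely for illustration; the point is that per-trip overhead is paid once per round-trip, not once per query:

```python
# Toy model: per-trip overhead is paid per round-trip, while query
# execution time is paid per query regardless of batching.
PER_TRIP_LATENCY_MS = 2.0   # assumed network + connection overhead per trip
QUERY_TIME_MS = 1.5         # assumed server-side execution time per query

def total_time(num_queries, round_trips):
    """Total time when num_queries queries are spread over round_trips trips."""
    return round_trips * PER_TRIP_LATENCY_MS + num_queries * QUERY_TIME_MS

separate = total_time(num_queries=4, round_trips=4)  # one query per trip
batched = total_time(num_queries=4, round_trips=1)   # multiple resultsets
print(separate, batched)  # 14.0 8.0
```

The saving grows with both the number of queries per request and the real network latency between the Web and database servers.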
While you can return multiple resultsets using dynamic SQL, I prefer to use stored procedures. It's arguable whether business logic should reside in a stored procedure, but I think that if logic in a stored procedure can constrain the data returned (reduce the size of the dataset, time spent on the network, and not having to filter the data in the logic tier), it's a good thing.
Using a SqlCommand instance and its ExecuteReader method to populate strongly typed business classes, you can move the resultset pointer forward by calling NextResult. Figure 1 shows a sample conversation populating several ArrayLists with typed classes. Returning only the data you need from the database will additionally decrease memory allocations on your server.
Tip 2—Paged Data Access
The ASP.NET DataGrid exposes a wonderful capability: data paging support. When paging is enabled in the DataGrid, a fixed number of records is shown at a time, and a paging UI is rendered at the bottom of the DataGrid for navigating backwards and forwards through the records.
There's one slight wrinkle. Paging with the DataGrid requires all of the data to be bound to the grid. For example, your data layer will need to return all of the data and then the DataGrid will filter all the displayed records based on the current page. If 100,000 records are returned when you're paging through the DataGrid, 99,975 records would be discarded on each request (assuming a page size of 25). As the number of records grows, the performance of the application will suffer as more and more data must be sent on each request.
One good approach to writing better paging code is to use stored procedures. Figure 2 shows a sample stored procedure that pages through the Orders table in the Northwind database. In a nutshell, all you're doing here is passing in the page index and the page size. The appropriate resultset is calculated and then returned.
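The compute-only-the-requested-page idea can be sketched with Python and sqlite3; the Orders table here is a made-up stand-in for the Northwind one, and the procedure returns both the total count and the page, mirroring the two-resultset approach from Tip 1:

```python
import sqlite3

# In-memory stand-in for the Orders table; 100 sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO Orders (OrderID) VALUES (?)",
                 [(i,) for i in range(1, 101)])

def get_page(page_index, page_size):
    """Return (total_records, rows) for a zero-based page_index."""
    total = conn.execute("SELECT COUNT(*) FROM Orders").fetchone()[0]
    rows = conn.execute(
        "SELECT OrderID FROM Orders ORDER BY OrderID LIMIT ? OFFSET ?",
        (page_size, page_index * page_size)).fetchall()
    return total, [r[0] for r in rows]

total, page = get_page(page_index=2, page_size=25)
print(total, page[0], page[-1])  # 100 51 75
```

Only the requested 25 rows ever cross the wire, instead of the 100,000-row worst case described above.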
In Community Server, we wrote a paging server control to do all the data paging. You'll see that I am using the ideas discussed in Tip 1, returning two resultsets from one stored procedure: the total number of records and the requested data.
The total number of records returned can vary depending on the query being executed. For example, a WHERE clause can be used to constrain the data returned. The total number of records to be returned must be known in order to calculate the total pages to be displayed in the paging UI. For example, if there are 1,000,000 total records and a WHERE clause is used that filters this to 1,000 records, the paging logic needs to be aware of the total number of records to properly render the paging UI.
Tip 3—Connection Pooling
Of course you need to watch out for leaking connections. Always close your connections when you're finished with them. I repeat: no matter what anyone says about garbage collection within the Microsoft® .NET Framework, always call Close or Dispose explicitly on your connection when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you at a predetermined time. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when the garbage collection on the object will actually happen.
To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It's okay to open and close the connection multiple times on each request if you have to (optimally you apply Tip 1) rather than keeping the connection open and passing it around through different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example customizing the connection string based on the logged-in user, you won't get the same optimization value provided by connection pooling. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective. The .NET CLR data performance counters can be very useful when attempting to track down any performance issues that are related to connection pooling.
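Why the same connection string matters can be seen in a toy pool. This Python sketch (the Connection class is a stand-in, not ADO.NET) reuses a closed connection only when a later open request presents an identical connection string:

```python
# Toy connection pool keyed by connection string: close() returns the
# connection for reuse instead of destroying it, so "open late, close
# early" with a consistent string creates very few physical connections.
class Connection:
    def __init__(self, conn_string):
        self.conn_string = conn_string

class Pool:
    def __init__(self):
        self._idle = {}          # conn_string -> list of idle connections
        self.physical_opens = 0  # how many real connections were created

    def open(self, conn_string):
        idle = self._idle.setdefault(conn_string, [])
        if idle:
            return idle.pop()            # reuse an existing connection
        self.physical_opens += 1
        return Connection(conn_string)   # create a new physical connection

    def close(self, conn):
        self._idle[conn.conn_string].append(conn)  # return to the pool

pool = Pool()
for _ in range(100):                 # open/close 100 times, same string
    c = pool.open("server=db;db=forums")
    pool.close(c)
print(pool.physical_opens)           # 1 — the same connection is reused
```

Customize the connection string per user (or impersonate many identities) and each variant gets its own, mostly cold, pool.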
Whenever your application is connecting to a resource, such as a database, running in another process, you should optimize by focusing on the time spent connecting to the resource, the time spent sending or retrieving data, and the number of round-trips. Optimizing any kind of process hop in your application is the first place to start to achieve better performance.
Performance on the Application Tier
The application tier contains the logic that connects to your data layer and transforms data into meaningful class instances and business processes. For example, in Community Server, this is where you populate a Forums or Threads collection, and apply business rules such as permissions; most importantly it is where the Caching logic is performed.
Tip 4—ASP.NET Cache API
One of the very first things you should do before writing a line of application code is architect the application tier to maximize and exploit the ASP.NET Cache feature.
If your components are running within an ASP.NET application, you simply need to include a reference to System.Web.dll in your application project. When you need access to the Cache, use the HttpRuntime.Cache property (the same object is also accessible through Page.Cache and HttpContext.Cache).
There are several rules for caching data. First, if data can be used more than once it's a good candidate for caching. Second, if data is general rather than specific to a given request or user, it's a great candidate for the cache. If the data is user- or request-specific, but is long lived, it can still be cached, but may not be used as frequently. Third, an often overlooked rule is that sometimes you can cache too much. Generally on an x86 machine, you want to run a process with no higher than 800MB of private bytes in order to reduce the chance of an out-of-memory error. Therefore, caching should be bounded. In other words, you may be able to reuse a result of a computation, but if that computation takes 10 parameters, you might attempt to cache on 10 permutations, which will likely get you into trouble. One of the most common support calls for ASP.NET is out-of-memory errors caused by overcaching, especially of large datasets.
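The bounded-cache rule can be sketched with a small least-recently-used cache. This Python class illustrates only the eviction idea; it is not how the ASP.NET Cache is implemented, and it does not model the memory-pressure purge described below:

```python
from collections import OrderedDict

# Bounded LRU cache: once maxsize entries exist, the least-recently-used
# entry is evicted rather than letting memory grow without limit.
class BoundedCache:
    def __init__(self, maxsize):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)         # mark as recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least-recently-used

cache = BoundedCache(maxsize=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")          # touch "a" so "b" is now least recently used
cache.put("c", 3)       # evicts "b"
print(cache.get("b"))   # None
print(cache.get("a"))   # 1
```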
Figure 3 ASP.NET Cache
There are several great features of the Cache that you need to know. The first is that the Cache implements a least-recently-used algorithm, allowing ASP.NET to force a Cache purge—automatically removing unused items from the Cache—if memory is running low. Secondly, the Cache supports expiration dependencies that can force invalidation. These include time, key, and file. Time is often used, but with ASP.NET 2.0 a new and more powerful invalidation type is being introduced: database cache invalidation. This refers to the automatic removal of entries in the cache when data in the database changes. For more information on database cache invalidation, see Dino Esposito's Cutting Edge column in the July 2004 issue of MSDN® Magazine. For a look at the architecture of the cache, see Figure 3.
Tip 5—Per-Request Caching
Earlier in the article, I mentioned that small improvements to frequently traversed code paths can lead to big, overall performance gains. One of my absolute favorites of these is something I've termed per-request caching.
Whereas the Cache API is designed to cache data for a long period or until some condition is met, per-request caching simply means caching the data for the duration of the request. A particular code path is accessed frequently on each request but the data only needs to be fetched, applied, modified, or updated once. This sounds fairly theoretical, so let's consider a concrete example.
In the Forums application of Community Server, each server control used on a page requires personalization data to determine which skin to use, the style sheet to use, as well as other personalization data. Some of this data can be cached for a long period of time, but some data, such as the skin to use for the controls, is fetched once on each request and reused multiple times during the execution of the request.
To accomplish per-request caching, use the ASP.NET HttpContext. An instance of HttpContext is created with every request and is accessible anywhere during that request from the HttpContext.Current property. The HttpContext class has a special Items collection property; objects and data added to this Items collection are cached only for the duration of the request. Just as you can use the Cache to store frequently accessed data, you can use HttpContext.Items to store data that you'll use only on a per-request basis. The logic behind this is simple: data is added to the HttpContext.Items collection when it doesn't exist, and on subsequent lookups the data found in HttpContext.Items is simply returned.
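The pattern itself fits in a few lines. In this Python sketch a plain dictionary stands in for HttpContext.Items, and the fetch_skin function and its counter are invented purely to show the effect:

```python
# Per-request caching: compute once per request, reuse for the rest of
# the request, discard when the request ends.
fetch_count = 0

def fetch_skin(user_id):
    """Pretend this is an expensive personalization lookup."""
    global fetch_count
    fetch_count += 1
    return f"skin-for-{user_id}"

def get_skin(items, user_id):
    """items plays the role of HttpContext.Items for one request."""
    key = ("skin", user_id)
    if key not in items:
        items[key] = fetch_skin(user_id)   # first control pays the cost
    return items[key]                      # later controls reuse it

items = {}                     # created fresh for each request
for _ in range(20):            # 20 server controls on one page
    get_skin(items, user_id=7)
print(fetch_count)             # 1 — fetched once, reused 19 times
```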
Tip 6—Background Processing
The path through your code should be as fast as possible, right? There may be times when you find yourself performing expensive tasks on each request or once every n requests. Sending out e-mails or parsing and validation of incoming data are just a few examples.
When tearing apart ASP.NET Forums 1.0 and rebuilding what became Community Server, we found that the code path for adding a new post was pretty slow. Each time a post was added, the application first needed to ensure that there were no duplicate posts, then it had to parse the post using a "badword" filter, parse the post for emoticons, tokenize and index the post, add the post to the moderation queue when required, validate attachments, and finally, once posted, send e-mail notifications out to any subscribers. Clearly, that's a lot of work.
It turns out that most of the time was spent in the indexing logic and sending e-mails. Indexing a post was a time-consuming operation, and it turned out that the built-in System.Web.Mail functionality would connect to an SMTP server and send the e-mails serially. As the number of subscribers to a particular post or topic area increased, it would take longer and longer to perform the AddPost function.
Indexing and sending e-mail didn't need to happen on each request. Ideally, we wanted to batch this work together, indexing 25 posts at a time or sending all the e-mails every five minutes. We decided to use the same code I had used to prototype database cache invalidation for what eventually got baked into Visual Studio® 2005.
The Timer class, found in the System.Threading namespace, is a wonderfully useful, but less well-known class in the .NET Framework, at least for Web developers. Once created, the Timer will invoke the specified callback on a thread from the ThreadPool at a configurable interval. This means you can set up code to execute without an incoming request to your ASP.NET application, an ideal situation for background processing. You can do work such as indexing or sending e-mail in this background process too.
There are a couple of problems with this technique, though. If your application domain unloads, the timer instance will stop firing its events. In addition, since the CLR has a hard gate on the number of threads per process, you can get into a situation on a heavily loaded server where timers may not have threads to complete on and can be somewhat delayed. ASP.NET tries to minimize the chances of this happening by reserving a certain number of free threads in the process and only using a portion of the total threads for request processing. However, if you have lots of asynchronous work, this can be an issue.
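The queue-and-flush shape of the technique can be sketched as follows. The class and handler names are mine, the interval is illustrative, and in ASP.NET the flush would be driven by a System.Threading.Timer callback rather than called directly:

```python
import threading

# Work items (posts to index, e-mails to send) are queued cheaply during
# requests and processed in one batch by a timer callback.
class BackgroundBatcher:
    def __init__(self, process_batch, interval_seconds):
        self._pending = []
        self._lock = threading.Lock()
        self._process_batch = process_batch
        self._interval = interval_seconds

    def enqueue(self, item):
        with self._lock:
            self._pending.append(item)   # cheap; the request returns fast

    def flush(self):
        with self._lock:
            batch, self._pending = self._pending, []
        if batch:
            self._process_batch(batch)   # expensive work, off the request path

    def start(self):
        # Re-arming one-shot timers stand in for a periodic timer here.
        def tick():
            self.flush()
            threading.Timer(self._interval, tick).start()
        threading.Timer(self._interval, tick).start()

processed = []
batcher = BackgroundBatcher(process_batch=processed.extend,
                            interval_seconds=300)
for post_id in range(25):
    batcher.enqueue(post_id)   # 25 requests each queue one post
batcher.flush()                # what the timer callback would do
print(len(processed))          # 25 — handled in one batch
```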
There is not enough room to go into the code here, but you can download a digestible sample at www.rob-howard.net. Just grab the slides and demos from the Blackbelt TechEd 2004 presentation.
Tip 7—Page Output Caching and Proxy Servers
ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content that they generate. If you have an ASP.NET page that generates output, whether HTML, XML, images, or any other data, and you run this code on each request and it generates the same output, you have a great candidate for page output caching.
By simply adding this line to the top of your page: <%@ OutputCache Duration="60" VaryByParam="none" %>
you can effectively generate the output for this page once and reuse it multiple times for up to 60 seconds, at which point the page will re-execute and the output will once again be added to the ASP.NET Cache. This behavior can also be accomplished using some lower-level programmatic APIs. There are several configurable settings for output caching, such as the VaryByParam attribute just described. VaryByParam happens to be required, but allows you to specify the HTTP GET or HTTP POST parameters by which to vary the cache entries. For example, default.aspx?Report=1 or default.aspx?Report=2 could be output-cached by simply setting VaryByParam="Report". Additional parameters can be named by specifying a semicolon-separated list.
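The Duration-plus-VaryByParam behavior can be modeled outside ASP.NET. In this Python sketch (with an injected clock and an invented render_page function), each distinct parameter value gets its own cache entry that expires after the duration:

```python
# Output-cache model: responses cached per (path, varying parameter)
# for a fixed duration; render_count proves when the page re-executed.
render_count = 0

def render_page(path, params):
    global render_count
    render_count += 1
    return f"<html>{path}?{params}</html>"

class OutputCache:
    def __init__(self, duration, clock):
        self.duration = duration
        self.clock = clock
        self._entries = {}   # (path, params) -> (expires_at, output)

    def get_output(self, path, params):
        key = (path, params)
        entry = self._entries.get(key)
        if entry and entry[0] > self.clock():
            return entry[1]                       # cache hit
        output = render_page(path, params)        # miss: re-execute page
        self._entries[key] = (self.clock() + self.duration, output)
        return output

now = [0]
cache = OutputCache(duration=60, clock=lambda: now[0])
cache.get_output("default.aspx", "Report=1")
cache.get_output("default.aspx", "Report=1")   # within 60s: cache hit
cache.get_output("default.aspx", "Report=2")   # different param: new entry
print(render_count)                            # 2
now[0] = 61
cache.get_output("default.aspx", "Report=1")   # expired: re-executed
print(render_count)                            # 3
```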
Many people don't realize that when the Output Cache is used, the ASP.NET page also generates a set of HTTP headers that downstream caching servers, such as those used by Microsoft Internet Security and Acceleration Server or by Akamai, can use. When HTTP Cache headers are set, the documents can be cached on these network resources, and client requests can be satisfied without having to go back to the origin server.
Using page output caching, then, does not make your application more efficient, but it can potentially reduce the load on your server as downstream caching technology caches documents. Of course, this can only be anonymous content; once it's downstream, you won't see the requests anymore and can't perform authentication to prevent access to it.
Tip 8—Run IIS 6.0 (If Only for Kernel Caching)
If you're not running IIS 6.0 (Windows Server™ 2003), you're missing out on some great performance enhancements in the Microsoft Web server. In Tip 7, I talked about output caching. In IIS 5.0, a request comes through IIS and then to ASP.NET. When caching is involved, an HttpModule in ASP.NET receives the request, and returns the contents from the Cache.
If you're using IIS 6.0, there is a nice little feature called kernel caching that doesn't require any code changes to ASP.NET. When a request is output-cached by ASP.NET, the IIS kernel cache receives a copy of the cached data. When a request comes from the network driver, a kernel-level driver (no context switch to user mode) receives the request, and if cached, flushes the cached data to the response, and completes execution. This means that when you use kernel-mode caching with IIS and ASP.NET output caching, you'll see unbelievable performance results. At one point during the Visual Studio 2005 development of ASP.NET, I was the program manager responsible for ASP.NET performance. The developers did the magic, but I saw all the reports on a daily basis. The kernel mode caching results were always the most interesting. The common characteristic was network saturation by requests/responses and IIS running at about five percent CPU utilization. It was amazing! There are certainly other reasons for using IIS 6.0, but kernel mode caching is an obvious one.
Tip 9—Use Gzip Compression
While not necessarily a server performance tip (since you might see CPU utilization go up), using gzip compression can decrease the number of bytes sent by your server. This gives the perception of faster pages and also cuts down on bandwidth usage. Depending on the data sent, how well it can be compressed, and whether the client browsers support it (IIS will only send gzip compressed content to clients that support gzip compression, such as Internet Explorer 6.0 and Firefox), your server can serve more requests per second. In fact, just about any time you can decrease the amount of data returned, you will increase requests per second.
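A quick measurement illustrates the point. This Python snippet compresses a synthetic, repetitive HTML page with gzip; real savings depend on the actual content, but markup tends to compress very well:

```python
import gzip

# Synthetic, repetitive HTML page (~18 KB) standing in for real output.
html = ("<html><body>" +
        "<div class='row'>Hello, world!</div>" * 500 +
        "</body></html>").encode("utf-8")
compressed = gzip.compress(html)
print(len(html), len(compressed))
print(len(compressed) < len(html) // 5)   # repetitive HTML shrinks a lot
```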
The good news is that gzip compression is built into IIS 6.0 and is much better than the gzip compression used in IIS 5.0. Unfortunately, when attempting to turn on gzip compression in IIS 6.0, you may not be able to locate the setting on the properties dialog in IIS. The IIS team built awesome gzip capabilities into the server, but neglected to include an administrative UI for enabling it. To enable gzip compression, you have to spelunk into the innards of the XML configuration settings of IIS 6.0 (which isn't for the faint of heart). By the way, the credit goes to Scott Forsyth of OrcsWeb who helped me figure this out for the www.asp.net servers hosted by OrcsWeb.
Rather than include the procedure in this article, just read the article by Brad Wilson at IIS6 Compression. There's also a Knowledge Base article on enabling compression for ASPX, available at Enable ASPX Compression in IIS. It should be noted, however, that dynamic compression and kernel caching are mutually exclusive on IIS 6.0 due to some implementation details.
Tip 10—Server Control View State
View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls. View state is a very powerful capability since it allows state to be persisted with the client and it requires no cookies or server memory to save this state. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page that is being displayed when paging through data.
There are a number of drawbacks to the use of view state, however. First of all, it increases the total payload of the page both when served and when requested. There is also an additional overhead incurred when serializing or deserializing view state data that is posted back to the server. Lastly, view state increases the memory allocations on the server.
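The payload cost is easy to estimate. This Python sketch serializes an invented state dictionary into a base64 hidden field (real view state uses ASP.NET's own serializer, not pickle) and measures the extra bytes carried on every response and postback:

```python
import base64
import pickle

# Invented per-page state, e.g. what a DataGrid might persist.
state = {f"DataGrid1.row{i}": f"cell value {i}" for i in range(100)}
hidden_field = base64.b64encode(pickle.dumps(state)).decode("ascii")

page_without = len("<html>...page markup...</html>")
page_with = page_without + len(
    f'<input type="hidden" name="__VIEWSTATE" value="{hidden_field}"/>')
print(page_with - page_without)        # extra bytes per request/response
print(page_with > 2 * page_without)
```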
Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed. The default behavior of the ViewState property is enabled, but if you don't need it, you can turn it off at the control or page level. Within a control, you simply set the EnableViewState property to false, or you can set it globally within the page using this setting: <%@ Page EnableViewState="false" %>
If you are not doing postbacks in a page or are always regenerating the controls on a page on each request, you should disable view state at the page level.
Conclusion
I've offered you some tips that I've found useful for writing high-performance ASP.NET applications. As I mentioned at the beginning of this article, this is more a preliminary guide than the last word on ASP.NET performance. (More information on improving the performance of ASP.NET apps can be found at Improving ASP.NET Performance.) Only through your own experience can you find the best way to solve your unique performance problems. However, during your journey, these tips should provide you with good guidance. In software development, there are very few absolutes; every application is unique.
See the sidebar "Common Performance Myths".
Monday, March 26, 2007
A tip to make your ASP.NET application run faster!
One of the things you want to avoid when deploying an ASP.NET application into production is to accidentally (or deliberately) leave the <compilation debug="true"/> switch on within the application's web.config file.
Doing so causes a number of non-optimal things to happen, including:
1) The compilation of ASP.NET pages takes longer (since some batch optimizations are disabled)
2) Code can execute slower (since some additional debug paths are enabled)
3) Much more memory is used within the application at runtime
4) Scripts and images downloaded from the WebResources.axd handler are not cached
This last point is particularly important, since it means that all client-javascript libraries and static images that are deployed via WebResources.axd will be continually downloaded by clients on each page view request and not cached locally within the browser. This can slow down the user experience quite a bit for things like Atlas, controls like TreeView/Menu/Validators, and any other third-party control or custom code that deploys client resources. Note that the reason why these resources are not cached when debug is set to true is so that developers don’t have to continually flush their browser cache and restart it every time they make a change to a resource handler (our assumption is that when you have debug=true set you are in active development on your site).
When <compilation debug="false"/> is set, the WebResource.axd handler automatically sets a long cache policy on the resources it serves, so each client downloads them only once and caches them locally from then on.
What about binaries compiled with debug symbols?
One scenario that several people find very useful is to compile/pre-compile an application or associated class libraries with debug symbols so that more detailed stack trace and line error messages can be retrieved from it when errors occur.
The good news is that you can do this without having to have the <compilation debug="true"/> switch enabled in your web.config file: compile or pre-compile your assemblies with debug symbols (.pdb files) and deploy those alongside the release binaries.
The debug symbols and metadata in the compiled assemblies will increase the memory footprint of the application, but this can sometimes be an ok trade-off for more detailed error messages.
The <deployment retail="true"/> switch in machine.config
If you are a server administrator and want to ensure that no one accidentally deploys an ASP.NET application in production with the <compilation debug="true"/> switch enabled, one trick you can use is the <deployment retail="true"/> switch.
Specifically, by setting this within your machine.config file:
<configuration>
  <system.web>
    <deployment retail="true"/>
  </system.web>
</configuration>
You will disable the <compilation debug="true"/> switch for every application on the machine, as well as turn off trace output and detailed error messages being shown to remote users.
Setting this switch to true is probably a best practice that any company with formal production servers should follow to ensure that an application always runs with the best possible performance and no security information leakages. There isn’t a ton of documentation on this switch – but you can learn a little more about it here.
Hope this helps,
Scott
Thursday, March 22, 2007
Some techniques to improve your image!
Improve Your Image(s)
Master Image Processing and Management
By Steve C. Orr
A picture is worth a thousand words — and in some cases, they’re worth quite a few dollars too. Content is king on the Internet. Scattered throughout company hard drives everywhere are marketing materials, scanned documentation, artwork, charts containing sensitive data, and other valuable images that can do wonders in the right hands — or horrors in the wrong hands. Consolidating these materials into one central system is a common optimization of corporate dollars these days, and these systems usually must provide some way to get at files from across the Internet. Security is rightly a top concern in most document management systems.
In some basic cases you can configure IIS to manage the files and their permissions for you, but often a more customized system is necessary. As you’re probably aware, a standard Image control is defined with the following ASPX code:
<asp:Image ID="Image1" Runat="server" ImageUrl="SomeImage.jpg" />
When the page is output to the browser, the resulting HTML will consist of a standard <img> tag similar to this:
<img ID="Image1" src="SomeImage.jpg" />
A key point here is that the image is not really part of the page from the server’s point of view. Therefore, you can’t really do any custom image processing (such as cropping, resizing, or adding annotations) within the page itself. Rather, the image file name is all that’s written to the page (inside the image tag). As the browser interprets the HTML, it downloads the image from the Web server as a completely separate request.
Now consider the following code:
<asp:Image ID="Image1" Runat="server" ImageUrl="GenImage1.aspx" />
This Image control declaration illustrates that, instead of pointing directly to an image file, you can point an Image control toward a separate ASP.NET page where you can do any fancy dynamic image processing that is needed.
In this example, GenImage1.aspx doesn’t contain any HTML because its sole purpose is to output an image for inclusion in another page. The only code in the Page_Load event calls the procedure listed in Figure 1.
DisplayImage(New Bitmap("C:\PrivateDir\TopSecret.jpg"))

Private Sub DisplayImage(ByVal bmp As Bitmap)
    With HttpContext.Current
        'Clear any existing page content
        .Response.Clear()
        'Set the content type
        .Response.ContentType = "image/jpeg"
        'Output the image to the OutputStream object
        bmp.Save(.Response.OutputStream, _
            Imaging.ImageFormat.Jpeg)
        'Ensure the image is the only thing that is output
        .Response.End()
    End With
End Sub
Figure 1: ASPX pages don’t have to output HTML. This example outputs an image, so that image controls on other pages can reference this page instead of pointing directly to a static image file.
You might choose to add authentication code to a page such as GenImage1 to ensure only proper individuals see the image. You’re also likely to sprinkle in some code to make this simple example more versatile by accepting an image as a url parameter or some other mechanism to serve out a variety of image files instead of a single hard-coded one.
For an ASP.NET application to effectively manage files, it must have permission to access these files. By default, ASP.NET runs under a user account (intuitively) named ASPNET. This user account has very limited permissions. It will not be able to interact with most of the server’s file system by default, and it won’t have access to any network shares, either. Therefore, you’ll want to give the ASPNET user account the folder permissions it needs, or have ASP.NET use a different user account that does have the necessary permissions.
You can adjust the user account from within IIS, or you can configure Impersonation in the web.config file or the machine.config file. For initial experimentation and debugging I’d suggest having ASP.NET run under your user account since you know what files you have permission to access.
<!-- Web.config file. -->
<identity impersonate="true"/>
<identity impersonate="true" userName="Redmond\BillG" password="Melinda"/>
If the images aren’t stored in a file system, but instead are stored in a SQL Server database, then the code behind for GenImage1.aspx might look more like that shown in Figure 2.
'Connection and command setup reconstructed from context; the original
'listing was truncated. ConnString, the table, and the column names
'are illustrative.
Dim dbConn As New SqlClient.SqlConnection(ConnString)
Dim cmdGetFile As New SqlClient.SqlCommand( _
    "SELECT FileName, ContentType, FileSize, FileData " & _
    "FROM Attachments WHERE AttachmentID = @AttachmentID", dbConn)
cmdGetFile.Parameters.Add(New SqlClient.SqlParameter( _
    "@AttachmentID", Request("AttachmentID").ToString))
dbConn.Open()
Dim dr As SqlClient.SqlDataReader = cmdGetFile.ExecuteReader()
If dr.Read() Then
    Response.Clear()
    Response.ContentType = dr("ContentType").ToString()
    Response.OutputStream.Write(CType(dr("FileData"), _
        Byte()), 0, CInt(dr("FileSize")))
    Response.AddHeader("Content-Disposition", _
        "inline;filename=" + dr("FileName").ToString())
End If
dr.Close()
dbConn.Close()
Figure 2: You can grab the image data from a database and write the raw file data directly into the Output Stream just before it’s sent to the browser.
This technique shows how you can dump a file directly from a database into the Response.OutputStream. ADO.NET is used to extract the binary data from a SQL Server image field, the data is then converted into a byte array, and, finally, it’s written to the output stream along with a descriptive header to help the browser better interpret the resulting file. For more details on this technique, see Easy Uploads.
Custom Image Generation
By using the functionality included in the System.Drawing namespace, your image manipulation capabilities are limitless. As if that weren’t enough power for a single developer to wield, there are also dozens of third-party components available under such categories as charting, reporting, and image processing libraries. Additionally, you can build your own image processing object models either from scratch or by building on existing technologies. Hopefully by now you’re beginning to realize the full power that can really lie behind the seemingly humble image control.
The previous techniques are great for distributing pre-existing images, but if you need to dynamically create an image from scratch (or modify an existing image on the fly), then the System.Drawing namespace will become quite familiar to you. Using the classes within this namespace you could create dynamic charts, graphs, or other useful output. However, that's soooo boring! The next example will focus on less tangible corporate enhancements, such as improved morale.
Smiles can be infectious, and the next example will generate as many as you’d like. Call the subroutine shown in Figure 3 to create a randomly generated smiley face.
Private Sub DrawSmiley(ByVal g As Graphics, _
    ByVal Width As Integer, ByVal Height As Integer, _
    ByVal rand As Random)
    'Note: the first line of this declaration was lost; the Sub name
    'DrawSmiley is reconstructed and may differ from the original.
Dim SmileyWidth As Integer = rand.Next(Width / 2)
Dim SmileyHeight As Integer = rand.Next(Height / 2)
'Draw the head (a big circle)
Dim x As Integer = rand.Next(Width - SmileyWidth)
Dim y As Integer = rand.Next(Height - SmileyHeight)
Dim PenWidth As Integer = rand.Next(5)
Dim RandomColor As Color = _
Color.FromArgb(rand.Next(255), _
rand.Next(255), rand.Next(255))
Dim Pen As New Pen(RandomColor, PenWidth)
g.DrawEllipse(Pen, x, y, SmileyWidth, SmileyHeight)
'Draw the Nose (in the center of the head)
Dim NoseRect As System.Drawing.RectangleF
NoseRect.Width = CInt(SmileyWidth / 50)
NoseRect.Height = CInt(SmileyHeight / 50)
NoseRect.X = CInt(x + (SmileyWidth / 2) - _
(NoseRect.Width / 2))
NoseRect.Y = CInt(y + (SmileyHeight / 2) - _
(NoseRect.Height / 2))
g.DrawEllipse(Pen, NoseRect)
g.FillEllipse(Brushes.Green, NoseRect)
'Draw the Left Eye
Dim EyeRect As System.Drawing.RectangleF
EyeRect.Width = CInt(SmileyWidth / 30)
EyeRect.Height = CInt(SmileyHeight / 30)
EyeRect.X = CInt(x + (SmileyWidth / 2) - _
(EyeRect.Width / 2) - (SmileyWidth / 4))
EyeRect.Y = CInt(y + (SmileyHeight / 3) - _
(EyeRect.Height / 2))
g.DrawEllipse(New Pen(Color.Blue, PenWidth), EyeRect)
g.FillEllipse(Brushes.Blue, EyeRect)
'Draw the Right Eye
EyeRect.Width = CInt(SmileyWidth / 30)
EyeRect.Height = CInt(SmileyHeight / 30)
EyeRect.X = CInt(x + (SmileyWidth / 2) - _
(EyeRect.Width / 2) + (SmileyWidth / 4))
EyeRect.Y = CInt(y + (SmileyHeight / 3) - _
(EyeRect.Height / 2))
g.DrawEllipse(New Pen(Color.Blue, PenWidth), EyeRect)
g.FillEllipse(Brushes.Blue, EyeRect)
'Draw the smile
Dim points(2) As System.Drawing.PointF
points(0) = New System.Drawing.PointF(CInt(x + _
(SmileyWidth / 2) - (EyeRect.Width / 2) - _
(SmileyWidth / 4)), y + (SmileyHeight / 2))
points(1) = New System.Drawing.PointF(CInt(x + _
(SmileyWidth / 2)), y + (SmileyHeight / 2) + _
(SmileyHeight / 4))
points(2) = New System.Drawing.PointF(CInt(x + _
(SmileyWidth / 2) - (EyeRect.Width / 2) + _
(SmileyWidth / 4)), y + (SmileyHeight / 2))
g.DrawCurve(Pen, points, 1)
End Sub
Figure 3: By using the classes within the System.Drawing namespace, nearly any illustration imaginable can be generated at run time, including a bunch of smiley faces.
The first parameter is a Graphics object, which is the canvas on which this masterpiece will be painted. The height and width of the canvas are also passed along, to help ensure no smileys get abruptly cut off at the edges of the canvas. Finally, a Random object is passed along, which will be used to mix things up a bit.
Using the Random object, a random height and width are generated for the current smiley face, and the head is drawn within this bounding rectangle. A pen of random thickness and color is created and used to draw most features of the face. The DrawEllipse method draws the outline of a circle, and the FillEllipse method is used in concert with it to fill the circle with color. Three smaller filled circles are then drawn within the head to represent the nose and two eyes. Finally, the smile is drawn by passing an array of points to the DrawCurve method of the Graphics object. The mathematical formulas throughout the example simply calculate the position and size of each facial feature.
The final piece of this image generation puzzle is the code that will fill the Page_Load event of GenImage1.aspx and call the DrawSmiley routine. This Page_Load code is listed in Figure 4.
Dim rand As New Random 'random number generator
Dim bmp As Bitmap 'to hold the picture
Dim Width As Integer = 200 'image width
Dim Height As Integer = 200 'image height
Dim NumberOfSmileys As Integer = 3
'Grab parameters from the querystring (if any)
If Not IsNothing(Request.QueryString("NumSmileys")) Then
NumberOfSmileys = _
Int32.Parse(Request.QueryString("NumSmileys"))
End If
If Not IsNothing(Request.QueryString("Width")) Then
Width = _
Int32.Parse(Request.QueryString("Width"))
End If
If Not IsNothing(Request.QueryString("Height")) Then
Height = _
Int32.Parse(Request.QueryString("Height"))
End If
'create a new bitmap of the specified size
bmp = New Bitmap(Width, Height, _
Drawing.Imaging.PixelFormat.Format16bppRgb565)
'Get the underlying Graphics object.
Dim g As Graphics = Graphics.FromImage(bmp)
'Specify a white background
g.FillRectangle(Brushes.White, g.ClipBounds)
'Smooth out curves
g.SmoothingMode = Drawing2D.SmoothingMode.AntiAlias
'generate random smileys
For i As Integer = 1 To NumberOfSmileys
DrawSmiley(g, Width, Height, rand)
Next
DisplayImage(bmp)
Figure 4: This code goes in the Page_Load event of GenImage1.aspx, which can be referenced by the ImageUrl property of a standard Image control placed on any other page.
First, a few variables are declared with some default values specifying the size of the image and the number of smiley faces that will be drawn. Then the querystring is examined for optional parameters, which will replace the defaults. A blank bitmap is then created with a white background. Antialiasing is turned on to create smoother looking curves for rounded shapes, such as circles and smiles.
The main loop is then entered, iterating once for each smiley face to be drawn by calling the DrawSmiley subroutine mentioned earlier. Finally, the completed image is output by the DisplayImage subroutine in Figure 1.
To see the code in action, create a new WebForm and drop an Image control onto it. Then simply set the ImageUrl property of that Image control to point to the GenImage1.aspx page. The result will look a lot like Figure 5.
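For instance (a minimal sketch; the control ID is my own), the markup on the consuming WebForm might look like this, with the optional querystring parameters read in Figure 4 appended to the URL:

```aspx
<%-- Hypothetical markup: an Image control whose ImageUrl points
     at the dynamic image page. NumSmileys, Width, and Height are
     the optional querystring parameters handled in Figure 4. --%>
<asp:Image ID="SmileyImage" runat="server"
    ImageUrl="GenImage1.aspx?NumSmileys=5&Width=300&Height=300" />
```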
Figure 5: The humble Image control can turn into a powerful tool once you’ve mastered the art of creating dynamic, configurable images at run time.
Conclusion
You should now have enough knowledge to manage and manipulate images in all kinds of complex ways. The graphical possibilities are endless with these tools at your disposal, and you can expand on these ideas in many directions. For example, you could create image buttons and other graphical page elements on demand to keep your Web site feeling constantly fresh and new. Look for a future article about manipulating existing images at run time, such as resizing, optimizing, cropping, rotating, adding borders, and altering colors and brightness.
The techniques outlined in this article are the foundation for virtually every modern third-party graphing component available on the market today. You could also create your own, if so inclined. Let your imagination wander and let me know what kinds of image creation tools you produce as a result.
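As a taste of what a home-grown graphing component might look like (a minimal sketch under my own assumptions: the function name, fixed colors, and margins are illustrative, not from the article), the same System.Drawing calls used for the smileys can render a simple bar chart:

```vb
'Hypothetical sketch: draw a simple bar chart onto a bitmap
'using the same System.Drawing techniques shown above.
Private Function DrawBarChart(ByVal values() As Integer, _
    ByVal Width As Integer, ByVal Height As Integer) As Bitmap
    Dim bmp As New Bitmap(Width, Height)
    Dim g As Graphics = Graphics.FromImage(bmp)
    g.FillRectangle(Brushes.White, 0, 0, Width, Height)
    'Find the tallest value so every bar can be scaled to fit
    Dim max As Integer = 1
    For Each v As Integer In values
        If v > max Then max = v
    Next
    Dim BarWidth As Integer = CInt(Width / values.Length)
    For i As Integer = 0 To values.Length - 1
        Dim BarHeight As Integer = _
            CInt((values(i) / max) * (Height - 10))
        g.FillRectangle(Brushes.SteelBlue, _
            i * BarWidth + 2, Height - BarHeight, _
            BarWidth - 4, BarHeight)
    Next
    g.Dispose()
    Return bmp
End Function
```

The resulting bitmap could then be handed to the same DisplayImage routine called in Figure 4.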
The sample code in this article is available for download.
This article was originally published in ASP.NET Pro Magazine.