Thursday, September 6, 2007

SolutionBase: Watch Web site activity with Webalizer

Takeaway: Do you know who is visiting your Web site, and when? A good Web admin needs to know these statistics. Webalizer is a reliable application that can help you analyze your HTTP servers' traffic, keeping you on top of your sites and how they are being used. In this article, Jack Wallen will take a closer look at Webalizer and how to use it.


You probably take for granted that your Web site is always up and that people are actually visiting it. But are they? If they are, do you actually know where your visitors are coming from, what their referrer was, or what browser they were using? Do you know what the top pages of your site are? How about your top entry and exit pages?


These are the kinds of statistics a good Web admin needs to know. But before you start combing through log files, consider installing Webalizer. Started as a simple Perl script, Webalizer has grown into something far more useful: a fast, reliable application that reads your server log files and presents the results in a user-friendly format, helping you analyze your HTTP servers' traffic and stay on top of your sites and how they are being used. In this article, I'll show you exactly what Webalizer is and how to use it.


Installing Webalizer


Webalizer can be installed in many different ways. I am working in a Fedora 7 environment, so the best means for me to install it is via yum . Of course, there are dependencies to be met; Webalizer depends upon the gd graphics library, so you will need to install gd first. On Fedora (or any distribution that relies on yum ), this can be done with the command yum install gd . Once that is complete, finish up the installation by running the command yum install webalizer to get the application itself installed.


If you are not using a yum -based distribution, or you'd prefer to install from source, the process isn't nearly as simple. Either way, you will still have to get gd installed. Grab a copy of the gd source, unpack the archive (using the tar xvzf gd-2.0.35.tar.gz command), move into the gd directory, and run the usual set of commands to compile the source:


./configure

make

make install


With gd installed, you're ready to install Webalizer. First, download a copy of the Webalizer source. Unpack the archive using the tar xvzf webalizer-2.01-10-src.tgz command, then move into the source directory newly created by tar . Once inside the source directory, run the same compile commands you used earlier.
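The two builds follow the identical unpack-configure-make pattern, so they can be collected into one short script. This is only a sketch: the archive names are the ones used above (adjust for newer releases), the `build_from_tarball` helper is my own naming, and the `make install` steps need root.

```shell
#!/bin/sh
# Sketch: build gd, then Webalizer, from source tarballs.
# Assumes both archives sit in the current directory; run as root for `make install`.

build_from_tarball() {
    tarball="$1"; srcdir="$2"
    tar xvzf "$tarball"                              # unpack the source archive
    ( cd "$srcdir" && ./configure && make && make install )
}

# Only attempt the builds when the tarballs are actually present.
if [ -f gd-2.0.35.tar.gz ]; then
    build_from_tarball gd-2.0.35.tar.gz gd-2.0.35
    build_from_tarball webalizer-2.01-10-src.tgz webalizer-2.01-10
fi
```

Running it from the directory holding both tarballs performs the entire sequence described above in one pass.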


Up and running ... almost


With Webalizer installed, you're probably assuming you can point your browser to http://web_server_add/webalizer/ to see what you have. If you do, the only thing you'll see is:


Not Found

The requested URL /webalizer/ was not found on this server.


What went wrong?


After I installed the application, it took me a while to finally locate where the Webalizer folder had been installed. I have no idea why the rpm installed Webalizer where it did; but, nestled in /var/lib sat my Webalizer folder. After making a backup of the /var/lib/webalizer directory (using the tar cfz webalizer.tgz /var/lib/webalizer command), I decided to move it to /var/www/html .


With the directory in its proper place, I ran -- as root -- the command to start Webalizer, which is simply webalizer . After running the command, I received this error:


Using logfile /var/log/httpd/access_log (clf)

Error: Can't change directory to /var/lib/webalizer


Before I panicked, I looked for a configuration file; inside /etc was the webalizer.conf file, ready to be edited. Naturally, before I moved on to any further configuration, I needed to see that Webalizer was up and running properly. Taking a look inside the /etc/webalizer.conf file, there is a line:


OutputDir /var/lib/webalizer


Since I moved the Webalizer directory, the system can no longer find the directory to send its output to. That's pretty easy to fix. Open up the webalizer.conf file in your favorite text editor, and change that line to:


OutputDir /var/www/html/webalizer


(where /var/www/html is your Web server's document root) and re-run the command. This time, you should see something like this scroll by:
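The same one-line change can be made non-interactively with sed. A sketch, run here against a scratch copy of the file (so a typo can't clobber the real /etc/webalizer.conf):

```shell
# Sketch: rewrite the OutputDir line with sed instead of a text editor.
# A scratch copy stands in for the stock /etc/webalizer.conf.
conf=$(mktemp)
printf 'OutputDir /var/lib/webalizer\n' > "$conf"

# Replace the whole OutputDir line with the new document-root location.
sed -i 's|^OutputDir .*|OutputDir /var/www/html/webalizer|' "$conf"

grep '^OutputDir' "$conf"   # prints: OutputDir /var/www/html/webalizer
```

Pointed at /etc/webalizer.conf itself (as root), the same sed line applies the fix in place.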


Webalizer V2.01-10 (Linux 2.6.21-1.3228.fc7) English

Using logfile /var/log/httpd/access_log (clf)

DNS Lookup (10): 1 addresses in 5.25 seconds

Using DNS cache file dns_cache.db

Creating output in /var/www/html/webalizer

Hostname for reports is 'localhost.localdomain'

Reading history file... webalizer.hist

Generating report for June 2007

Generating summary report

Saving history information...

1087 records in 0.09 seconds


If you point your browser to http://server_address/webalizer now, you should see a screen similar to Figure A .



Figure A



The Webalizer opening screen gives you a yearly summary in a simple-to-read graph.


Now when you select the month (in the lower table) you will be directed to that month's statistical breakdown. The monthly breakdown is incredibly detailed:



  • Per Month : Total Hits, Total Files, Total Pages, Total Visits, Total Kbytes, Total Unique Sites, Total Unique URLs, Total Unique Referrers, Total Unique User Agents

  • Avg/Max : Hits per Hour, Hits per Day, Files per Day, Pages per Day, Visits per Day, KBytes per Day

  • Hits by Response Code

  • Daily Usage : Shown in Figure B

  • Daily Statistics : Hits, Files, Pages, Visits, Sites, Kbytes

  • Hourly Usage : Shown in Figure C

  • Hourly Statistics : Avg/Total Hits, Files, Pages, Kbytes

  • Top URLs

  • Top URLs By Kbytes

  • Top Entry Pages: Shown in Figure D

  • Top Exit Pages

  • Top Sites

  • Top Sites by Total Kbytes

  • Top Referrers

  • Top User Agents

  • Usage By Country

  • Top Countries



Figure B



This shot shows, at a glance, which days are generating the highest traffic.



Figure C



This shot illustrates how much detail the Webalizer system gives you.



Figure D



This shot gives you an idea how Webalizer can help you analyze where your traffic is primarily coming into and leaving from.


Now that you have Webalizer up and running, let's take a look at some of the configuration options available.


Configuring Webalizer


One of the first things to do is set Webalizer up to run at a regular interval. The best solution is to create a cron job that will run Webalizer daily. To do this, create a new file -- webalizer.cron -- with the following contents:


#! /bin/sh

/usr/bin/webalizer


and place it in /etc/cron.daily . Now, make this file executable with the command: chmod +x /etc/cron.daily/webalizer.cron . You can test your new cron job by running the command /etc/cron.daily/webalizer.cron . You should get the same output you did when you ran the webalizer command on its own.
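The whole setup can itself be scripted; a sketch that writes the wrapper and marks it executable. It is pointed at a scratch directory here so it can be tried safely; in real use, CRONDIR would be /etc/cron.daily and you would run it as root.

```shell
# Sketch: create the daily cron wrapper in one step.
# CRONDIR stands in for /etc/cron.daily for safe experimentation.
CRONDIR=$(mktemp -d)

cat > "$CRONDIR/webalizer.cron" <<'EOF'
#!/bin/sh
/usr/bin/webalizer
EOF

chmod +x "$CRONDIR/webalizer.cron"
ls -l "$CRONDIR/webalizer.cron"    # the listing should show the execute bits set
```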


You can customize Webalizer by making changes to its configuration file. Remember, the configuration file is /etc/webalizer.conf . Some of the configuration options you will want to deal with include:



  • LogType : This option defines the type of log file used. The types allowed are: clf (default), ftp (xferlog files produced by wu-ftpd ), or squid (native squid logs).

  • OutputDir : As described above, this is where the Webalizer will place its output.

  • HistoryName : This allows you to define the name of the history file produced. This file keeps data for up to twelve months and by default it is called webalizer.hist .

  • Incremental : If you run a larger site, you will want to enable this. Incremental processing allows you to set up multiple partial log files instead of one large file. The default is no.

  • IncrementalName : If you enable Incremental, you will want to check out this option (if you do not enable Incremental, ignore this option). The default name is webalizer.current . This file will store the most recent report data.

  • ReportTitle : This is the text displayed as the title of the report.

  • HostName : This defines the hostname used on the report. This hostname is the name used on the clickable entries within the report. If you change this, make sure it is correct. The default is localhost. Localhost, of course, will only work if you are viewing the report on the server running Webalizer.

  • HTMLExtension : This allows you to define the file extension to use when creating the HTML pages. The default is .html.

  • PageType : This defines, for Webalizer, what URLs you (or your system) consider a page. The defaults are htm* and cgi .

  • UseHTTPS : This is employed if Webalizer is deployed on a secure server.

  • DNSCache : Here is where you specify your DNS cache file. This file is used for reverse DNS lookups. The default is dns_cache.db .

  • DNSChildren : This is where you can define how many child processes may be used when performing DNS lookups. Standard values are between 5 and 20 with 10 being the default.

  • HTMLPre : This allows you to define any HTML code to insert at the very beginning of the file. The default is a DOCTYPE declaration.

  • HTMLHead : This allows you to define any HTML code to insert between the <HEAD></HEAD> tags.

  • HTMLBody : This allows you to define any HTML code inserted within the <BODY> tag.

  • HTMLPost : This allows you to define any HTML code to insert immediately before the first <HR> of the page.

  • HTMLTail : This allows you to define any HTML code at the bottom of each HTML document.

  • HTMLEnd : This allows you to define any HTML code to add at the very bottom of each HTML document.

  • Quiet : This option suppresses any output messages. If you are running Webalizer from a cron job it is best to use this option.

  • ReallyQuiet : This option will suppress all messages, including warnings.

  • TimeMe : This option will force Webalizer to show the timing information at the end of processing.

  • GMTTime : All reports will be shown in GMT (UTC) time.

  • Debug : Prints additional information within error messages.

  • FoldSeqErr : If set to yes, Webalizer will ignore out-of-sequence errors in the log and process those records instead of skipping them.

  • VisitTimeout : This allows you to set the default timeout for a visit. Default is 1800 seconds.

  • IgnoreHist : This option really shouldn't be used. If used, it will cause Webalizer to ignore the history file.

  • CountryGraph : This allows you to enable or disable the Country Graph. Default is yes (enabled).

  • DailyGraph/DailyStats : These allow you to enable or disable the Daily Graph and Daily Stats. Defaults are yes (enabled).

  • HourlyGraph/HourlyStats : These allow you to enable or disable the Hourly Graph and Hourly Stats. Defaults are yes (enabled).

  • GraphLegend : This allows you to enable the color-coded legends for all graphs. Default is yes.

  • GraphLines : This allows you to set the background grid lines that make the graphs easier to read. The value is the number of lines to draw; use 0 to disable them. The default is 2.

  • Top Options : These options set the number of entries for each table. You can define these to fit your needs. The options are: TopSites, TopkSites, TopURLs, TopKURLs, TopReferrers, TopAgents, TopCountries, TopEntry, TopExit, TopSearch, and TopUsers.

  • All Options : These keywords enable the display of all URLs, Sites, Referrers, User Agents, Search Strings, and Usernames. When enabled, each gets its own separate HTML page. The pages are only generated if there are more items than will fit in the corresponding Top table, and the listings only show items that are normally visible (hidden items stay hidden). The options are: AllSites, AllURLs, AllReferrers, AllAgents, AllSearchStr, and AllUsers.

  • IndexAlias : This causes the string index.html (and any aliases you define) to be stripped from addresses; in other words, /directory/index.html will be displayed as simply /directory/.

  • Ignore* : These keywords cause Webalizer to completely ignore matching log records, excluding them from all totals.

  • Hide* : These keywords prevent matching items from being displayed in the Top tables; the items are still counted in the main totals.

  • Group* : This keyword groups similar objects together.

  • Include* : This keyword allows you to include log records based on hostname, URL, user agent, referrer, or username.

  • SearchEngine : Allows you to define search engines and their query strings that are used to find your site. An example: SearchEngine google.com q=

  • Dump* : These keywords allow sites, URLs, Referrers, User Agents, Usernames, and Search Strings to be dumped into a tab-delimited text file that can be imported into database applications.
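Pulling a few of these options together, a minimal webalizer.conf for a busy site might look like the fragment below. This is a sketch: the hostname and title are placeholders, and any option not shown keeps the default described above.

```
LogType      clf
OutputDir    /var/www/html/webalizer
Incremental  yes
ReportTitle  Usage Statistics for
HostName     www.example.com
Quiet        yes
TopSites     30
TopURLs      30
```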


Final thoughts


I have used Webalizer with many sites. The information it displays is informative, easy to read, and will help you in the analysis of your Web sites. If you're looking for one of the best tools available and your budget points you to open source, Webalizer is the perfect tool for your needs.

SolutionBase: File searching made easy with Linux

Takeaway: If you're using Linux as a desktop operating system, you're not stuck using command line tools in order to find data. Jack Wallen examines some outstanding Linux tools that should make your searching simple.

One of the major innovations claimed by Microsoft in Windows Vista was its improved searching and integrated search bar. Likewise, Apple has been touting Spotlight for desktop search in Mac OS X. If you're using Linux as a desktop operating system, you're not left out in the cold, nor are you stuck using command line tools in order to find data on your Linux workstation.

There are some outstanding Linux tools to make your searching simple. We'll examine the GNOME Beagle search tool, the KDE search tool Kfind, and the Google Linux Desktop.

Beagle

Beagle is a tool that will, as described from the official Beagle Web site, "Ransack your personal information space to find whatever you're looking for." Beagle does so, but with a bit more panache than you'd think. For one, Beagle works in conjunction with the Linux kernel's inotify, which is a kernel-level file notification system that remains aware of any file change.

If you are running a recent version of GNOME, you won't have to worry about installing Beagle; it's already there. Using Beagle is very simple. The first thing you are going to do (in the GNOME Desktop Environment) is to go to the Places menu and select "Search" (as shown in Figure A).

Figure A

It couldn't get any simpler.

This will open up the main Beagle window (shown in Figure B). The Beagle main window has but one function: searching for files. You enter the string you are looking for in the Find bar and press Find Now or [Enter].

Figure B

The tips are pretty much the same for any Web search engine.

When you start up Beagle, more than just a front-end for a command starts up. If you run ps while the Beagle main window is open, you'll see more than just Beagle running. Take a look at Figure C. Notice (when running ps aux | grep beagle) that there are three entries: the Beagle daemon (beagled), Beagle search (beagle-search), and the grep command just run. This shows that Beagle does have a background daemon running.

Figure C

Looks like someone forgot that .exe commands belong in the Windows world.

Unlike the Linux Google Desktop (which we'll chat about later), Beagle doesn't automatically index your entire system. What it does do is index your ~/ directory. So starting up Beagle for the first time will not take hours to complete. However, this does mean, upon first start, the only indexed files are those in your ~/ directory. This, of course, doesn't help those of us who store files on separately mounted drives or in other non-standard directory structures. For that, you will have to tell Beagle where to look.

To do so, open Beagle and go to the Search menu, and select Preferences. Within the Preferences window (see Figure D below) you'll see two tabs (Search and Indexing). Press on the Indexing tab to reveal the location of the indexed directories.

Figure D

You will definitely want to have the Start search & indexing services automatically checked.

Within the Indexing tab, shown in Figure E, you'll see that Index my home directory is checked by default.

Figure E

If you do not keep files in your ~/ directory, you can uncheck the Index My Home Directory check box.

This is good, but we want Beagle to index more locations. To do that, press the Add button which will open up a window where you will navigate to the directory you wish to add. You'll see this in Figure F.

Figure F

It would not be wise (unless you are running as the root user -- which you are not) to add the entire filesystem to the indexing.

Once you have located the directory you want to index, press Open to add it. Depending on the size of the directory you have just added, Beagle could take quite some time to index your files (even hours). So you might want to go about your normal business while Beagle takes care of its business. Of course, there is no notification that Beagle is done indexing your new directories.

I added my /data directory, which is a 40 GB drive mounted at /data, currently 93 percent filled with data. Once I added this directory, it took Beagle less than five minutes to index the entire contents of the drive and start showing me results from searches.

To do a search in Beagle, simply type in the string you're looking for and do one of the following three things: Press [Enter], select Search, or stop typing. You can also easily configure how results are sorted. To do this, select View and then select the option you want.

Another way to search with Beagle is to add a search button to your panel. This just adds a shortcut to the Beagle main window so you don't have to run through the Places menu. It would be nice if the developers just added a text entry window for the panel so you could type in the word you are looking for, and Beagle would pop up the results.

Beagle is a great search feature that works exactly as it should. Now let's move on to KDE and see what it has to offer.

KDE searches

Searching in KDE is not quite as advanced as Beagle, but it's as intuitive as one would expect in the user-friendly world of KDE. The application used is called Kfind.

Kfind has a problem: it's not an indexing search tool. In other words, when you go to search for a file, Kfind starts from the beginning and rescans your system. So if you're searching a rather large file system for a file that starts with a "z" and the file is located in a directory that starts with a "z", you might have a while to wait. Of course, you can narrow your parameters; but, if you have no idea where to begin (and thus can't narrow your parameters), you're just going to have to wait.

There are two ways to run Kfind. First, from within the Konqueror browser, you can hit [Ctrl]F to add the Kfind applet within Konqueror, as shown in Figure G.

Figure G

Now Konqueror has a find applet added.

Once the Kfind applet has been opened within Konqueror, a search is simple. It will look as if there is an "indexing" option, but what it really does is tell Kfind to use the locate database. This is not really the same as a more modern search indexing feature, but the locate database is generally updated daily, so it does at least keep track of what has changed.

You can also do your Kfind searches without the help of Konqueror (at least initially) by adding the Kfind applet to the Kpanel. To do this, right-click the Kpanel and select Add Applet To Panel. A new window will appear where you will scroll down until you reach Find, as shown in Figure H.

Figure H

Highlight Find and press Add to add the Kfind applet to the panel.

Now when you press the Kfind button on the panel, a submenu will appear, offering you two choices: Find Files and Web Searches. Selecting Find Files will open up the Kfind application. If you click on a Web Search, Kfind will open up Konqueror to www.google.com.

Google Linux Desktop

GLD is the Mack Daddy of Linux file search tools. This tool is not only the easiest to use, it's the fastest (once it's indexed) and most reliable. It's simple to install: Download the required file from the Google Linux Desktop page and install it. I am using Fedora, so I will install the rpm with the command rpm -ivh google-desktop-linux-1.0.1.0060.rpm.

Once installed, though, I was a bit confused at how to start the tool. In KDE, I found a menu entry for the Google Desktop, but selecting the entry didn't actually start anything. It wasn't until I stumbled across a keyboard shortcut within either Firefox or Konqueror that I saw Google Desktop in action.

With either browser open, if you press [Ctrl] twice in quick succession, the Google Desktop tool will appear, as shown in Figure I.

Figure I

Enter your search parameters and press [Enter] to see the results.

Once you enter your search string, do not press [Enter]; let the Google Desktop show you the initial results first, as shown in Figure J. In order to see the complete results of your string, you will want to select See All Results In A Browser.

Figure J

If you select Search More, you will see multiple search options for your string.

You can also customize your Google Desktop experience. From the KDE menu, if you go to the Google Desktop submenu, you'll see two entries: Google Desktop and Google Desktop Preferences. Select the latter to open up the GLD Preferences window.

One of the first configurations I took care of was to change the default action of the Quick Search box, which is found in the Display tab. I am not sure why the developers of GLD made Search The Web the default action; it makes no sense. Instead, I changed that to Search Desktop.

The rest of the Google Linux Desktop preferences are all straightforward. The only option which might need addressing is the Advanced Features under the Other tab. This is one of those features that I would recommend you shut off immediately. Keeping this feature enabled allows GLD to collect "non-personal" information and send it to Google. (No thank you; Google doesn't need to know that much about me and my computer.)

Final thoughts

Did you think Linux was only a collection of antiquated command lines? Shame on you. As you can see, the Linux desktop is full of outstanding search tools. Google Linux Desktop may be the most popular, but the other graphical options are not far behind.

Of course, I still tend to stick with find and locate; but, with Google Linux Desktop on my system, the days of command line searches might just be fading fast.
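For the command line holdouts, those two tools behave quite differently: find walks the filesystem live (always current, but slow on big trees), while locate answers from a prebuilt database that updatedb refreshes, typically once a day from cron. A quick sketch, searching a scratch directory built just for the demonstration:

```shell
# find scans the tree at run time; here we search a throwaway directory.
demo=$(mktemp -d)
touch "$demo/report.txt" "$demo/notes.txt"

find "$demo" -name 'report*'   # prints the full path to report.txt

# locate, by contrast, is near-instant but only knows about files that
# existed at the last updatedb run:
#   locate report.txt
```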

ASP.NET C# Control Data Binding

When you want to visually deal with records of a database from an ASP.NET application, you may simply want to view the data. Although Microsoft Visual Studio 2005 provides various effective means of binding data to Web controls, sometimes you may want to bind the controls manually. To do this, you can use a DataSet object.


The DataSet class allows you to access any type of information from a table, including the table's name, its columns (and their properties), and its records. This means you should be able to locate a record, retrieve its value, and assign it to a control. Probably the only real concern is making sure your DataSet object can get the necessary records. The records could come from any database (Microsoft SQL Server, Oracle, Microsoft Access, Paradox, etc.).


Here is an example of binding two text boxes to the records of a Microsoft SQL Server table:


using System;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        SqlConnection conDatabase = new SqlConnection(
            "Data Source=(local);Database='bcr1';" +
            "Integrated Security=true");
        SqlCommand cmdDatabase = new SqlCommand(
            "SELECT * FROM dbo.Employees;", conDatabase);
        DataSet dsEmployees = new DataSet("EmployeesSet");
        SqlDataAdapter sda = new SqlDataAdapter();

        sda.SelectCommand = cmdDatabase;
        sda.Fill(dsEmployees);

        DataRow recEmployee = dsEmployees.Tables[0].Rows[0];
        txtFirstName.Text = (String)recEmployee["FirstName"];
        txtLastName.Text = (String)recEmployee["LastName"];

        conDatabase.Close();
    }
}

C# Introduction and Overview

For the past two decades, C and C++ have been the most widely used languages for developing commercial and business software. While both languages provide the programmer with a tremendous amount of fine-grained control, this flexibility comes at a cost to productivity. Compared with a language such as Microsoft® Visual Basic®, equivalent C and C++ applications often take longer to develop. Due to the complexity and long cycle times associated with these languages, many C and C++ programmers have been searching for a language offering better balance between power and productivity.

There are languages today that raise productivity by sacrificing the flexibility that C and C++ programmers often require. Such solutions constrain the developer too much (for example, by omitting a mechanism for low-level code control) and provide least-common-denominator capabilities. They don't easily interoperate with preexisting systems, and they don't always mesh well with current Web programming practices.

The ideal solution for C and C++ programmers would be rapid development combined with the power to access all the functionality of the underlying platform. They want an environment that is completely in sync with emerging Web standards and one that provides easy integration with existing applications. Additionally, C and C++ developers would like the ability to code at a low level when and if the need arises.

Microsoft Introduces C#

The Microsoft solution to this problem is a language called C# (pronounced "C sharp"). C# is a modern, object-oriented language that enables programmers to quickly build a wide range of applications for the new Microsoft .NET platform, which provides tools and services that fully exploit both computing and communications.

Because of its elegant object-oriented design, C# is a great choice for architecting a wide range of components, from high-level business objects to system-level applications. Using simple C# language constructs, these components can be converted into XML Web services, allowing them to be invoked across the Internet, from any language running on any operating system.

More than anything else, C# is designed to bring rapid development to the C++ programmer without sacrificing the power and control that have been a hallmark of C and C++. Because of this heritage, C# has a high degree of fidelity with C and C++. Developers familiar with these languages can quickly become productive in C#.

Productivity and Safety

The new Web economy, where competitors are just one click away, is forcing businesses to respond to competitive threats faster than ever before. Developers are called upon to shorten cycle times and produce more incremental revisions of a program, rather than a single monumental version.

C# is designed with these considerations in mind. The language is designed to help developers do more with fewer lines of code and fewer opportunities for error.

Embraces emerging Web programming standards
The new model for developing applications means more and more solutions require the use of emerging Web standards like Hypertext Markup Language (HTML), Extensible Markup Language (XML), and Simple Object Access Protocol (SOAP). Existing development tools were developed before the Internet or when the Web as we know it today was in its infancy. As a result, they don't always provide the best fit for working with new Web technologies.

C# programmers can leverage an extensive framework for building applications on the Microsoft .NET platform. C# includes built-in support to turn any component into an XML Web service that can be invoked over the Internet, from any application running on any platform.

Even better, the XML Web services framework can make existing XML Web services look just like native C# objects to the programmer, thus allowing developers to leverage existing XML Web services with the object-oriented programming skills they already have.

There are more subtle features that make C# a great Internet programming tool. For instance, XML is emerging as the standard way to pass structured data across the Internet. Such data sets are often very small. For improved performance, C# allows the XML data to be mapped directly into a struct data type instead of a class. This is a more efficient way to handle small amounts of data.

Eliminates costly programming errors
Even expert C++ programmers can make the simplest of mistakes (forgetting to initialize a variable, for instance), and often those simple mistakes result in unpredictable problems that can remain undiscovered for long periods of time. Once a program is in production use, it can be very costly to fix even the simplest programming errors.

The modern design of C# eliminates the most common C++ programming errors. For example:

  • Garbage collection relieves the programmer of the burden of manual memory management.
  • Variables in C# are automatically initialized by the environment.
  • Variables are type-safe.

The end result is a language that makes it far easier for developers to write and maintain programs that solve complex business problems.

Reduces ongoing development costs with built-in support for versioning
Updating software components is an error-prone task. Revisions made to the code can unintentionally change the semantics of an existing program. To assist the developer with this problem, C# includes versioning support in the language. For example, method overriding must be explicit; it cannot happen inadvertently as in C++ or Java. This helps prevent coding errors and preserve versioning flexibility. A related feature is the native support for interfaces and interface inheritance. These features enable complex frameworks to be developed and evolved over time.

Put together, these features make the process of developing later versions of a project more robust and thus reduce overall development costs for the successive versions.

Power, Expressiveness, and Flexibility

Better mapping between business process and implementation
With the high level of effort that corporations spend on business planning, it is imperative to have a close connection between the abstract business process and the actual software implementation. But most language tools don't have an easy way to link business logic with code. For instance, developers probably use code comments today to identify which classes make up a particular abstract business object.

The C# language allows for typed, extensible metadata that can be applied to any object. A project architect can define domain-specific attributes and apply them to any language element: classes, interfaces, and so on. The developer then can programmatically examine the attributes on each element. This makes it easy, for example, to write an automated tool that will ensure that each class or interface is correctly identified as part of a particular abstract business object, or simply to create reports based on the domain-specific attributes of an object. The tight coupling between the custom metadata and the program code helps strengthen the connection between the intended program behavior and the actual implementation.

Extensive interoperability
The managed, type-safe environment is appropriate for most enterprise applications. But real-world experience shows that some applications continue to require "native" code, either for performance reasons or to interoperate with existing application programming interfaces (APIs). Such scenarios may force developers to use C++ even when they would prefer to use a more productive development environment.

C# addresses these problems by:

  • Including native support for the Component Object Model (COM) and Windows-based APIs.
  • Allowing restricted use of native pointers.

With C#, every object is automatically a COM object. Developers no longer have to explicitly implement IUnknown and other COM interfaces. Instead, those features are built in. Similarly, C# programs can natively use existing COM objects, no matter what language was used to author them.

For those developers who require it, C# includes a special feature that enables a program to call out to any native API. Inside a specially marked code block, developers are allowed to use pointers and traditional C/C++ features such as manually managed memory and pointer arithmetic. This is a huge advantage over other environments. It means that C# programmers can build on their existing C and C++ code base, rather than discard it.
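A small sketch of such a specially marked block, using C#'s unsafe feature (the program must be compiled with the /unsafe compiler switch; the names here are illustrative):

```csharp
using System;

class Program
{
    // Inside an unsafe block, raw pointers and pointer
    // arithmetic are allowed, as in C or C++.
    unsafe static void Main()
    {
        int[] data = { 1, 2, 3 };
        fixed (int* p = data)    // pin the array so the GC won't move it
        {
            Console.WriteLine(*(p + 2)); // prints 3
        }
    }
}
```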

In both cases, COM support and native API access, the goal is to provide the developer with essential power and control without having to leave the C# environment.

Conclusion

C# is a modern, object-oriented language that enables programmers to quickly and easily build solutions for the Microsoft .NET platform. The framework provided allows C# components to become XML Web services that are available across the Internet, from any application running on any platform.

The language enhances developer productivity while serving to eliminate programming errors that can lead to increased development costs. C# brings rapid Web development to the C and C++ programmer while maintaining the power and flexibility that those developers call for.

Setting Up Global Objects with the global.asax File

By Phil Syme

ASP.NET uses a special file, called global.asax, to establish any global objects that your Web application uses. The .asax extension denotes an application file, as opposed to the .aspx extension used for a page file.


Each ASP.NET application can contain at most one global.asax file. The file is compiled on the first page hit to your Web application. ASP.NET is also configured so that any attempts to browse to the global.asax page directly are rejected.


Listing 3.8 shows a global.asax file that you can use to make a more complete hit counter.

Listing 3.8 global.asax: Event Handlers for the Application and Session Objects


 1: <%@ language="C#" %>
 2: <script runat="server">
 3: void Application_Start(Object Sender, EventArgs e)
 4: {
 5:     Application["Hits"] = 0;
 6:     Application["Sessions"] = 0;
 7:     Application["TerminatedSessions"] = 0;
 8: }
 9:
10: //The BeginRequest event is fired for every hit to every page in the site
11: void Application_BeginRequest(Object Sender, EventArgs e)
12: {
13:     Application.Lock();
14:     Application["Hits"] = (int) Application["Hits"] + 1;
15:     Application.UnLock();
16: }
17: void Session_Start(Object Sender, EventArgs e)
18: {
19:     Application.Lock();
20:     Application["Sessions"] = (int) Application["Sessions"] + 1;
21:     Application.UnLock();
22: }
23:
24: void Session_End(Object Sender, EventArgs e)
25: {
26:     Application.Lock();
27:     Application["TerminatedSessions"] =
28:         (int) Application["TerminatedSessions"] + 1;
29:     Application.UnLock();
30: }
31:
32: void Application_End(Object Sender, EventArgs e)
33: {
34:     //Write out our statistics to a log file
35:     //...code omitted...
36: }
37: </script>


The global.asax file in Listing 3.8 contains event handlers for the Session and Application objects. Each event handler has the same signature as the Page_Load event handler.


The code in Listing 3.8 handles three Application object-related events: Start (Lines 3-8), End (Lines 32-36), and BeginRequest (Lines 11-16). Start and End are called when the Web application starts and ends, respectively. BeginRequest is called for every page hit that the site receives. Listing 3.8 updates the total number of hits on this event.


The Session Start (Lines 17-22) and End (Lines 24-30) events are handled in the middle of the listing. These two events count how many different Web users have accessed the site.


You can write a simple page to utilize the statistics that Listing 3.8 tracks. Listing 3.9 shows a page that writes out the results of the hit-counting code.




Figure 3.5 shows the Statistics page after a few hits.

Listing 3.9 Statistics.aspx: The Results of the Tracking in the global.asax File


<%@ page language="C#" %>


<h2> Statistics for the Test Web Application </h2>

Total hits: <% Response.Write(Application["Hits"].ToString()); %>


Total sessions: <% Response.Write(Application["Sessions"].ToString()); %>


Expired sessions:

<% Response.Write(Application["TerminatedSessions"].ToString()); %>




TIP


If the global.asax file is modified, the Web application is restarted on the next page hit, and the global.asax file is recompiled.


Figure 3.5 The Statistics page after some traffic.

Adding Objects to the global.asax File


To use global objects in your ASP.NET application, add the <object> tag in the global.asax file for each one. The <object> tag has an optional attribute called scope, which determines whether the added object will be created on the fly, associated with the Application object, or associated with the Session object.


To explore the <object> tag, let's create a simple class that stores and retrieves strings. The sample is going to associate an object of this class with the Application object in the global.asax file, so the class must be thread-safe. The term thread-safe means that many client threads can access the class at the same time without any data corruption. Because ASP.NET uses one thread per page, ensuring that the class is thread-safe is critical if multiple users browse the site at the same time.


Understanding Threads


What's a thread? To answer, let's review processes first. All Windows applications are processes that run on your computer. Processes contain their own code and memory space, and can interact with computer peripherals, such as the screen or the network card. ASP.NET runs as a process, and it executes your code, of course.


Each process contains one or many threads. A thread is like a process or an individual program because it also executes a certain set of code. However, a thread is a "lightweight" version of a process. Threads live inside processes and use a process's memory. The Windows operating system gives each thread a small amount of time to execute and quickly switches between threads so that it seems like more than one thread is executing at the same time. For all practical purposes, the threads are running at the same time.


Because threads use their parent process' memory, they can potentially change the same object (in memory) at the same time. For two threads, A and B, thread A might add 10 to a counter object. Thread B might subtract 10 from the counter. If the two threads are switched on and off by the operating system in an "unlucky" way, the counter object could contain a scrambled result.
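To make the scenario concrete, here is a small C# sketch of two threads updating a shared counter; the lock statement plays the same role that Application.Lock and UnLock play in Listing 3.8 (the class and variable names are illustrative):

```csharp
using System;
using System.Threading;

class Counter
{
    static int total = 0;
    static object sync = new object();

    static void Add(int amount)
    {
        for (int i = 0; i < 100000; i++)
        {
            // Without the lock, this read-modify-write could interleave
            // with the other thread's and scramble the result.
            lock (sync) { total += amount; }
        }
    }

    static void Main()
    {
        Thread a = new Thread(delegate() { Add(10); });
        Thread b = new Thread(delegate() { Add(-10); });
        a.Start(); b.Start();
        a.Join(); b.Join();
        Console.WriteLine(total); // 0, but only because each update is locked
    }
}
```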


Each ASP.NET page, when it is being processed, gets its own thread. If more than one user uses the Web site at the same time, many threads will appear, even if both users are accessing the same ASP.NET page.


To prevent threads (and ASP.NET pages) from interfering with each other when accessing the same object, use the technique in the example that follows.


To make the class thread-safe, use the Synchronized method of the base collection class, Hashtable. This class is shown in Listing 3.10.

Listing 3.10 MyClass.cs: Implementing a Class to Store and Retrieve Strings in a Thread-Safe Way


using System;
using System.Collections;

namespace TestApplication {

    public class MyClass {

        private Hashtable m_col;
        //m_colSync will be a thread-safe container for m_col
        private Hashtable m_colSync;

        public MyClass() {
            m_col = new Hashtable();
            m_colSync = Hashtable.Synchronized(m_col);
        }

        public void AddItem(String Name, String Value) {
            m_colSync[Name] = Value;
        }

        public String GetItem(String Name) {
            return (String) m_colSync[Name];
        }
    }
}

//note: use "csc /out:bin\myclass.dll /t:library myclass.cs /r:system.dll"
// to compile with the command line utility


The next step is to add this object to the global.asax file with the <object> tag. A short global.asax file that does this follows:


<object id="StringCollection" runat="server" class="TestApplication.MyClass" scope="Application">


The id attribute tells ASP.NET what to call our object when it's used later. The class attribute can be used to specify COM objects in addition to .NET components; COM objects are identified by their ProgID.


If the listing omitted the scope attribute, a new object would be created on the fly for every page that uses the StringCollection object.


Let's write a sample page that uses StringCollection. Listing 3.11 shows just such a page.

Listing 3.11 UseObject.aspx: Using the Global Object


<%@ language="C#" %>

<script runat="server">
void Page_Load(Object Sender, EventArgs e)
{
    StringCollection.AddItem("FirstUser", "Joe Smith");
}
</script>


The name of the first user is

<% Response.Write(StringCollection.GetItem("FirstUser")); %>


Putting Code Behind the global.asax File


If you use Visual Studio .NET to create your Web project, it will use the code behind feature of ASP.NET for global.asax. The code behind file that is generated is named global.asax.cs when using the C# compiler. To use code behind in global.asax manually, use the Application directive instead of the Page directive, like this:


<%@ Application Inherits="MyApplication.GlobalClass" %>


Listing 3.12 shows a code behind example for the global.asax file.

Listing 3.12 GlobalClass.cs: Implementing the Code for the global.asax File


namespace MyApplication
{
    using System;
    using System.Web;
    using System.Web.SessionState;

    public class GlobalClass : System.Web.HttpApplication
    {
        protected void Session_Start(Object Sender, EventArgs e)
        {
            Response.Write("Session started<br>");
        }
    }
}

//note: use "csc /out:bin\globalclass.dll /t:library globalclass.cs /r:system.dll /r:system.web.dll"
// to compile with the command line utility


TIP


You can mix code behind and the object tag in the global.asax file.

Basic Ruby on Rails Tutorial

As a newbie, getting started with Rails was tricky without some help from the IRC folks. If you get stuck, that’s a good place for help, as the author hangs out in there pretty regularly.

That said, some sample code is worth its weight in gold, so here’s how I got a basic Rails application running.

First, check GettingStartedWithRails or http://api.rubyonrails.org/ for installation and basic setup instructions.
Super-quick “hello world” app

This was written by ReinH as the quickest possible way to get from install to “hello world”.


alias rails_hello_world='rails hello && cd hello && ./script/generate controller welcome hello && echo "Hello World" > app/views/welcome/hello.rhtml && ./script/server -d && firefox 0.0.0.0:3000/welcome/hello'

Requirements

All the requirements are outlined for each operating system at the top of GettingStartedWithRails
Optional

If you’d like a full IDE for Rails, try Aptana (formerly “RadRails”)
Getting started

First, build a Rails project using the following command:


rails MyProject

Then, start the WEBrick web server with the following command:


cd MyProject
./script/server

If you’re on Windows, if script/server isn’t marked executable, or if the #! line is incorrect for your system for some other reason, you may have to use


ruby ./script/server


and similar commands may also require that you explicitly invoke the Ruby interpreter.

Then browse to http://localhost:3000 and check that you get the “Congratulations, you’re on Rails!” screen. If you don’t see anything, make sure no firewall is blocking localhost on port 3000.
Or with Apache

It is also possible to use the Apache webserver with Ruby on Rails. To do this follow these steps:

* Set up Apache for the Rails application (see GettingStartedWithRails)
* Go to http://rails/ (or whatever your ServerName is) and check that you get the “Congratulations, you’re on Rails!” screen.
Apache2 note: httpd.conf contains an entry that determines the port to be used. For example, with ServerName AServerNameHere:80, port 80 is selected, so you would browse to http://localhost:80/

Building a simple application

Note: this tutorial will use WEBrick-style URLs. If you’re using Apache, please change the URLs to match your configuration.

To start, we’ll make a simple “Hello World” type example. For this demonstration, we won’t even use the database.

Rails is an MVC (Model View Controller) framework, which means that all the output will happen in the controllers and the views. So the first thing we’ll need is a controller. Run this from your Rails project directory to generate one:


./script/generate controller hello index

This will generate all the files necessary for our example. Try browsing to http://localhost:3000/hello and you should see something like:

Hello#index

Find me in app/views/hello/index.rhtml

Now (just like it tells you) open app/views/hello/index.rhtml. This is the View. It contains the text that will be shown at http://localhost:3000/hello (or http://localhost:3000/hello/index ).

Change it however you want. Or leave it alone.

So it should be getting clearer now that URLs in Rails usually follow the format:
hostname/controller/action

Now take a look at app/controllers/hello_controller.rb. This is the Controller. It contains program logic that builds the data for the view. Notice the index function. Any public methods on a controller become actions in Rails.

Try adding the following method just before the final “end”:


def world
@greeting = "hello world!"
end

Now browse to: http://localhost:3000/hello/world

Template is missing

Missing template script/../config/../app/views/hello/world.rhtml

Uh oh! We have a problem. The template is missing. “Template” is a rails word for “View”. And it even tells us the path where it was expecting the template. This path has a lot of ..’s in it, but regardless you can probably see where it’s going. So let’s edit app/views/hello/world.rhtml and add the following text:


<%= @greeting %>

Now refresh that last page.

hello world!

Well look at that. Notice how the controller created a variable @greeting that carried over into the view. All the instance variables (ones that start with @) from the controller get pulled into the view.

Next we’ll get the Model involved, which will take a little more code. In the meantime, feel free to check out the other tutorials on GettingStartedWithRails.

Go on to TutorialStepOne

A Simple French Tutorial to start with Rails : Ma première application RubyOnRails

A Simple Turkish Tutorial to start with Rails : GMYT

Wednesday, September 5, 2007

Phalanger PHP language Compiler for the .NET platform

Phalanger overview

Phalanger is a new PHP implementation that introduces the PHP language into the family of compiled .NET languages. It provides PHP applications with an execution environment that is fast and highly compatible with the vast array of existing PHP code. Phalanger gives web-application developers the ability to benefit from both the ease of use and effectiveness of the PHP language and the power and richness of the .NET platform, taking the best from both sides.

Phalanger and existing PHP applications

Phalanger provides a PHP language and standard library implementation that is compatible with most existing PHP applications, including many large open-source PHP projects (see the Phalanger apps section for more details). Phalanger compiles PHP scripts into MSIL (Microsoft Intermediate Language), which can be executed by the .NET or Mono runtime. This runtime executes MSIL code using JIT (Just-In-Time) compilation, which makes execution far more efficient than interpretation and significantly improves application speed.

As part of the Phalanger project, we also implemented the standard PHP library functions (for example, string and array manipulation). These functions are reimplemented in a managed language (mostly C#) and have very good performance. Thanks to the managed code, the implementation is also more secure, and security can be configured using standard .NET tools. Phalanger also supports calling native PHP4 extensions, which makes it possible to use most existing PHP functions and classes.

Phalanger uses the ASP.NET framework internally, but only for implementing HTTP request and response handling, sessions, and cookies. Page rendering is the same as in PHP, which gives you full control over the generated output and compatibility with existing PHP applications.

Enabling PHP to use .NET classes

Starting with version 2.0, Phalanger supports full interoperability with .NET. This means that you can access almost any .NET class (written in C#, VB.NET, or another managed language) from your PHP applications. This required adding several features to the PHP language that allow you to use .NET concepts such as namespaces (which are used to organize .NET classes) and generics (used for specifying type parameters of methods and classes). These language extensions are called PHP/CLR and are designed to retain dynamic PHP behavior (for more details see PHP/CLR Language Extensions).

Thanks to the PHP/CLR extensions you can easily integrate existing PHP and ASP.NET applications, or use classes available for the .NET Framework in your PHP application. This makes it possible, for example, to modify open-source PHP applications to use the standard ASP.NET 2.0 Membership (user management) system, which is a very powerful option for integrating web applications.

You can also develop new applications using PHP with the PHP/CLR language extensions and combine PHP and other .NET languages (for example, C#) in one project. This lets you leverage the strictness of C# in the application-logic layer, where safety and strict object orientation are important, while using the simplicity and efficiency of the PHP language for developing the presentation layer.

Developing .NET/Mono applications or libraries with PHP

Thanks to the full .NET/Mono support, it is also possible to develop all kinds of .NET applications using the PHP language. This includes applications with Windows Forms/Gtk# user interfaces, class libraries, and web applications built using the ASP.NET infrastructure. This allows you to develop ASP.NET-style applications that benefit from ASP.NET features such as code separation using code-behind, ASP.NET controls (including any third-party controls), and others. You can use Phalanger for smoothly porting PHP applications to the ASP.NET infrastructure, because you can make the original PHP application a part of a larger ASP.NET system but still write all the source code in the PHP language.

Using this option you can also compile existing PHP projects to a standard .NET assembly and use it in any .NET application. Using this technique you can use many of the very useful, publicly available PHP projects in .NET as well. Phalanger contains two different compilation modes. The first mode (called legacy) is fully compatible with standard PHP, and you can use it for compiling any PHP scripts; however, using PHP scripts compiled in legacy mode is a bit more difficult. To make using PHP objects from C# as simple as possible, we also introduced pure mode, in which you have to follow a few additional rules (such as specifying all source files during compilation instead of using includes), but it gives you full .NET interoperability, which means that you can use classes written in PHP directly from C#!

The possibility of developing fully .NET/Mono-compatible applications using the PHP language is demonstrated in the attached screenshot, where you can see a Gtk# application written in PHP running on Fedora Core 6. You can visit the Tutorials section of this page for more examples, including Windows Forms and ASP.NET applications.

Visual Studio Integration

Visual Studio supports the integration of additional languages into the editor using VSIP (the Visual Studio Integration Package). Thanks to this, we were able to implement Visual Studio extensions for PHP developers. These extensions include a wide range of project templates, including Legacy PHP Application, Windows Forms Phalanger Application, ASP.NET Application written in PHP, and many others.

Syntax highlighting for PHP source files is a must-have for every IDE, and we’re working on support for IntelliSense as well. You can also use the Visual Studio debugger to find bugs in your PHP applications (running on Phalanger). The debugger allows you to set breakpoints in the source code, step through the code, and view the values of variables; however, we are still working on improving full VS.NET support.

Requirements

Phalanger runs on Microsoft .NET and Mono. If you want to use it with .NET you’ll need Microsoft .NET Framework 2.0 (which runs on Microsoft Windows 2000/XP/2003/Vista) and, optionally, Internet Information Services (IIS) with ASP.NET installed for hosting Phalanger web applications. For Mono, we recommend using the latest Mono package and, optionally, the Apache web server with Mono support configured (using mod-mono). To benefit from the Visual Studio integration features, Microsoft Visual Studio 2005 is required (Express editions of Visual Studio unfortunately can’t be supported because of licensing limitations).

Phalanger Features

Makes PHP first-class citizen in the .NET languages family

  • Compiles PHP language to the MSIL (Microsoft Intermediate Language), which is a byte-code assembly used by the .NET CLR
  • Allows using .NET objects from the PHP language thanks to the PHP/CLR Language Extensions
  • Enables using libraries written in PHP from other .NET languages

Compiles existing PHP applications to improve execution speed

  • Compiles many existing PHP applications (see Phalanger apps)
  • Improves execution speed thanks to the compilation and use of JIT (Just-In-Time) compilation
  • Implements standard PHP library functions and allows calling native PHP4 extensions using unmanaged code

Extends PHP with useful PHP/CLR extensions

  • PHP/CLR makes it possible to fully integrate PHP applications with the .NET type system
  • It is possible to import namespaces as well as use namespaces in new PHP/CLR projects
  • Allows using .NET generics including writing and extending generic objects in the PHP language
  • Supports .NET custom attributes, partial classes, .NET properties and other important features

Creating .NET libraries in the PHP language

  • Compiles PHP scripts directly to the .NET/Mono assemblies
  • Allows writing objects fully compatible with .NET languages (like C#) in the pure mode
  • Allows calling functions written in PHP and working with PHP objects in the legacy mode

Using .NET libraries in PHP projects

  • Thanks to Phalanger you can use PHP for developing the presentation layer on top of business logic written in C#
  • Phalanger makes it possible to use any .NET object in PHP applications
  • You can use ASP.NET 2.0 Membership for integrating user accounts across PHP and ASP.NET applications

Integrates the PHP language into Microsoft Visual Studio

  • Provides project templates for developing Phalanger applications in Visual Studio
  • Supports syntax highlighting for PHP source files
  • Supports debugging of PHP applications running on Phalanger
