Friday, September 21, 2007

SQL Server Transact-SQL General Tips Part III

By : Brad McGehee

When using the WHILE statement, don't avoid BREAK just because some people consider it bad programming form. Often, when writing Transact-SQL code with the WHILE statement, you can avoid BREAK by moving a few lines of code around; if that works in your case, then by all means skip BREAK. But if avoiding BREAK requires you to add extra lines of code that make your code run slower, don't do it. Sometimes, using BREAK can speed up the execution of your WHILE statements. [6.5, 7.0, 2000, 2005] Updated 6-12-2006
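As a sketch of the idea, the loop below (with a hypothetical stopping condition) exits with BREAK the moment the condition is met, rather than carrying an extra flag variable just to avoid the keyword:

```sql
DECLARE @Counter int
SET @Counter = 1

WHILE @Counter <= 1000
BEGIN
    -- Hypothetical per-iteration work would go here.

    IF @Counter = 10
        BREAK -- Exit immediately; no extra flag variable or re-test needed.

    SET @Counter = @Counter + 1
END
```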

*****

One of the advantages of using SQL Server for n-tier applications is that you can offload much (if not most) of the data processing work from the other tiers and place it on SQL Server. The more work you can perform within SQL Server, the fewer the network roundtrips that need to be made between the various tiers and SQL Server. And generally the fewer the network roundtrips, the more scalable and faster the application becomes.

But in some applications, such as those that involve complex math, SQL Server has traditionally been weak. In these cases, complex math often could not be performed within SQL Server; instead, it had to be performed on another tier, causing more network roundtrips than desired.

With user-defined functions (UDFs), this is becoming less of a problem. UDFs allow developers to perform many complex math functions from within SQL Server--functions that previously could only be performed outside of SQL Server. By taking advantage of UDFs, more work can stay within SQL Server instead of being shuttled to another tier, reducing network roundtrips and potentially boosting your application's performance.

Obviously, boosting your application's performance is not as simple as moving math functions to SQL Server, but it is one feature of SQL Server 2000/2005 that developers can take advantage of in order to boost their application's scalability and performance. [2000, 2005] Updated 6-12-2006
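As a rough sketch of the idea (the function name and formula here are just an illustration, not from any particular application), a scalar UDF can keep a calculation like compound interest inside SQL Server:

```sql
CREATE FUNCTION dbo.udf_CompoundInterest
(
    @Principal money,
    @Rate float,   -- interest rate per period, e.g. 0.05
    @Periods int
)
RETURNS money
AS
BEGIN
    RETURN @Principal * POWER(1e0 + @Rate, @Periods)
END
GO

-- The calculation now happens inside SQL Server, not on another tier.
SELECT dbo.udf_CompoundInterest(1000, 0.05, 10) AS FutureValue
```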

*****

SQL Server 2000/2005 offers a data type called "table." Its main purpose is the temporary storage of a set of rows. A variable of type "table" behaves like a local variable, and like local variables it has a limited scope: the batch, function, or stored procedure in which it was declared. In most cases, a table variable can be used like a normal table. SELECTs, INSERTs, UPDATEs, and DELETEs can all be made against a table variable.

For better performance, if you need a temporary table in your Transact-SQL code, consider using a table variable instead of creating a conventional temporary table. Table variables are often faster, but not always. In addition, table variables found in stored procedures result in fewer compilations (than when using temporary tables), and transactions using table variables only last as long as the duration of an update on the table variable, requiring less locking and logging resources. [2000, 2005] Updated 10-02-2006
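As a sketch, using the Northwind sample database used elsewhere in these tips, a table variable can hold intermediate results in place of a #temp table:

```sql
DECLARE @OrderTotals TABLE
(
    OrderID int PRIMARY KEY,
    Total money
)

-- Populate the table variable just like a temporary table.
INSERT INTO @OrderTotals (OrderID, Total)
SELECT orderid, SUM(UnitPrice * Quantity)
FROM [order details]
GROUP BY orderid

-- Then query it like a normal table; it vanishes when the batch ends.
SELECT OrderID, Total
FROM @OrderTotals
WHERE Total > 1000
```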

*****

Don't repeatedly use the same function to calculate the same result over and over within your Transact-SQL code. For example, if you need to reuse the length of a string throughout your code, perform the LEN function once on the string, assign the result to a variable, and then use this variable as often as needed in your code. Don't recalculate the same value again and again by calling the LEN function each time you need it, as this wastes SQL Server resources and hurts performance. [6.5, 7.0, 2000, 2005] Updated 10-02-2006
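A minimal sketch of the pattern (the variable names and checks are hypothetical):

```sql
DECLARE @ProductName varchar(40)
DECLARE @NameLength int

SET @ProductName = 'Chai'           -- hypothetical value
SET @NameLength = LEN(@ProductName) -- calculate once

-- Reuse the variable instead of calling LEN again each time.
IF @NameLength > 20
    PRINT 'Name may be truncated on the label'
IF @NameLength < 3
    PRINT 'Name looks suspiciously short'
```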

*****

Many developers choose to use an identity column as their primary key. By design, an identity column does not guarantee that each newly created row will be consecutively numbered. This means there will most likely be occasional gaps in the identity column numbering scheme. For most applications, occasional gaps in the identity column present no problems.

On the other hand, some developers don't like these occasional gaps and try to avoid them. With some clever use of INSTEAD OF triggers in SQL Server 2000/2005, it is possible to prevent these numbering gaps. But at what cost?

Trying to force an identity column to number consecutively without gaps can lead to locking and scalability problems, hurting performance. So the recommendation is not to work around the identity column's built-in behavior. If you do, expect performance problems. [2000, 2005] Updated 10-02-2006

*****

If you use BULK INSERT to import data into SQL Server, seriously consider using the TABLOCK hint along with it. This will prevent SQL Server from running out of locks during very large imports, and will also boost performance due to the reduction in lock contention. [7.0, 2000, 2005] Added 11-22-2004
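A sketch of the syntax (the table name, file path, and delimiters here are hypothetical):

```sql
BULK INSERT dbo.SalesStaging
FROM 'C:\Imports\sales.txt'
WITH
(
    TABLOCK,                 -- one table lock instead of many row/page locks
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
```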

*****

To help identify long running queries, use the SQL Server Profiler Create Trace Wizard to run the "TSQL By Duration" trace. You can specify the length of the long running queries you are trying to identify (such as over 1000 milliseconds), and then have these recorded in a log for you to investigate later. [7.0]

Thursday, September 20, 2007

SQL Server Transact-SQL General Tips Part II

By : Brad McGehee

Instead of using temporary tables, consider using a derived table instead. A derived table is the result of using a SELECT statement in the FROM clause of an existing SELECT statement. By using derived tables instead of temporary tables, you can reduce I/O and often boost your application's performance. [7.0, 2000, 2005] Updated 6-12-2006
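As a sketch against the Northwind sample database, the inner SELECT below is a derived table that replaces what might otherwise be a temporary table:

```sql
-- Count each customer's orders in a derived table, then join to it,
-- with no temporary table (and its I/O) involved.
SELECT c.CompanyName, o.OrderCount
FROM Customers c
JOIN
(
    SELECT CustomerID, COUNT(*) AS OrderCount
    FROM Orders
    GROUP BY CustomerID
) o ON o.CustomerID = c.CustomerID
```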

*****

Sometimes it is handy to be able to perform a calculation on one or more columns of a record, and then add the result to similar calculations performed on other related records to find a grand total.

For example, let's say you want to find the total dollar cost of an invoice. An invoice will generally involve a header record and one or more detail records. Each detail record will represent a line item on the invoice. In order to calculate the total dollar cost of an invoice, based on two or more line items, you would need to multiply the quantity of each item sold times the price of each item. Then, you would need to add the total price of each line item together in order to get the total dollar cost of the entire invoice. To keep this example simple, let's ignore things like discounts, taxes, shipping, etc.

One way to accomplish this task would be to use a cursor, as we see below (we are using the Northwind database for this example code):

DECLARE @LineTotal money --Declare variables
DECLARE @InvoiceTotal money
SET @LineTotal = 0 --Set variables to 0
SET @InvoiceTotal = 0

DECLARE Line_Item_Cursor CURSOR FOR --Declare the cursor

SELECT UnitPrice*Quantity --Multiply unit price times quantity ordered
FROM [order details]
WHERE orderid = 10248 --We are only concerned with invoice 10248

OPEN Line_Item_Cursor --Open the cursor
FETCH NEXT FROM Line_Item_Cursor INTO @LineTotal --Fetch next record
WHILE @@FETCH_STATUS = 0

BEGIN
SET @InvoiceTotal = @InvoiceTotal + @LineTotal --Summarize line items
FETCH NEXT FROM Line_Item_Cursor INTO @LineTotal
END

CLOSE Line_Item_Cursor --Close cursor
DEALLOCATE Line_Item_Cursor --Deallocate cursor
SELECT @InvoiceTotal InvoiceTotal --Display total value of invoice

The result for invoice number 10248 is $440.00.

What the cursor does is select all of the line items for invoice number 10248, multiply the quantity ordered by the price to get a line item total for each record, and then add up those line item totals to calculate the total dollar amount for the invoice.

This all works well, but the code is long and hard to read, and performance is not great because a cursor is used. Ideally, for best performance, we need to find another way to accomplish the same goal as above, but without using a cursor.

Instead of using a cursor, let's rewrite the above code using set-based Transact-SQL instead of a cursor. Here's what the code looks like:

DECLARE @InvoiceTotal money
SELECT @InvoiceTotal = sum(UnitPrice*Quantity)
FROM [order details]
WHERE orderid = 10248
SELECT @InvoiceTotal InvoiceTotal

The result for invoice number 10248 is $440.00.

Right away, it is obvious that this is a lot less code and that it is more readable. What may not be obvious is that it uses fewer server resources and performs faster. In our example--with few rows--the time difference is very small, but if many rows are involved, the time difference between the techniques can be substantial.

The secret here is to use the Transact-SQL SUM function to summarize the line item totals for you, instead of relying on a cursor. You can use this same technique to help reduce your dependency on resource-hogging cursors in much of your Transact-SQL code. [6.5, 7.0, 2000, 2005] Updated 6-12-2006

*****

While views are often convenient to use, especially for restricting users from seeing data they should not see, they aren't always good for performance. So if database performance is your goal, avoid using views (SQL Server 2000/2005 Indexed Views are another story).

When the Query Optimizer gets a request to run a view, it runs it just as if you had run the view's SELECT statement from Query Analyzer or Management Studio. In fact, a view runs slightly slower than the same SELECT statement run directly--although in simple cases the difference is small enough that you probably would not notice it.

Another issue with views is that they are often combined (nested) with other code, such as being embedded within another view, a stored procedure, or other T-SQL script. Doing so often makes it more difficult to identify potential performance issues.

Views don't allow you to add more restrictive WHERE clauses as needed. In other words, they can't accept input parameters, which are often needed to restrict the number of records returned. I have seen lazy developers write generic views that return hundreds of thousands of unnecessary rows, and then use other code, such as client code, to filter out the few records that are actually needed. This is a great waste of SQL Server's resources.

Instead of embedding SELECT statements in a view, put them in a stored procedure for optimum performance. Not only do you get an added performance boost (in many cases), you can also use the stored procedure to restrict user access to table columns, just as you can with views. [6.5, 7.0, 2000, 2005] Updated 6-12-2006
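A minimal sketch of the stored-procedure approach, again using Northwind (the procedure name is just an illustration). Unlike a view, it takes a parameter, so only the needed rows ever leave the server:

```sql
CREATE PROCEDURE dbo.usp_OrdersByCustomer
    @CustomerID nchar(5)
AS
-- Expose only the columns users should see, and filter on the server.
SELECT OrderID, OrderDate, ShippedDate
FROM Orders
WHERE CustomerID = @CustomerID
GO

EXEC dbo.usp_OrdersByCustomer @CustomerID = 'ALFKI'
```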

SQL Server Transact-SQL General Tips Part I

By : Brad McGehee

Don't include code, variables, or parameters that don't do anything. This may sound obvious, but I have seen it in some off-the-shelf SQL Server-based applications. For example, you may see code like this:

SELECT column_name FROM table_name
WHERE 1 = 0

When this query is run, no rows will be returned. Obviously, this is a simple example (and most of the cases where I have seen this done have been very long queries). A query like this (even if part of a larger query) doesn't perform anything useful, and doesn't need to be run. It is just wasting SQL Server resources. In addition, I have seen more than one case where such dead code actually causes SQL Server to throw errors, preventing the code from even running. [6.5, 7.0, 2000, 2005] Updated 1-24-2006

*****

Don't be afraid to make liberal use of in-line and block comments in your Transact-SQL code. They will not affect the performance of your application, and they will enhance your productivity when you have to come back to the code and modify it. [6.5, 7.0, 2000, 2005] Updated 1-24-2006

*****

If possible, avoid using SQL Server cursors. They generally use a lot of SQL Server resources and reduce the performance and scalability of your applications. If you need to perform row-by-row operations, try to find another method to perform the task. Some options are to perform the task at the client, use tempdb tables, use derived tables, use a correlated subquery, or use the CASE statement. More often than not, there are non-cursor techniques that can be used to perform the same tasks as a SQL Server cursor. [6.5, 7.0, 2000, 2005] Updated 1-24-2006
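As one sketch of a cursor alternative from the list above, a correlated subquery (here against Northwind) computes a per-row result without stepping through the rows one at a time:

```sql
-- For each order, find its most expensive line item without a cursor.
SELECT o.OrderID,
       (SELECT MAX(od.UnitPrice * od.Quantity)
        FROM [order details] od
        WHERE od.OrderID = o.OrderID) AS LargestLineItem
FROM Orders o
```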

*****

If your users perform many ad hoc queries on your SQL Server data, and you find that many of these "poorly-written" queries take up an excessive amount of SQL Server resources, consider using the "query governor cost limit" configuration option to limit how long a query can run.

This option allows you to specify the maximum number of "seconds" a query may run; whenever the query optimizer determines that a particular query will exceed the limit, the query is aborted before it even begins.

Although the value for this setting is stated in "seconds," it does not refer to clock time. Instead, it relates to the estimated cost of the query as calculated by the query optimizer. You may have to experiment with this value until you find one that meets your needs.

There are two ways to set this option. First, you can change it at the server level (all queries running on the server are affected by it) using sp_configure "query governor cost limit," or you can set it at the connection level (only this connection is affected) by using the SET QUERY_GOVERNOR_COST_LIMIT command. [7.0, 2000, 2005] Updated 1-24-2006
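As a sketch, both forms look like this (the limit of 300 is an arbitrary example value; note that "query governor cost limit" is an advanced option, so "show advanced options" must be enabled first):

```sql
-- Server level: applies to every query on the server.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'query governor cost limit', 300
RECONFIGURE
GO

-- Connection level: applies only to the current connection.
SET QUERY_GOVERNOR_COST_LIMIT 300
```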

*****

You may have heard of a SET command called SET ROWCOUNT. Like the TOP operator, it is designed to limit how many rows are returned from a SELECT statement. In effect, the SET ROWCOUNT and the TOP operator perform the same function.

While in most cases, using either option works equally efficiently, there are some instances (such as rows returned from an unsorted heap) where the TOP operator is more efficient than using SET ROWCOUNT. Because of this, using the TOP operator is preferable to using SET ROWCOUNT to limit the number of rows returned by a query. [6.5, 7.0, 2000, 2005] Updated 1-24-2006
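Side by side, the two techniques look like this (Northwind again; note that SET ROWCOUNT stays in effect for the connection until reset):

```sql
-- Preferred: TOP
SELECT TOP 10 OrderID, OrderDate
FROM Orders
ORDER BY OrderDate DESC

-- Same result with SET ROWCOUNT; it also limits subsequent
-- INSERT/UPDATE/DELETE statements until reset to 0.
SET ROWCOUNT 10
SELECT OrderID, OrderDate
FROM Orders
ORDER BY OrderDate DESC
SET ROWCOUNT 0
```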

*****

If you have the choice of using a join or a subquery to perform the same task within a query, generally the join is faster. But this is not always the case, and you may want to test the query using both methods to determine which is faster for your particular application. [6.5, 7.0, 2000, 2005] Updated 1-24-2006

*****

If you need to create a primary key (using a value meaningless to the record, other than providing a unique value for it), many developers will use either an identity column (with an integer data type) or a uniqueidentifier data type.

If your application can use either option, then you will most likely want to choose the identity column over the uniqueidentifier column.

The reason for this is that the identity column (using the integer data type) only takes up 4 bytes, while the uniqueidentifier column takes 16 bytes. Using an identity column will create a smaller and faster index. [7.0, 2000, 2005] Updated 1-24-2006
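A sketch of the two choices (the table names are hypothetical):

```sql
-- 4-byte key: smaller, faster index.
CREATE TABLE dbo.OrdersByIdentity
(
    OrderID int IDENTITY(1,1) PRIMARY KEY,
    OrderDate datetime
)

-- 16-byte key: four times wider, so fewer keys fit on each index page.
CREATE TABLE dbo.OrdersByGuid
(
    OrderID uniqueidentifier DEFAULT NEWID() PRIMARY KEY,
    OrderDate datetime
)
```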

*****

If your application requires you to create temporary tables for global or per-connection use, consider creating indexes for these temporary tables. While most temporary tables probably won't need, or even use, an index, some larger temporary tables can benefit from them. A properly designed index on a temporary table can be as great a benefit as a properly designed index on a standard database table.

In order to determine if indexes will help the performance of your applications using temporary tables, you will probably have to perform some testing. [6.5, 7.0, 2000, 2005] Updated 1-24-2006
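A sketch of the pattern (the table and index names are hypothetical):

```sql
CREATE TABLE #WorkOrders
(
    OrderID int,
    CustomerID nchar(5),
    Total money
)

-- ...populate #WorkOrders with a large number of rows...

-- An index helps only if the table is large and the column is
-- searched repeatedly; test to be sure it pays for itself.
CREATE NONCLUSTERED INDEX IX_WorkOrders_CustomerID
ON #WorkOrders (CustomerID)
```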

*****

Suppose you have data in your table that represents the logical information of "Yes" and "No" and you want to give the results of a query to someone who isn't working all day with computers. For such people, they may not know that a 1 is the logical representation of TRUE while a 0 represents FALSE. Sure, you can do this at the presentational layer. But what if someone comes to your desk, begging for immediate help? Here's a little trick to make BITs (or any other 0 and 1 data) look a bit more intuitive:

CREATE TABLE MyBits 
(
id INT IDENTITY(1,1) PRIMARY KEY 
, bool BIT
)
GO
INSERT INTO MyBits
SELECT 0  
UNION ALL 
SELECT 1
GO
SELECT 
id 
, bool 
, SUBSTRING('YesNo', 4 - 3 * bool, 3) as YesNo
FROM
MyBits
GO
DROP TABLE MyBits 
 
id          bool YesNo 
----------- ---- ----- 
1           0    No
2           1    Yes
 
(2 row(s) affected)
How does this work? The trick happens inside the SUBSTRING function--precisely, when calculating the start value for the SUBSTRING. If our column "bool" contains a 0, the calculation looks like SUBSTRING('YesNo', 4 - 3 * 0, 3), which resolves to SUBSTRING('YesNo', 4, 3) and therefore correctly returns 'No'. We actually use another feature of SUBSTRING here: if the string is shorter than our requested length, SUBSTRING simply returns the shorter string without padding out the missing characters. Finally, in case a 1 is in our "bool" column, the calculation goes like SUBSTRING('YesNo', 4 - 3 * 1, 3), which is SUBSTRING('YesNo', 1, 3), and that is 'Yes'. [7.0, 2000, 2005] Added 5-9-2005

Fedora Core 6 Tips and Tricks (v1.3)

This is based on my Fedora Core 5 Tips and Tricks page. It is in maintenance-only mode, since a Fedora 7 version of this guide is now in the works. Recent changes are highlighted in yellow.

I've started to add x86_64 specific instructions below when they differ from traditional 32-bit instructions. The biggest issue is with multimedia plug-ins which are still often available only in 32-bit versions.


Fix i586/i686 Kernel issue

Under some circumstances the Fedora Core 6 installer (called Anaconda) will mistakenly install the i586 version of the Kernel rather than the more appropriate i686 version for Pentium 4 and newer 32-bit processors. This is not a problem on x86_64 and non-Intel/AMD processors.

To find out if your system has this problem type the following command:

$ rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n"|grep kernel|sort 
If you have a Pentium 4 or newer processor and the kernel version ends in i586, then your system is affected and needs to be updated to function at its best. There are many ways described on various forums to do this, but by far the easiest is the Kernel Fix Script on the Bugs/FC6Common site:
Fedora Core 6 - Common Bugs and Known Issues
Scroll down and find the script. Download it to your hard drive and run it as root:
# mkdir /tmp/kernel-fix
# cd /tmp/kernel-fix
# sh kernel-fix.sh
Then just follow the prompts and it will update your installed kernel. Use kernel-fix.sh with care, as it runs all of the *.rpm files in the current directory (see line 32 of the script).

Alternatively, if you are running an i686-based system you can avoid the problem entirely by forcing the installation of the correct kernel at install time, booting the installer with the following command:
linux i686

Add support for other repositories

Fedora comes with a ton of software but there are still plenty of packages of interest to most users that are not included for a variety of reasons. This is where you find the MP3 plug-in and a ton of other packages.

These instructions can vary depending on whether you have a 32-bit or 64-bit architecture. If there is a difference, it will be noted. If you don't know which architecture you're running, you can use the following command:

$ uname -m
x86_64
...or...
i686
I'm still working on the 64 bit specific instructions so your feedback is very important.

Before you add repositories it's probably a good idea to make sure your system is fully updated first. It's still early, but right now the Livna and freshrpms repositories seem to be the most useful. The easiest way to get started is to install the freshrpms-release packages:

Both 32 and 64 bits:
# rpm -ihv http://ayo.freshrpms.net/fedora/linux/6/i386/RPMS.freshrpms/freshrpms-release-1.1-1.fc.noarch.rpm
32 bit version:
# rpm -ihv http://rpm.livna.org/fedora/6/i386/livna-release-6-1.noarch.rpm
64 bit version:
# rpm -ihv http://rpm.livna.org/fedora/6/x86_64/livna-release-6-1.noarch.rpm
To automatically install/update the Macromedia Flash version 9.0 plug-in copy This File to your /etc/yum.repos.d directory. You can browse the packages available there at http://rpm.livna.org/fedora/6/i386/ and http://zod.freshrpms.net/.


Install MP3 Plug-in

Since you've been following along this next step is about as easy as it gets. Just use yum to automatically install the MP3 plug-ins for xmms and Rhythmbox like this:
# yum -y install xmms xmms-mp3 xmms-faad2 gstreamer-plugins-ugly libmad libid3tag
While you're here you might as well install my personal favorite (this week at least) music player Banshee:
# yum -y install banshee
The -y flag is to automatically answer yes to any question. If you want to be able to say no you can ignore that flag.

While you're there I highly recommend the grip CD ripper which supports both MP3 and Ogg formats. Once again installation is quite simple:

# yum -y install grip

Install Macromedia Flash/Shockwave plug-in

Flash Plug-in 9.0
If you set up the repositories correctly above you should just need to do this to install the Flash plug-in version 9.0:
# yum -y install flash-plugin
You can get more information about this plug-in at http://macromedia.mplug.org/. Before the plug-in gets installed you'll need to agree to the terms of the license.

Special 64-bit instructions:
Now the problem with 64-bit, even on Windows, is that most plug-ins are for some reason still only available in 32-bit versions. This is a problem because a 64-bit version of Firefox can only use 64-bit plug-ins. There are several ways to solve this, but by far the easiest is to just force the use of the 32-bit Firefox. Both versions are installed by default; you just need to make a little change to make sure only the 32-bit version gets run.

As of firefox-1.5.0.10-5.fc6 the method of selecting the 32-bit version has been simplified. Now you simply create a file called /etc/sysconfig/firefox-arch containing the following lines:

MOZ_LIB_DIR="/usr/lib"
SECONDARY_LIB_DIR="/usr/lib64"

The remainder of these instructions are only for people who have not updated lately and still have an older version of Firefox. These instructions will go away soon:

Edit the file /usr/bin/firefox as root and go down to about line 40 and comment out the following code:

# Force 32 bit version
#if [ -x "/usr/lib64/firefox-1.5.0.8/firefox-bin" ]
#then
# MOZ_LIB_DIR="/usr/lib64"
#fi
Then when you restart Firefox you'll be running the 32 bit version and the plug-ins you installed above will work just fine.

Install DVD player

Currently I find the DVD player that works best is the Xine Multimedia Player which is found in the Livna repository so installing it is just this simple:
# yum -y install xine xine-lib xine-skins xine-lib-extras-nonfree libdvdcss
This will install the xine DVD/VCD/CD player. Now to get xine to automatically play a DVD upon insertion instead of the Totem player which can't actually play DVDs, you can simply use the gconftool-2 utility as follows:
$ gconftool-2 --set /desktop/gnome/volume_manager/autoplay_dvd_command \
'xine --auto-play --auto-scan dvd' --type='string'

Install MPlayer Media Player

At some point you're probably going to want to play a QuickTime, AVI or ASF file, so you'll want the MPlayer media player. Fortunately, with the FreshRpms repository it's also very easy to download and install. Once again there are conflicts between the Livna and FreshRpms repositories, and you'll have to exclude the overlapping packages from one of them.

To prevent potential problems of updates in the Livna repository from messing up the mplayer and mencoder packages add the following line highligted in bold to the file /etc/yum.repos.d/livna.repo :

gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-livna
exclude=mplayer* mencoder* ffmpeg*

[livna-debuginfo]
Then you can go ahead and install mplayer and all it's dependencies:
# yum -y install mplayer mplayer-skins mplayer-fonts mplayerplug-in
This command line will download the whole kit and kaboodle. Note that if you want to play content from the command line, you should use the gmplayer version, which includes a skin-able control panel. This will also install the plug-in to play a wide variety of media within your browser window. Restart your web browser after that whole mess is done installing and you'll have a plug-in for Mozilla so you can play embedded content. While you're at it, be sure to configure mplayer to use the ALSA sound system rather than the default; it just works better. Edit the file ~/.mplayer/config and add the following line:
ao=alsa

Special 64-bit instructions:
This installs the 64-bit version of everything but because your other plug-ins are 32-bits you need to run the 32-bit version of Firefox, which won't be able to use the 64-bit version of the plug-in you just installed. The plug-in can use the 64-bit version of the mplayer application just fine so all you need to do then is to install the 32-bit mplayerplug-in plus a dependency it requires. If you know of any easier way to do this please let me know below.

# rpm -ihv http://ftp.ndlug.nd.edu/pub/fedora/linux/core/6/i386/os/Fedora/RPMS/libXpm-3.5.5-3.i386.rpm
# rpm -ihv http://ftp.freshrpms.net/pub/freshrpms/fedora/linux/6/mplayerplug-in/mplayerplug-in-3.31-2.fc6.i386.rpm
And finally you'll probably also want some additional codecs to play all that proprietary video that seems to have infected the Internet. Go to the MPlayer Download page and download the essential Binaries Codec Package. You'll need to install those files in /usr/local/lib/win32. Here are the steps. Remember the exact file names may change at some point.
# gtar xjvf essential-20061022.tar.bz2
# mkdir /usr/local/lib/win32
# mv essential-20061022/* /usr/local/lib/win32

Install VLC (VideoLAN Client)

Multimedia can be the achilles heel of Linux, but with just a little work you should be able to play just about anything your friends can. Besides Mplayer the other great video player is called VLC. It too is trivially easy to install once you have your repositories set up:
# yum -y install videolan-client
Once the client and a zillion dependencies get installed, you can play a huge variety of video formats easily with the command vlc .

Install RealPlayer 10 Media Player

If you have a better way of installing a Real Media player, please let me know in the comments section below. Thanks to Chandra Shekhar for this great tip for making RealPlayer actually use ALSA instead of OSS. I've incorporated the changes into the guide below, but here is a link to the original document.
http://docs.google.com/View?docid=ddt5bn9t_4c9238p

Before you install the player you'll need to make sure the compat-libstdc++-33 module is installed. Download the RealPlayer10 package from the following location:

RealPlayer10GOLD.rpm
First install the dependencies.
32 bit version
# yum -y install compat-libstdc++-33 alsa-oss
64 bit version There really MUST be an easier way!
# rpm -ihv ftp://fedora.cat.pdx.edu/linux/extras/6/i386/alsa-oss-1.0.12-3.fc6.i386.rpm
Then install the RPM:
# rpm -ihv RealPlayer10GOLD.rpm
The other thing you'll need to do is prevent the mplayerplug-in you installed above from trying to handle Real Media. I don't know why it's included because it almost never works correctly. The easiest way to disable it is to remove the appropriate plugin files:
# cd /usr/lib/mozilla/plugins
# rm mplayerplug-in-rm.so
Then whenever you want to view something just use /usr/bin/realplay . Here is a link to a cute test video to make sure it's working for you.

If audio is working but you have a black screen then it's possible your video card doesn't support XVideo. You can turn it off by clicking on Tools -> Preferences then choose the Hardware tab and disable Use XVideo .

If the video doesn't play properly the first thing to check is to make sure you're not running SElinux, it seems to prevent the RealPlayer from getting access to the drivers. I currently run with SElinux disabled but I recommend you run it in the Targeted mode rather than the most secure mode.

Now a bit of a tricky part. You'll need to edit the executable /usr/bin/realplay as root and locate the section below around line 56. Then add the code that's highlighted and save the file back.

 .
.
export HELIX_LIBS
fi

LD_PRELOAD="$LDPRELOAD:/usr/lib/libaoss.so.0.0.0"
export LD_PRELOAD

# See if LD_PRELOAD contains any of the sound server libs. If so, remove them.
LD_PRELOAD=`echo $LD_PRELOAD | sed -e 's/\([^:]*libesd[^:]*\|[^:]*libarts[^:]*\):\?//g'`
.
.

After you've run it the first time and gone through the configuration screens edit the ~/.realplayerrc file and add the following line in the [helix] section of the configuration:

[helix]
SoundDriver=2
.
.
For some reason on my system RealPlayer uses the old and virtually obsolete OSS sound driver. The line above tells it to use the newer ALSA sound driver instead.

Install Java J2RE and Mozilla Plug-in

It's also very handy to have the Java run-time environment available and most importantly a Mozilla plug-in so you can view dynamic content. It's unfortunate that Mozilla will actually crash if you go to a site containing Java and you don't have the plug-in installed.

For now there is no easy way to do this but I found the following instructions on FedoraForums.org. Basically, start by downloading the Java Runtime Environment (JRE) 5.0 Update 9 (at the time I wrote this) from Sun.com. You'll want to grab the Linux RPM in self-extracting file. Then you want to install it with:

# sh ./jre-1_5_0_09-linux-i586-rpm.bin

Then you'll probably want to enable Java Plug-ins and here once again there is no easy way:

# ln -s /usr/java/jre1.5.0_09/plugin/i386/ns7/libjavaplugin_oji.so /usr/lib/mozilla/plugins
And finally you'll need to tell Fedora that you wish to use this version of Java as the preferred interpreter rather than the Open Source version that's installed by default. You'll of course need to adjust the full pathname if you install a newer version of the jre than the one in this example:
# /usr/sbin/alternatives --install /usr/bin/java java /usr/java/jre1.5.0_09/bin/java 1509
# java -version
java version "1.5.0_09"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_09-b05)
Java HotSpot(TM) Client VM (build 1.5.0_09-b05, mixed mode, sharing)
If you know of an easier way please post it to the Comments section below.

Install NTFS driver

With repositories like Fedora Extras it's now very easy to add NTFS support to Fedora:
# yum -y install ntfs-3g
Then you can simply mount NTFS file systems using the option -t ntfs-3g. You can find more detailed information about this driver at http://www.ntfs-3g.org/

Install Internet Explorer

I know what you're saying, why would I ever want Internet Explorer installed on my perfectly good Linux system? If you don't have your own answer to that question, feel free to just skip this section. For everyone else, it's actually quite easy thanks to some very handy scripts from IEs4Linux. Before you start you'll need to make sure you have wine and cabextract installed:
# yum -y install wine cabextract
Then just download the latest script, extract it and run it. The example below is based on version 2.0.5; just adjust the version number as necessary. Please note that you will want to install and run this as your own user, NOT as root. I used the defaults, except that I installed all the versions of IE. I do some web development and I always find myself needing to resolve some goofy incompatibilities with older versions of IE.
$ gtar xzvf ies4linux-2.0.5.tar.gz
$ cd ies4linux-2.0.5
$ ./ies4linux
Welcome, greg! I'm IEs4Linux.
I can install IE 6, 5.5 and 5.0 for you easily and quickly.
You are just four 'enter's away from your IEs.

I'll ask you some questions now. Just answer y or n (default answer is the bold one)

IE 6 will be installed automatically.
Do you want to install IE 5.5 SP2 too? [ y / n ] y
.
.
.
IEs 4 Linux installations finished!

To run your IEs, type:
ie6
ie55
ie5

You can read more about this feature on my Internet Explorer with ActiveX on Linux page. It goes into a little more detail about using IE on Linux.


Install Other Odds and Ends

Add MS TrueType Fonts (TTF)

Many people will find it handy to have MS TrueType fonts available to make sure many websites render correctly. You can download the latest RPM from http://www.mjmwired.net/resources/mjm-fedora-fc5.html#ttf and install it as follows:
#  wget --referer=http://www.mjmwired.net/resources/mjm-fedora-fc6.html \
http://www.mjmwired.net/resources/files/msttcorefonts-2.0-1.noarch.rpm
# rpm -ihv msttcorefonts-2.0-1.noarch.rpm
# service xfs restart

Turn off the ANNOYING Spatial Nautilus Behavior

I don't know if it's the worst feature of Fedora but it's definitely in the top 5. You can get the old, saner behavior by bringing up Nautilus, choosing Edit -> Preferences, and selecting the Behavior tab. Near the top, find the option Always open in browser windows and make sure it is checked.

Other Handy Utilities

Here are a few other tools that aren't installed by default but a lot of people find handy:
# yum -y install bittorrent-gui gnomebaker testdisk thunderbird \
audacity-nonfree screen cups-pdf
audacity-nonfree - A version of the excellent Audacity sound editor which includes MP3 support
bittorrent-gui - Simple Gnome based BitTorrent client
cups-pdf - Add-on to CUPS which creates a PDF Printer which you can use to print any document in PDF format. The file is written to your Desktop.
gnomebaker - GTK based CD/DVD burning utility
screen - If you do a lot with the command line you'll find screen invaluable
testdisk - Two command line utilities to recover lost partitions and undelete files on FAT filesystems. VERY handy for undeleting files on flash memory cards.
thunderbird - Excellent E-mail client that complements Firefox

Other Useful Resources

I've tried not to just copy other people's tips, so I've included a list of other tips and tricks I've found to be useful; there should be little or no overlap.

FedoraForum - Linux Support Community - This is now the official way to get community support for the Fedora Linux system. There is no official Red Hat mailing list for any version of Fedora any more.

Mauriat Miranda's FC6 Installation Guide - Great guide that goes into more depth of selecting options during the installation process. This is also the source of the MS fonts RPM.

Fedora Core 5 Linux Installation Notes - Another great Fedora installation guide. This guide goes into some server related features rather than just desktop features.

Using Linux and Bluetooth DUN on the Treo 650 - A very nice guide to using a Treo 650 phone as a modem with your Linux based PC. It works great for me with one change. Do NOT uncomment the line encrypt enable; as it just won't work for me with encryption enabled with a D-Link DBT-120 and a Treo 650 phone.

Fedora Multimedia Installation HOWTO - I discovered this after I wrote this guide. It goes into more detail than mine, so it's a great resource.

The Unofficial Fedora FAQ - Another great guide that should answer most general questions about Fedora. Fedora Core 5 doesn't seem to be addressed there yet but most answers are the same for both FC4 and FC5.

This Fedora Core 6 Tips & Tricks translated into Italian - Thanks to Guido for translating this guide into Italian. Please contact me if you wish to translate this guide into other languages.

Fedora fc5 on EasyLinux.info - Yet another guide. The thing I love about Linux is that you can solve any problem a number of different ways. That includes these tips guides, everyone has a different way. Different strokes for different folks.

Wednesday, September 19, 2007

Tip/Trick: Optimizing ASP.NET 2.0 Web Project Build Performance with VS 2005

This post covers how to best optimize build performance with Visual Studio 2005 when using web projects. If you are experiencing slow builds, or want to learn how to speed them up, please read on.

Quick Background on VS 2005 Web Site Project and VS 2005 Web Application Project options

VS 2005 supports two project-model options: VS 2005 Web Site Projects and VS 2005 Web Application Projects.

VS 2005 Web Site Projects were built-in with the initial VS 2005 release, and provide a project-less based model for doing web development that uses that same dynamic compilation system that ASP.NET 2.0 uses at runtime. VS 2005 Web Application Projects were released as a fully supported download earlier this spring, and provide a project model that uses a MSBuild based build system that compiles all code in a project into a single assembly (similar to VS 2003 -- but without many of the limitations that VS 2003 web projects had with regard to FrontPage Server Extensions, IIS dependencies, and other issues). To learn more about VS 2005 Web Application Projects, please review the tutorials I've published on my http://webproject.scottgu.com web-site. Note that VS 2005 Web Application Project support will be included in VS 2005 SP1 (so no additional download will be required going forward).

Both the VS 2005 Web Site Project option and the VS 2005 Web Application Project option will continue to be fully supported going forward with future Visual Studio releases. What we've found is that some people love one option, while disliking the other, and vice-versa. From a feature perspective there is no "one best option" to use - it really depends on your personal preferences and team dynamics as to which will work best for you. For example: a lot of enterprise developers love the VS 2005 Web Application option because it provides a lot more build control and team integration support, while a lot of web developers love the VS 2005 Web Site model because of its "just hit save" dynamic model and flexibility.

Two articles you might find useful in deciding which works best for you are this MSDN whitepaper, which includes some comparisons between the two models, and Rick Strahl's Web Application Projects and Web Deployment Projects are Here article, which provides a good discussion of the pros and cons of the different options.

To migrate from the VS 2005 Web Site Project model to the VS 2005 Web Application Project model, please follow this C# or VB tutorial that walks through the steps for how to do so.

So Which Project Option Builds Faster?

When doing full builds of projects, the VS 2005 Web Application Project option will compile projects much faster than the VS 2005 Web Site Project option. By "full build" I mean cases where every class and page in a project is being compiled and re-built - either because you selected a "Rebuild" option within your "build" menu, or because you modified code within a dependent class library project or in the /app_code directory and then hit "build" or "ctrl-shift-b" to compile the solution.

There are a few reasons why the VS 2005 Web Application Project ends up being significantly faster than Web Site Projects in these "full rebuild" scenarios. The main reason is that (like VS 2003), the VS 2005 Web Application Project option only compiles your page's code-behind code and other classes within your project. It does not analyze or compile the content/controls/in-line code within your .aspx pages -- which means it does not need to parse those files. On the downside this means that during compilation it will not check for errors in those files (unlike the VS 2005 Web Site Project option which will identify any errors there). On the positive side it makes compilations much faster.

So does this mean that you should always use the VS 2005 Web Application Project option to get the fastest build times with large projects? No -- not necessarily. One nice feature that you can enable with the VS 2005 Web Site Project option is support for "on demand compilation". This saves you from having to re-build an entire project whenever dependent changes are made -- instead you can re-build just the pages you are working on, on demand. This can lead to significant build performance improvements for your solution, and can give you a very nice workflow when working on very large projects. I would definitely recommend using this option if you want to improve your build performance while retaining the flexibility of the web-site model.

The below sections provide specific tutorials for both the VS 2005 Web Site Project Model and the VS 2005 Web Application Project Model on optimization techniques -- including the "on demand compilation" build option I described above.

Specific Tips/Tricks for Optimizing VS 2005 Web Site Project Build Times

When using the VS 2005 Web Site Project model, you can significantly improve build performance times by following these steps:

1) Verify that you are not suffering from an issue I call "Dueling Assembly References". I describe how to both detect and fix this condition in this blog post. If you are ever doing a build and see the compilation appear to pause in the "Validating Web Site" phase of compilation (meaning no output occurs in the output window for more than a few seconds), then it is likely that you are running into this problem. Use the techniques outlined in this blog post to fix it.

2) Keep the number of files in your /app_code directory small. If you end up having a lot of class files within this directory, I'd recommend you instead add a separate class library project to your VS solution and move these classes within that instead since class library projects compile faster than compiling classes in the /app_code directory. This isn't usually an issue if you just have a small number of files in /app_code, but if you have lots of directories or dozens of files you will be able to get speed improvements by moving these files into a separate class library project and then reference that project from your web-site instead. One other thing to be aware of is that whenever you switch from source to design-view within the VS HTML designer, the designer causes the /app_code directory to be compiled before the designer surface loads. The reason for this is so that you can host controls defined within /app_code in the designer. If you don't have an /app_code directory, or only have a few files defined within it, the page designer will be able to load much quicker (since it doesn't need to perform a big compilation first).

3) Enable the on-demand compilation option for your web-site projects. To enable this, right-click on your web-site project and pull up the project properties page. Click the "Build" tab on the left to pull up its build settings. Within the "Build" tab settings page change the F5 Start Action from "Build Web Site" to either the "Build Page" or "No Build" option. Then make sure to uncheck the "Build Web site as part of solution" checkbox:

When you click ok to accept these changes you will be running in an on-demand compilation mode. What this means (when you select the "Build Page" option in the dialog above) is that when you edit a page and then hit F5 (run with debugging) or Ctrl-F5 (run without debugging) the solution will compile all of the class library projects like before, then compile the /app_code directory and Global.asax file, and then instead of re-verifying all pages within the web-site it will only verify the current page you are working on, and any user controls that the page references. With large (and even medium) projects with lots of pages, this can obviously lead to major performance wins. Note that ASP.NET will automatically re-compile any other page or control you access at runtime -- so you will always have an up-to-date and current running application (you don't need to worry about old code running). You can optionally also use the "No Build" option to by-pass page-level validation in the IDE, which obviously speeds up the entire process much further (I'd recommend giving both options a try to see which you prefer).

By deselecting the "Build Web site as part of solution" checkbox, you will find that the Ctrl-Shift-B keystroke (which builds the solution) will continue compiling all class library projects, but will not re-build all pages within your web-site project. You will still get full intellisense support in your pages in this scenario - so you won't lose any design-time support. You will also continue to get warning/error squiggles in code/class when they are open. If you want a way to force a re-build to occur on pages not open, or across all pages within the web-site, you can use the "Build Page" or "Build Web Site" menu options within the "Build" menu of Visual Studio:

This gives you control as to which pages on your site you want to verify (and when) - and can significantly improve build performance. One trick I recommend is adding a new shortcut keystroke to your environment so you can quickly invoke the "Build Page" menu option without ever having to use a mouse/menu for this. You can do this by selecting the Tools->Customize menu item, and then clicking the "Keyboards" button on the bottom-left of the customize dialog. This will bring up a dialog box that allows you to select the VS Build.BuildPage command and associate it with any keystroke you want:

Once you do this, you can type "Ctrl-Shift-P" (or any other keystroke you set) on any page to cause VS to compile any modified class library project (effectively the same thing that Ctrl-Shift-B does), then verify all classes within the /app_code directory, and then re-build just the page or user control (and any referenced master pages or user controls it uses) that you are working on within the project.

Once the above steps are applied, you should find that your build performance and flexibility are much improved - and that you have complete control over when builds happen.

Specific Tips/Tricks for Optimizing VS 2005 Web Application Project Build Times

If you are using the VS 2005 Web Application project option, here are a few optimizations you might want to consider:

1) If you have a very large project, or are working on an application with many other developers, you might want to consider splitting it up into multiple "sub-web" projects. I wouldn't necessarily recommend this for performance reasons (unless you have thousands and thousands of pages it probably doesn't make a huge difference), but it can sometimes make it easier to help manage a large project. Please read this past blog-post of mine on creating sub-web projects to learn how to use this.

2) Consider adding a VS 2005 Web Deployment project to your solution for deep verification. I mentioned above that one downside of using the VS 2005 Web Application Project option was that it only compiled the code-behind source code of your pages, and didn't do a deeper verification of the actual .aspx markup (so it will miss cases where you have a mis-typed tag in your .aspx markup). This provides the same level of verification support that VS 2003 provided (so you aren't losing anything from that), but not as deep as the Web Site Project option. One way you can still get this level of verification with VS 2005 Web Application Projects is to optionally add a VS 2005 Web Deployment Project into your solution (web deployment projects work with both web-site and web-application solutions). You can configure this to run only when building "release" or "staging" builds of your solution (to avoid taking a build hit at development time), and use it to provide a deep verification of both your content and source code prior to shipping your app.

Common Tips/Tricks for Optimizing any VS 2005 Build Time

Here are a few things I recommend checking anytime you have poor performance when building projects/solutions (note: this list will continue to grow as I hear new ones - so check back in the future):

1) Watch out for Virus Checkers, Spy-Bots, and Search/Indexing Tools

VS hits the file-system a lot, and obviously needs to reparse any file within a project that has changed the next time it compiles. One issue I've seen reported several times are cases where virus scanners, spy-bot detectors, and/or desktop search indexing tools end up monitoring a directory containing a project a little too closely, and continually change the timestamps of these files (they don't alter the contents of the file - but they do change a last-touched timestamp that VS also uses). This then causes a pattern of: you make a change, rebuild, and then in the background the virus/search tool goes in and re-checks the file and marks it as altered - which causes VS to have to re-build it again. Check for this if you are seeing build performance issues, and consider excluding the directories you are working on from being scanned by other programs. I've also seen reports of certain Spybot utilities causing extreme slowness with VS debugging - so you might want to verify that you aren't having issues with those either.

2) Turn off AutoToolboxPopulate in the Windows Forms Designer Options

There is an option in VS 2005 that will cause VS to automatically populate the toolbox with any controls you compile as part of your solution. This is a useful feature when developing controls since it updates them when you build, but I've seen a few reports from people who find that it can cause VS to end up taking a long time (almost like a hang) in some circumstances. Note that this applies both to Windows Forms and Web Projects. To disable this option, select the Tools->Options menu item, and then unselect the Windows Forms Designer/General/AutoToolboxPopulate checkbox option (for a thread on this see: http://forums.asp.net/1108115/ShowPost.aspx).

3) Examine which 3rd party packages are running in Visual Studio

There are a lot of great 3rd party VS packages that you can plug into Visual Studio. These deliver big productivity wins, and offer tons of features. Occasionally I've seen issues where performance or stability is being affected by them though. This is often true in cases where an older version (or beta) of one of these packages is being used (always keep an eye out for when a manufacturer updates them with bug-fixes). If you are seeing issues with performance or stability, you might want to look at trying a VS configuration where you uninstall any additional packages to see if this makes a difference. If so, you can work with the 3rd party manufacturer to identify the issue.

Visual Basic Build Performance HotFix

The Visual Basic team has released several hotfixes for compilation performance issues with large VB projects. You can learn how to obtain these hotfixes immediately from this blog post. The VB team also has a direct email address -- vbperf@microsoft.com -- that you can use to contact them directly if you are running into performance issues.

Hope this helps,

Scott
original link: http://weblogs.asp.net/scottgu/archive/2006/09/22/

Monday, September 17, 2007

Dependency injection in PHP5

Several months ago I started using the Spring Framework. That is how I came across dependency injection. As my programming skills mostly originated from web development with PHP, I realized I had missed a great technique which is common in Java land but rare in PHP. I ended up researching dependency injection libraries for PHP. There was a port of Pico Container by Pawel Kozlowski, who gave a talk on it recently, and that was almost it (some bright minds from #php.thinktank had their own solutions for this).



However, all these solutions made you define your dependencies within the code of your application, and were quite minimalistic. That made up my mind: I started working on Garden, a dependency injection container for PHP5 that uses XML definitions with syntax as similar as possible to Spring's. Today Garden is packed up and ready for use. Here is a sample application context that injects a collar on a dog:



<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//GARDEN//DTD BEAN//EN" "garden-beans.dtd">
<beans default-lazy-init="true">

   <bean id="dog" class="Dog" file="ext/Dog.php">
     <property name="collar">
       <ref local="dogCollar"/>
     </property>
   </bean>

   <bean id="dogCollar" class="Collar" file="ext/Dog/Collar.php">
     <property name="spiked" value="true"/>
   </bean>

</beans>



The contents of the Dog and Collar classes:



class Dog
{
    private $collar;

    public function setCollar($collar)
    {
        $this->collar = $collar;
    }

    public function getCollar()
    {
        return $this->collar;
    }
}

class Collar
{
    private $spiked;

    public function setSpiked($spiked)
    {
        $this->spiked = $spiked;
    }

    public function getSpiked()
    {
        return $this->spiked;
    }
}



And finally, a piece of code to access the beans:



require_once 'Garden.php';

Garden::initApplicationContext('example.xml');
$ctx = Garden::getApplicationContext();

$dog = $ctx->getBean('dog');
$collar1 = $dog->getCollar();
$collar2 = $ctx->getBean('dogCollar');

var_dump($dog, $collar1, $collar1 === $collar2);



The output of dump:



object(Dog)#1 (1) {
  ["collar:private"]=>
  object(Collar)#2 (1) {
    ["spiked:private"]=>
    bool(true)
  }
}
object(Collar)#2 (1) {
  ["spiked:private"]=>
  bool(true)
}
bool(true)
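To make the wiring concrete, here is a plain-PHP sketch of what the container effectively does for this context: instantiate each bean once, call the setters named in the `<property>` elements, and hand out the shared instances. This is illustrative only, not Garden's actual implementation:

```php
<?php
// The Dog and Collar classes from the example above.
class Dog
{
    private $collar;
    public function setCollar($collar) { $this->collar = $collar; }
    public function getCollar() { return $this->collar; }
}

class Collar
{
    private $spiked;
    public function setSpiked($spiked) { $this->spiked = $spiked; }
    public function getSpiked() { return $this->spiked; }
}

// What the container effectively does when processing the XML definitions:
// create the "dogCollar" bean, set its property, then inject it into "dog".
$dogCollar = new Collar();
$dogCollar->setSpiked(true);

$dog = new Dog();
$dog->setCollar($dogCollar);

// Both lookups see the same shared instance, as in the var_dump above.
var_dump($dog->getCollar() === $dogCollar); // bool(true)
```

The container adds lazy initialization, file loading, and single-instance bookkeeping on top of this, but the injected result is the same.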



This example shows only basic setter injection; the container also supports constructor argument injection, nested bean definitions, combined XML definition resources, aliasing, and almost all of the functionality found here. I hope this will be helpful for those of you who work at large scale and trust that PHP applications can be scalable, easily maintainable, and well designed. Make your applications blossom!

.NET or PHP Framework??

Original link : http://junal.wordpress.com/2007/07/28/net-or-php-framework/

I have experience programming with both of these frameworks. I did a few projects during my university years using the .NET Framework; I remember doing a big project for an ad firm called Asiatic Ltd. When I joined Alliance Creation, I was assigned to a garments software project developed with a PHP framework called CakePHP. I had thought I would be a .NET programmer, but for my profession I had to learn PHP and code with it for my assigned project. I learned it fast. Doesn't that suggest PHP is easy to learn? Well, it is easy. Although the language has some weird syntax, it was still fun to learn an open source programming language. I had the freedom to create things my own way, which I didn't get in the .NET Framework; it's kind of like being in a cage if you're a .NET developer. You will get freedom in PHP for sure. I did have some reservations while using PHP: a few things you have to code by hand, and that takes a lot of time. In .NET you don't have to do that; you simply have controls that do it for you. Just drag and drop and use them (i.e. DataView, DataGrid, Calendar, and so on). In .NET programming, much of what you have to know is how to use the components that are integrated into the package. Another good thing about .NET is the integration: you can use C#, VB.NET, or J# library code in your ASP.NET application and vice versa, so you can get help from other languages' libraries. You can also develop standalone software with .NET; on the other hand, you cannot develop standalone software with a PHP framework.

Now, after using both platforms, I'm a little confused about which framework we should use for development! There are a few things I want to discuss here. It's totally up to you which framework you want to use. From my experience, .NET is an easy set of programming tools that gives you a very programmer-friendly environment. It has a very strong IDE, plus controls and functionality that make programming easier and save you a lot of time. Let's see what .NET offers you: interoperability, a common runtime engine, language independence, a base class library, simplified deployment, and security.

So what are the disadvantages of this framework? First, you need a reasonably powerful system, because it requires a high configuration to run smoothly. You have to spend money [at least $200] for the .NET package. To distribute a .NET application, the client needs to install the .NET Framework, and .NET applications don't run on all platforms.

Now let's see what kind of advantages we have with a PHP framework: speed, stability, security, simplicity.

It's a very light programming language. You will always find a way to solve a critical problem. You have the freedom to use your own ideas and approaches; everything is open to you. If you are not a lazy programmer, you won't get bored of coding.

The final question is which framework we should use. In my view, if you are working on a web application, don't use ASP.NET; you are better off with a PHP framework (i.e. CakePHP or CodeIgniter). These frameworks are improving their resources day by day. I have seen some ASP.NET sites that are really slow. Recently I had to work on a site that was mainly a clone site; the owner wanted to move to PHP because the site was developed with ASP.NET and it was really slow. After the redevelopment I found the site was way faster than before, and I can tell you the owner had spent a lot of money on that ASP.NET site. In the end, if you don't want to strain your brain that much, or you don't want to be an enthusiastic programmer, then use .NET. It's easier and saves time: you will enjoy all the drag-and-drop options and features, your lazy hand won't get tired, and you will get huge community support within 30 minutes.

.Php Framework: the .Net Framework clone in php

Downloads

.Php Framework - .Php Framework and a sample page



Introduction

I'm a .NET developer; I like OOP and the structure of the .NET Framework.
In the last few days I had to study PHP and write some web code with it.
PHP strikes me as a language with potential, but the syntax is awful, the function names are all confusing, and it is often used to write "spaghetti code".

This gave me an idea about PHP usage: I copied the classes (OK, not all the classes :) ) of the .NET Framework into PHP.
This lets me and other people use familiar classes with a better syntax and OOP style.

For example, an array in PHP maps to a Dictionary in .NET:
// php
// initialize the array with one item key/value
$my_array = array("color1" => "red");

// add others items
$my_array["color2"] = "blue";
$my_array["color3"] = "green";
// color2 was added?
$color_exists = array_key_exists("color2", $my_array);

// .Php Fw
// function to add the namespace
using("System.Collections");

// initialize the list and add key/value items
$list = new phpDictionary();
$list->Add("color1", "red");
$list->Add("color2", "blue");
$list->Add("color3", "green");
$color_exists = $list->ContainsKey("color2");
As you can see the .Php Framework code is similar to the .Net Framework code:
// C# .Net Fw
// directive to import the namespace
using System.Collections.Generic;

// initialize the dictionary and add key/value items
Dictionary<string, string> list = new Dictionary<string, string>();
list.Add("color1", "red");
list.Add("color2", "blue");
list.Add("color3", "green");
bool color_exists = list.ContainsKey("color2");

.Php Fw Guidelines


Of course, PHP syntax is limited, so the copy of a class in PHP may be a little different from the .NET one, but by following some guidelines the result is good.

  • Properties

    For PHP a property is just a field of the class, so I split each property into two methods: a get and a set method.
    • To get the value, the method has the plain name of the property.
    • To set the value, the method has a "Set" prefix and takes the new value as a parameter.
    i.e. for the Text property:
    // The read way of a "property"
    function Text()
    {
        return $this->text;
    }

    // The writing way of a "property"
    function SetText($text)
    {
        $this->text = $text;
    }
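    Following this convention, a minimal complete class might look like the sketch below (a hypothetical phpLabel class, written for illustration; it is not part of the actual framework):

    ```php
    <?php
    // Hypothetical example of the property convention: a getter with the
    // plain property name and a setter with the "Set" prefix.
    class phpLabel
    {
        private $text = "";

        // The read way of the "Text" property.
        public function Text()
        {
            return $this->text;
        }

        // The write way of the "Text" property.
        public function SetText($text)
        {
            $this->text = $text;
        }
    }

    $label = new phpLabel();
    $label->SetText("Hello");
    echo $label->Text(); // prints "Hello"
    ```

    Note that the setter takes no type hint: PHP5 only supports class and array type hints, not scalar types like string.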

Reference of .Php Fw


  • Next Release

    I hope to write something for System.Data, and I hope that someone can help me fix bugs and enhance the whole framework :)
  • Release 1.0

    Namespaces (incomplete! Not all the namespace classes are rewritten :) )
    + System
    + System.Collection
    + System.Diagnostics
    + System.IO
    + System.Reflection
    + System.Threading (not working: php does not support threading... :( )
    + System.Web
    + System.Web.UI

Notes

In System.Web.* I added nearly ALL the web controls of ASP.NET: you can code a complete web page.
In the demo PHP file included in the download package you can see the code to do this.
You can serve it as http://localhost and view it.

PHP is fun, and now it's even better. If you want to help me, or if you have some questions, don't be shy and just ask me ;)

Original Link : http://www.devbox4.net/?q=node/36

Differences Between Visual Basic .NET and Visual C# .NET

Because of the previous differences between Visual Basic and C/C++, many developers make incorrect assumptions about the capabilities of Visual Basic .NET. Many Visual Basic developers think that Visual C# is a more powerful language than Visual Basic; in other words, they assume that you can do many things in Visual C# that you cannot do in Visual Basic .NET, just as there are many things that you can do in C/C++ but cannot do in Microsoft Visual Basic 6.0 or earlier. This assumption is incorrect.

Although there are differences between Visual Basic .NET and Visual C# .NET, both are first-class programming languages that are based on the Microsoft .NET Framework, and they are equally powerful. Visual Basic .NET is a true object-oriented programming language that includes new and improved features such as inheritance, polymorphism, interfaces, and overloading. Both Visual Basic .NET and Visual C# .NET use the common language runtime. There are almost no performance issues between Visual Basic .NET and Visual C# .NET. Visual C# .NET may have a few more "power" features such as handling unmanaged code, and Visual Basic .NET may be skewed a little toward ease of use by providing features such as late binding. However, the differences between Visual Basic .NET and Visual C# .NET are very small compared to what they were in earlier versions.

The "Differences Between Microsoft Visual Basic .NET and Microsoft Visual C# .NET" white paper describes some of the differences between Visual Basic .NET and Visual C# .NET. However, remember that the .NET Framework is intended to be language independent. When you must select between Visual Basic .NET and Visual C# .NET, decide primarily based on what you already know and what you are comfortable with. It is easier for Visual Basic 6.0 developers to use Visual Basic .NET and for C++/Java programmers to use Visual C# .NET. The existing experience of a programmer far outweighs the small differences between the two languages.

No matter which language you select based on your personal preference and past experience, both languages are powerful developer tools and first-class programming languages that share the common language runtime in the .NET Framework.

About Me

Ordinary People that spend much time in the box