Full-Text Search Optimization Tips


Here are fourteen little-known tips that you can use to ensure your SQL Server Full-Text Search queries perform in the most efficient manner possible.

1. Set the virtual memory size to at least 3 times the physical memory installed in the computer, and set the SQL Server "max server memory" server configuration option to half the virtual memory size setting (1.5 times the physical memory).

Because working with full-text search is very resource intensive, you should have enough physical and virtual memory.

2. Set the "Maximize Throughput for Network Applications" option.

This can increase full-text search performance as Windows NT will allocate more RAM to SQL Server than to its file cache.
To set this option, you can do the following:
1. Double-click the Network icon in Control Panel;
2. Click the Services tab;
3. Click Server to select it, and then click the Properties button;
4. Click Maximize Throughput for Network Applications, and then click OK;
5. Restart the computer.


3. Ensure full-text index population takes place during periods of low database access.

Because full-text index population takes some time, these updates should be scheduled during CPU idle time and slow production periods.

4. Assign a very large table (a table that has millions of rows) to its own full-text catalog.

This can improve performance and simplify administration and monitoring.

5. You can boost the resource usage for the full-text search service (increase the "System Resource Usage" option for the full-text search service).

Run SQL Server Enterprise Manager, expand a server group, and then expand a server. Then expand "Support Services", right-click the "Full-Text Search" and select "Properties". Finally, choose the "Performance" tab and increase the "System Resource Usage" option for the full-text search service.
Note: Don't set the "System Resource Usage" option to the "Dedicated" value (right border of the "System Resource Usage" slider bar) as it can negatively affect your SQL Server's performance.

6. Reduce the full-text unique key size.

To create a full-text index, the table to be indexed must have a unique index. Try to select a numeric column as the full-text unique key to increase the speed of full-text population. If the table to be indexed does not have a numeric unique index, consider creating a numeric unique index.
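For example, a numeric unique index can be created and then used as the full-text key when the table is registered for full-text indexing. This is only a sketch, assuming the sp_fulltext_* system procedures of SQL Server 7.0/2000; the table, column, catalog, and index names are hypothetical.
-- Hypothetical table dbo.Articles with an INT column ArticleID
CREATE UNIQUE INDEX UIX_Articles_ArticleID ON dbo.Articles (ArticleID)
GO
-- Register the table with a full-text catalog, using the numeric unique index as the key
EXEC sp_fulltext_table 'dbo.Articles', 'create', 'ArticlesCatalog', 'UIX_Articles_ArticleID'
EXEC sp_fulltext_column 'dbo.Articles', 'Body', 'add'
EXEC sp_fulltext_table 'dbo.Articles', 'activate'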

7. If you have several physical disks, create several Pagefile.sys files so that each Pagefile.sys file will be placed on its own physical disk.

Spreading paging files across multiple disk drives and controllers improves performance on most disk systems because multiple disks can process input/output requests concurrently.

8. If you use SQL Server 2000, consider using the Change Tracking with scheduled or background update index option versus Incremental Population.

The Change Tracking with scheduled propagation should be used when CPU and memory can be used at scheduled times and changes between the scheduled times are not significant.
The Change Tracking with background update index option should be used when CPU and memory are available and the value of an up-to-date index is high.
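For illustration, change tracking with background index updates can be enabled per table with sp_fulltext_table. This is a sketch only, assuming a table already registered for full-text indexing under the hypothetical name dbo.Articles.
EXEC sp_fulltext_table 'dbo.Articles', 'start_change_tracking'
EXEC sp_fulltext_table 'dbo.Articles', 'start_background_updateindex'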

9. Consider using a full population when a large percentage of records were changed or added at once.


10. If you work with SQL Server 7.0, consider using an incremental population when only a small percentage of records has been changed or added at once.

Using an incremental population instead of a full population decreases the population time and results in good performance benefits.
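As a sketch (the catalog and table names are hypothetical), a full catalog population or an incremental table population can be started with the sp_fulltext_catalog and sp_fulltext_table system procedures; note that an incremental population requires the indexed table to have a timestamp column.
-- Full population of the whole catalog
EXEC sp_fulltext_catalog 'ArticlesCatalog', 'start_full'
-- Incremental population of a single table
EXEC sp_fulltext_table 'dbo.Articles', 'start_incremental'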

11. If you have several physical disks, place the database files separately from the full-text catalog files.

This can produce speed improvements for full-text queries as multiple disks can process input/output requests concurrently.

12. Upgrade to SQL Server 2000 in order to enhance full-text search performance, especially if you need to work with full-text search in a clustered environment.

Full-text search is not available in a SQL Server 7.0 clustered environment.

13. If you work with SQL Server 2000, consider using the new top_n_by_rank parameter with CONTAINSTABLE or FREETEXTTABLE.

It can be used to restrict the number of rows returned. The top_n_by_rank parameter specifies that only the n-highest ranked matches, in descending order, will be returned.
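A minimal sketch that returns only the ten highest-ranked matches (the table, columns, and key column are hypothetical):
SELECT a.Title, k.[RANK]
FROM Articles AS a
INNER JOIN CONTAINSTABLE(Articles, Body, 'performance', 10) AS k
ON a.ArticleID = k.[KEY]
ORDER BY k.[RANK] DESC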

14. Try to use the CONTAINS or FREETEXT predicates instead of the CONTAINSTABLE or FREETEXTTABLE rowset functions, whenever possible.

Because qualifying rows returned by the CONTAINSTABLE or FREETEXTTABLE rowset functions must be explicitly joined with the rows in the original SQL Server table, the queries that use the CONTAINSTABLE and FREETEXTTABLE functions are more complex than those that use the CONTAINS and FREETEXT predicates.
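For example, when you only need to filter rows and do not need the rank value, the simpler predicate form is enough (the table and column names are hypothetical):
SELECT Title
FROM Articles
WHERE CONTAINS(Body, 'performance')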


SQL Server Stored Procedures Optimization Tips


Here are twelve helpful tips for ensuring that you've constructed your SQL Server stored procedures to perform in the most efficient manner possible.

1. Use stored procedures instead of heavy-duty queries.
This can reduce network traffic as your client will send to the server only the stored procedure name (perhaps with some parameters) instead of all the text from a large heavy-duty query. Stored procedures can be used to enhance security and conceal underlying data objects as well. For example, you can give the users permission to execute the stored procedure to work with restricted sets of columns and data.

2. Include the SET NOCOUNT ON statement in your stored procedures to stop the message indicating the number of rows affected by a Transact-SQL statement.
This can reduce network traffic due to the fact that your client will not receive the message indicating the number of rows affected by a Transact-SQL statement.

3. Call stored procedures using their fully qualified name.
The complete name of an object consists of four identifiers: the server name, database name, owner name, and object name. An object name that specifies all four parts is known as a fully qualified name.
Using fully qualified names eliminates any confusion about which stored procedure you want to run and can boost performance, because SQL Server has a better chance of reusing a stored procedure's execution plan if it was executed using its fully qualified name.
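A minimal sketch using a database-qualified name on the local server (the database and procedure names are hypothetical):
EXEC MyDatabase.dbo.usp_GetOrders   -- qualified name, preferred
-- rather than
EXEC usp_GetOrders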

4. Consider returning the integer value as a RETURN statement instead of returning an integer value as part of a recordset.
The RETURN statement exits unconditionally from a stored procedure, so the statements following RETURN are not executed. Though the RETURN statement is generally used for error checking, you can use this statement to return an integer value for any other reason. Using the RETURN statement can boost performance because SQL Server will not create a recordset.
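As an illustration, a status value can be returned with RETURN and captured with EXEC @variable = procedure. This is only a sketch; the procedure, table, and column names are hypothetical.
CREATE PROCEDURE dbo.usp_CustomerExists @CustomerID INT
AS
SET NOCOUNT ON
IF EXISTS (SELECT 1 FROM dbo.Customers WHERE CustomerID = @CustomerID)
    RETURN 1
RETURN 0
GO

DECLARE @Result INT
EXEC @Result = dbo.usp_CustomerExists @CustomerID = 42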

5. Don't use the prefix "sp_" in the stored procedure name if you need to create a stored procedure to run in a database other than the master database.
The prefix "sp_" is used in the system stored procedures names. Microsoft does not recommend using the prefix "sp_" in user-created stored procedure names as SQL Server always looks for a stored procedure beginning with "sp_" in the following order: the master database, the stored procedure based on the fully qualified name provided, followed by the stored procedure using dbo as the owner (if one is not specified).
When you have the stored procedure with the prefix "sp_" in a database other than master, the master database is always checked first. If the user-created stored procedure has the same name as a system stored procedure, the user-created stored procedure will never be executed.

6. Use the sp_executesql stored procedure instead of the EXECUTE statement.
The sp_executesql stored procedure supports parameters. So, using the sp_executesql stored procedure instead of the EXECUTE statement improves readability of your code when many parameters are used.
When you use the sp_executesql stored procedure to execute a Transact-SQL statement that will be reused many times, the SQL Server query optimizer will reuse the execution plan it generates for the first execution when the change in parameter values to the statement is the only variation.
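A minimal parameterized sketch (the table and column names are hypothetical):
DECLARE @SQL NVARCHAR(200)
SET @SQL = N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustID'
EXEC sp_executesql @SQL, N'@CustID INT', @CustID = 42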

7. Use the sp_executesql stored procedure instead of temporary stored procedures.
Microsoft recommends using temporary stored procedures when connecting to earlier versions of SQL Server that do not support the reuse of execution plans. Applications connecting to SQL Server 7.0 or SQL Server 2000 should use the sp_executesql system stored procedure instead of temporary stored procedures in order to have a better chance of reusing the execution plans.

8. If you have a very large stored procedure, try to break down the stored procedure into several sub-procedures, and call them from a controlling stored procedure.
The stored procedure will be recompiled when any structural changes are made to a table or view referenced by the stored procedure (an ALTER TABLE statement, for example), or when a large number of INSERTS, UPDATES or DELETES are made to a table referenced by a stored procedure. So, if you break down a very large stored procedure into several sub-procedures, there's a chance that only a single sub-procedure will be recompiled, while other sub-procedures will not.

9. Try to avoid using temporary tables inside your stored procedures.
Using temporary tables inside stored procedures reduces the chance of reusing the execution plan.

10. Try to avoid using DDL (Data Definition Language) statements inside your stored procedure.
Using DDL statements inside stored procedures also reduces the chance of reusing the execution plan.

11. Add the WITH RECOMPILE option to the CREATE PROCEDURE statement if you know that your query will vary each time it is run from the stored procedure.
The WITH RECOMPILE option prevents reusing the stored procedure execution plan, so SQL Server does not cache a plan for this procedure and the procedure is always recompiled at run time. Using the WITH RECOMPILE option can boost performance if your query will vary each time it is run from the stored procedure, because in this case the wrong execution plan will not be used.
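A minimal sketch (the procedure, table, and column names are hypothetical):
CREATE PROCEDURE dbo.usp_SearchOrders @Status VARCHAR(20)
WITH RECOMPILE
AS
SET NOCOUNT ON
SELECT OrderID, OrderDate FROM dbo.Orders WHERE Status = @Status
GO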

12. Use SQL Server Profiler to determine which stored procedures have been recompiled too often.
To check if a stored procedure has been recompiled, run SQL Server Profiler and choose to trace the event in the "Stored Procedures" category called "SP:Recompile". You can also trace the event "SP:StmtStarting" to see at what point in the procedure it is being recompiled. When you identify these stored procedures, you can take corrective actions to reduce or eliminate the excessive recompilations.


T-SQL Optimization Tips


Here are fourteen little-known tips that you can use to ensure your Transact-SQL queries perform in the most efficient manner possible.

1. Try to restrict the query's result set by using the WHERE clause.

This can result in a performance benefit, as SQL Server will return to the client only particular rows, not all rows from the table(s). This can reduce network traffic and boost the overall performance of the query.

2. Try to restrict the query's result set by returning only particular columns from the table, not all of the table's columns.

This can result in a performance benefit as well, because SQL Server will return to the client only particular columns, not all the table's columns. This can reduce network traffic and boost the overall performance of the query.

3. Use views and stored procedures instead of heavy-duty queries.

This can reduce network traffic as your client will send to the server only the stored procedure or view name (perhaps with some parameters) instead of the text of a large, heavy-duty query. This also facilitates permission management, because you can restrict user access to table columns they should not see.

4. Whenever possible, try to avoid using SQL Server cursors.

SQL Server cursors can result in some performance degradation in comparison with select statements. Try to use a correlated subquery or derived tables if you need to perform row-by-row operations.
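For instance, a row-by-row cursor update can often be replaced with a single set-based statement (the table and column names are hypothetical):
UPDATE o
SET o.CustomerName = c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID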

5. If you need to return a table's total row count, you can use an alternative to the SELECT COUNT(*) statement.

Because the SELECT COUNT(*) statement performs a full table scan to return the total row count, it can take an extremely long time for large tables. There is another way to determine the total row count of a table: the sysindexes system table. The ROWS column in sysindexes contains the total row count for each table in your database. So, you can use the following select statement instead of SELECT COUNT(*):
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2
This way, such queries can run several times faster.

6. Try to use constraints instead of triggers, whenever possible.

Constraints are much more efficient than triggers and can boost performance, so use constraints instead of triggers whenever possible.
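For example, a simple rule that might otherwise be enforced with a trigger can often be expressed as a CHECK constraint (the table, column, and constraint names are hypothetical):
ALTER TABLE dbo.Orders
ADD CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0)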

7. Use table variables instead of temporary tables.

Table variables require fewer locking and logging resources than temporary tables, so table variables should be used whenever possible. Note that table variables are available in SQL Server 2000 only, not in SQL Server 7.0.
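A minimal sketch (the table and column names are hypothetical):
DECLARE @RecentOrders TABLE (OrderID INT, OrderDate DATETIME)
INSERT INTO @RecentOrders (OrderID, OrderDate)
SELECT OrderID, OrderDate FROM dbo.Orders WHERE OrderDate >= '20130101'
SELECT OrderID, OrderDate FROM @RecentOrders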

8. Try to avoid the HAVING clause, whenever possible.

The HAVING clause is used to restrict the result set returned by the GROUP BY clause. When you use GROUP BY with the HAVING clause, the GROUP BY clause divides the rows into sets of grouped rows and aggregates their values, and then the HAVING clause eliminates undesired aggregated groups. In many cases, you can write your select statement so that it contains only WHERE and GROUP BY clauses, without the HAVING clause. This can improve the performance of your query.
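For example, a filter on a non-aggregated column can usually be moved from HAVING into WHERE so that rows are eliminated before grouping (the table and column names are hypothetical):
-- Preferred: filter rows before grouping
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
WHERE Status = 'Shipped'
GROUP BY CustomerID
-- Avoid: filter the groups after aggregation
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID, Status
HAVING Status = 'Shipped'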

9. Whenever possible, try to avoid using the DISTINCT clause.

Because using the DISTINCT clause will result in some performance degradation, you should use this clause only when it is absolutely necessary.

10. Include the SET NOCOUNT ON statement in your stored procedures to stop the message indicating the number of rows affected by a T-SQL statement.

This can reduce network traffic, as your client will not receive the message indicating the number of rows affected by a T-SQL statement.

11. Use select statements with the TOP keyword or the SET ROWCOUNT statement if you need to return only the first n rows.

This can improve performance of your queries, as a smaller result set will be returned. This can also reduce the traffic between the server and the clients.
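Both forms are sketched below (the table and column names are hypothetical); remember to reset SET ROWCOUNT to 0 afterwards:
SELECT TOP 10 OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate DESC
-- or:
SET ROWCOUNT 10
SELECT OrderID, OrderDate FROM dbo.Orders ORDER BY OrderDate DESC
SET ROWCOUNT 0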

12. Use the FAST number_rows query hint if you need to quickly return the first 'number_rows' rows.

You can quickly get the first 'number_rows' rows and start working with them while the query continues executing and produces its full result set.
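A minimal sketch using the FAST hint in the OPTION clause (the table and column names are hypothetical):
SELECT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate DESC
OPTION (FAST 10)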

13. Try to use UNION ALL statement instead of UNION, whenever possible.

The UNION ALL statement is much faster than UNION, because UNION ALL does not look for duplicate rows, while the UNION statement removes duplicates whether or not any exist.

14. Do not use optimizer hints in your queries.

Because the SQL Server query optimizer is very clever, it is highly unlikely that you can optimize your query by using optimizer hints; more often than not, this will hurt performance.


Get Time in Hour:Minute Format from a Datetime – Get Date Part Only from Datetime


I have seen scores of expert developers get perplexed with SQL Server when trying to extract only the time from a datetime value. Let us take a quick look at the solution.

SQL Server 2000/2005

SELECT
CONVERT(VARCHAR(8), GETDATE(), 108) AS HourMinuteSecond,  -- style 108 = hh:mi:ss
CONVERT(VARCHAR(10), GETDATE(), 101) AS DateOnly          -- style 101 = mm/dd/yyyy (10 characters)
GO


SQL Server 2008

SELECT
CONVERT(TIME, GETDATE()) AS HourMinuteSecond,
CONVERT(DATE, GETDATE()) AS DateOnly
GO

I hope the above solution is clear to you all.


Stored Procedure Optimization Tips


We will go over how to optimize a stored procedure by making simple changes to the code. Please note that there are many other tips, which we will cover in future articles.

  • Include SET NOCOUNT ON statement: With every SELECT and DML statement, SQL Server returns a message indicating the number of rows affected by that statement. This information is mostly helpful in debugging the code, but it is useless after that. By setting SET NOCOUNT ON, we can disable this extra message. For stored procedures that contain several statements or contain Transact-SQL loops, setting SET NOCOUNT to ON can provide a significant performance boost because network traffic is greatly reduced.
CREATE PROC dbo.ProcName
AS
SET NOCOUNT ON;
--Procedure code here
SELECT column1 FROM dbo.TblTable1
-- Reset SET NOCOUNT to OFF
SET NOCOUNT OFF;
GO
  • Use schema name with object name: The object name is fully qualified when used with the schema name. The schema name should be used with the stored procedure name and with all objects referenced inside the stored procedure. This helps in directly finding the compiled plan instead of searching the objects in other possible schemas before finally deciding to use a cached plan, if available. This process of searching for and deciding on a schema for an object leads to a COMPILE lock on the stored procedure and decreases the stored procedure's performance. Therefore, always refer to objects with their qualified names in the stored procedure, like:
SELECT * FROM dbo.MyTable -- Preferred method
-- Instead of
SELECT * FROM MyTable -- Avoid this method
--And finally call the stored procedure with qualified name like:
EXEC dbo.MyProc -- Preferred method
--Instead of
EXEC MyProc -- Avoid this method
  • Do not use the prefix “sp_” in the stored procedure name: If a stored procedure name begins with “sp_”, SQL Server first searches the master database and then the current session's database. Searching the master database causes extra overhead, and can even give a wrong result if a stored procedure with the same name is found in the master database.
  • Use IF EXISTS (SELECT 1) instead of (SELECT *): To check the existence of a record in another table, we use the IF EXISTS clause. The IF EXISTS clause returns True if any value is returned from the internal statement, whether it is a single value “1”, all columns of a record, or a complete recordset. The output of the internal statement is not used. Hence, to minimize the data for processing and network transfer, we should use “1” in the SELECT clause of the internal statement, as shown below:
IF EXISTS (SELECT 1 FROM sysobjects
WHERE name = 'MyTable' AND type = 'U')
  • Use the sp_executesql stored procedure instead of the EXECUTE statement.
    The sp_executesql stored procedure supports parameters. So, using the sp_executesql stored procedure instead of the EXECUTE statement improves the reusability of your code. The execution plan of a dynamic statement can be reused only if each and every character, including case, spaces, comments and parameters, is the same for two statements. For example, if we execute the batch below:
DECLARE @Query VARCHAR(100)
DECLARE @Age INT
SET @Age = 25
SET @Query = 'SELECT * FROM dbo.tblPerson WHERE Age = ' + CONVERT(VARCHAR(3),@Age)
EXEC (@Query)
If we execute the above batch again using a different @Age value, the execution plan for the SELECT statement created for @Age = 25 will not be reused. However, if we write the above batch as given below,
DECLARE @Query NVARCHAR(100)
SET @Query = N'SELECT * FROM dbo.tblPerson WHERE Age = @Age'
EXECUTE sp_executesql @Query, N'@Age int', @Age = 25
the compiled plan of this SELECT statement will be reused for different values of the @Age parameter. The reuse of the existing compiled plan will result in improved performance.
  • Try to avoid using SQL Server cursors whenever possible: Cursors use a lot of resources for overhead processing to maintain the current record position in a recordset, and this decreases performance. If we need to process records one by one in a loop, then we should use the WHILE clause. Wherever possible, we should replace the cursor-based approach with a set-based approach, because the SQL Server engine is designed and optimized to perform set-based operations very fast. Again, please note that a cursor is also a kind of WHILE loop.
  • Keep the transaction as short as possible: The length of a transaction affects blocking and deadlocking. Exclusive locks are not released until the end of the transaction. At higher isolation levels, shared locks are also held until the end of the transaction. Therefore, a lengthy transaction means locks held for a longer time, and locks held for a longer time turn into blocking. In some cases, blocking also escalates into deadlocks. So, for faster execution and less blocking, the transaction should be kept as short as possible.
  • Use TRY-CATCH for error handling: Prior to SQL Server 2005, error-handling code made up a big portion of the actual code, because an error check statement had to be written after every T-SQL statement. More code always consumes more resources and time. SQL Server 2005 introduced a new, simpler way to do the same thing; a usage sketch follows the syntax block below. The syntax is as follows:
BEGIN TRY
--Your t-sql code goes here
END TRY
BEGIN CATCH
--Your error handling code goes here
END CATCH
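As an illustration, here is a minimal sketch (the table and column names are hypothetical) that reports error details from the CATCH block using the ERROR_NUMBER() and ERROR_MESSAGE() functions:
BEGIN TRY
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage
END CATCH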


JQuery Select Header Checkbox When All Child Checkboxes Selected in Asp.net Gridview


Introduction:


In this article I will explain how to check/uncheck (select/deselect) all the checkboxes in a GridView with a header checkbox using jQuery in ASP.NET.

Description:

 
Now I will explain how to select/deselect the header checkbox when all child checkboxes are selected in a GridView using jQuery.
For that, first design your aspx page like this:
<html xmlns="http://www.w3.org/1999/xhtml">
<head id="Head1" runat="server">
<title>check uncheck all checkboxes in gridview using jquery</title>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js">
</script>
<script type="text/javascript">
$(document).ready(function () {
    // When the header checkbox is clicked, check/uncheck all child checkboxes
    $('[id$=chkHeader]').click(function () {
        $('[id$=chkChild]').prop('checked', this.checked);
    });
    // When a child checkbox is clicked, check the header checkbox only if all children are checked
    $('[id$=chkChild]').click(function () {
        if ($('[id$=chkChild]').length == $('[id$=chkChild]:checked').length) {
            $('[id$=chkHeader]').prop('checked', true);
        } else {
            $('[id$=chkHeader]').prop('checked', false);
        }
    });
});
</script>
</head>
<body>
<form id="form1" runat="server">
<div>
<asp:GridView ID="gvUserInfo" runat="server">
<HeaderStyle BackColor="#df5015" Font-Bold="true" ForeColor="White" />
<Columns>
<asp:TemplateField>
<HeaderTemplate>
<asp:CheckBox ID="chkHeader" runat="server" />
</HeaderTemplate>
<ItemTemplate>
<asp:CheckBox ID="chkChild"  runat="server"/>
</ItemTemplate>
</asp:TemplateField>
</Columns>
</asp:GridView>
</div>
</form>
</body>
</html>
Now add the following namespaces in the code-behind.
C# Code
using System;
using System.Data;
using System.Data.SqlClient;

After that, add the following code in the code-behind.
protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
BindGridview();
}
}
// This method is used to bind gridview from database
protected void BindGridview()
{
SqlConnection con = new SqlConnection(@"Data Source=ARYAN-PC\SQLEXPRESS;Integrated Security=true;Initial Catalog=SalmanTempDB");
con.Open();
SqlCommand cmd = new SqlCommand("select TOP 10 UserName,LastName,Location from UserInformation", con);
SqlDataAdapter da = new SqlDataAdapter(cmd);
DataSet ds = new DataSet();
da.Fill(ds);
gvUserInfo.DataSource = ds;
gvUserInfo.DataBind();
}
VB.NET Code
Imports System.Data
Imports System.Data.SqlClient
Partial Class VBCode
Inherits System.Web.UI.Page
Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
If Not IsPostBack Then
BindGridview()
End If
End Sub
' This method is used to bind gridview from database
Protected Sub BindGridview()
Dim con As New SqlConnection("Data Source=ARYAN-PC\SQLEXPRESS;Integrated Security=true;Initial Catalog=SalmanTempDB")
con.Open()
Dim cmd As New SqlCommand("select TOP 10 UserName,LastName,Location from UserInformation", con)
Dim da As New SqlDataAdapter(cmd)
Dim ds As New DataSet()
da.Fill(ds)
gvUserInfo.DataSource = ds
gvUserInfo.DataBind()
End Sub
End Class



Using Statement Example in ASP.NET: Uses of the Using Statement


Introduction:

In this article I will explain the uses of the using statement and how to declare and use it in ASP.NET.


Description:

Generally in our applications we write code that creates a connection object with a connection string, opens the connection, creates a command object, and so on, in order to interact with the database and get data. That code looks like this:

SqlConnection con = new SqlConnection(connectionString);
con.Open();
SqlCommand cmd = new SqlCommand (commandString, con);
cmd.ExecuteNonQuery();
con.Close();
It is very easy to write the code above, but the problem is that the SqlConnection and SqlCommand objects implement the IDisposable interface, which means they can hold unmanaged resources. To clean up those objects we need to call the Dispose() method at the end of our process; otherwise those resources remain held by our application.
If we use the using statement in our applications, it automatically creates try/finally blocks for the objects and calls the Dispose() method for us, so there is no need to write any try/finally block or call Dispose() ourselves.
With the using statement, the code looks like this:
C# Code

using (SqlConnection con = new SqlConnection(connectionString))
{
using (SqlCommand cmd = new SqlCommand (commandString, con))
{
con.Open();
cmd.ExecuteNonQuery();
}
}
VB.NET Code

Using con As New SqlConnection(connectionString)
Using cmd As New SqlCommand(commandString, con)
con.Open()
cmd.ExecuteNonQuery()
End Using
End Using
In the above code there is no need to worry about closing the connection, because the using statement automatically closes the connection once the block completes and calls the Dispose() method to dispose of the objects.
The above using statement code is equivalent to the code below:
SqlConnection con = null;
SqlCommand cmd = null;
try
{
con = new SqlConnection(connectionString);
cmd = new SqlCommand(commandString, con);
con.Open();
cmd.ExecuteNonQuery();
}
finally
{
if (cmd != null)
cmd.Dispose();
if (con != null)
con.Dispose();
}
Sample code to display data in a GridView using the using statement
First open Default.aspx page and write the following code
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title>Untitled Page</title>
</head>
<body>
<form id="form1" runat="server">
<div>
<asp:GridView ID="gvDetails" runat="server">
</asp:GridView>
</div>
</form>
</body>
</html>
After that open the code-behind page and add the following namespace references:
using System;
using System.Configuration;
using System.Web.Configuration;
using System.Data;
using System.Data.SqlClient;
After adding the namespaces, write the following code in the code-behind.
C# code
protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
BindGridviewData();
}
}
//Bind data to the GridView control
protected void BindGridviewData()
{
string connectionString = @"Data Source=ARYAN-PC\SQLEXPRESS;Initial Catalog=SalmanTempDB;Integrated Security=true";
using (SqlConnection con = new SqlConnection(connectionString))
{
using (SqlCommand cmd=new SqlCommand("select * from Repeater_Table Order By PostedDate desc",con))
{
con.Open();
DataTable dt = new DataTable();
SqlDataAdapter da = new SqlDataAdapter(cmd);
da.Fill(dt);
gvDetails.DataSource = dt;
gvDetails.DataBind();
}
}
}
VB.NET Code
Imports System.Data
Imports System.Data.SqlClient
Partial Class _Default
Inherits System.Web.UI.Page
Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
If Not IsPostBack Then
BindGridviewData()
End If
End Sub
'Bind data to the GridView control
Protected Sub BindGridviewData()
Dim connectionString As String = "Data Source=ARYAN-PC\SQLEXPRESS;Initial Catalog=SalmanTempDB;Integrated Security=true"
Using con As New SqlConnection(connectionString)
Using cmd As New SqlCommand("select * from Repeater_Table Order By PostedDate desc", con)
con.Open()
Dim dt As New DataTable()
Dim da As New SqlDataAdapter(cmd)
da.Fill(dt)
gvDetails.DataSource = dt
gvDetails.DataBind()
End Using
End Using
End Sub
End Class
