
Archive for December, 2012

ASYNC Network IO wait type issue and its resolution

December 24, 2012

When a session waits on the "async network io" event, it may be encountering network issues. More likely, however, the client application is not processing the data from SQL Server quickly enough. If the wait times for "async network io" are high, it is worth checking both the network and the client application before tuning the server itself.
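As a first check, the cumulative wait statistics show how much time the instance has spent on this wait since the last restart. A minimal sketch against sys.dm_os_wait_stats (the wait is recorded as ASYNC_NETWORK_IO in SQL Server 2005 and later):

SELECT wait_type
     , waiting_tasks_count
     , wait_time_ms
     , signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'ASYNC_NETWORK_IO'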

We can also use the script below to see which sessions are currently waiting, and on what.

SELECT req.session_id
     , req.blocking_session_id
     , ses.host_name
     , DB_NAME(req.database_id) AS database_name
     , ses.login_name
     , req.status
     , req.command
     , req.start_time
     , req.cpu_time
     , req.total_elapsed_time / 1000.0 AS total_elapsed_time
     , req.wait_type
     , sqltext.text
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) AS sqltext
JOIN sys.dm_exec_sessions ses
    ON ses.session_id = req.session_id
WHERE req.wait_type IS NOT NULL
--WHERE req.wait_type = '?'   -- optionally filter on a specific wait type

If we see lots of ASYNC Network IO waits, it can point to network-related issues, but most often the cause is a client application that is not consuming results from SQL Server quickly enough. The network buffers fill up, SQL Server cannot send more data to the client, and the process executing the batch has to wait until it can continue sending results.
The "async network io" wait (ASYNC_NETWORK_IO in SQL Server 2005/2008) can be reduced in the following ways:

  1. Some applications, such as Microsoft Access, ask for large result sets (typically identified by SELECT statements with no WHERE clause, or by full table scans) and then filter the data further on the client. If this is causing significant wait time, see whether it is possible to create a view for the client application to use instead. This significantly reduces the amount of data pushed to the client, since all of the filtering is done on SQL Server. Another fix is to add a WHERE clause or otherwise restrict the query so that less data is sent to the client.
  2. Identify large result sets and verify with the application or developer team how the data is being consumed. If the application queries large result sets but uses only a few rows, consider querying only the rows that are needed, or use TOP n to reduce the number of rows returned.
  3. If you encounter high "async network io" wait times when performing data loads on the server, make sure the shared memory protocol is enabled for the SQL Server instance and that the session is connected with net_transport = 'Shared memory'. You can determine the net_transport for a connection from the DMV sys.dm_exec_connections (see the query sketch after this list). Using a shared memory connection string for the load helps minimize "async network io" wait times.

The following connection string shows how the network library can be specified inside the connection string.

"Provider=sqloledb;Data Source=190.190.200.100,1433;Network Library=DBMSSOCN;Initial Catalog=pubs;User ID=myUsername;Password=myPassword;"

Network protocol codes

Name       Network library
dbnmpntw   Named Pipes
dbmslpcn   Shared Memory (local machine connections only; may fail when moving to production)
dbmssocn   Winsock TCP/IP
dbmsspxn   SPX/IPX
dbmsvinn   Banyan VINES
dbmsrpcn   Multi-Protocol (Windows RPC)
dbmsadsn   AppleTalk
dbmsgnet   VIA
  4. Use the TOP n clause so that only the rows the client needs are returned. If these client application tuning tips do not apply, investigate network issues.
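A minimal sketch for the check mentioned in tip 3: which protocol each session is using. Sessions connected over shared memory show net_transport = 'Shared memory'.

SELECT c.session_id
     , c.net_transport
     , c.client_net_address
     , s.program_name
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = c.session_id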

If the above tuning tips have been reviewed and applied but the server is still encountering high "async network io" times, the following checks help rule out network-related issues:
• Review counters such as 'Batch Requests/sec' (see the sketch after this list). Values over 3000 on a 100 Mbps network card start to become excessive, as does network utilization above 0.6 of the 'Current Bandwidth' reported by the Network Interface counter.
• Check network adapter bandwidth: 1 gigabit is better than 100 megabits, and 100 megabits is better than 10 megabits.
• Look at the NIC configuration on the server to make sure there are no issues with the physical card, and check that auto-detect is picking the fastest speed.
• Validate all of the network components between the client application and the SQL Server instance (e.g. switches and routers).
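Batch Requests/sec can also be sampled from inside SQL Server. A rough sketch using sys.dm_os_performance_counters; the counter is cumulative, so two samples are taken ten seconds apart to derive a rate:

DECLARE @first bigint, @second bigint

SELECT @first = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec'

WAITFOR DELAY '00:00:10'

SELECT @second = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec'

SELECT (@second - @first) / 10 AS batch_requests_per_sec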

Database statistics update issue and its resolution

December 24, 2012

In our environment we execute the sp_updatestats job daily at 5 AM, but we still faced a severe performance issue because the statistics were inaccurate and, as a result, queries were unable to use the optimal execution plan.

Finding the state of your statistics

 

Up-to-date statistics help ensure that the most appropriate index is chosen to obtain the underlying data. They also help ensure that the correct index access mechanism is chosen, for example a seek or a lookup. Running the SQL script in the following listing identifies the current state of your statistics.

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

SELECT ss.name AS SchemaName
     , st.name AS TableName
     , s.name AS IndexName
     , STATS_DATE(s.id, s.indid) AS 'Statistics Last Updated'
     , s.rowcnt AS 'Row Count'
     , s.rowmodctr AS 'Number Of Changes'
     , CAST((CAST(s.rowmodctr AS DECIMAL(28,8)) / CAST(s.rowcnt AS DECIMAL(28,2)) * 100.0) AS DECIMAL(28,2)) AS '% Rows Changed'
FROM sys.sysindexes s
INNER JOIN sys.tables st ON st.[object_id] = s.[id]
INNER JOIN sys.schemas ss ON ss.[schema_id] = st.[schema_id]
WHERE s.id > 100
  AND s.indid > 0
  AND s.rowcnt >= 500
ORDER BY SchemaName, TableName, IndexName

Reason:

The system stored procedure sp_updatestats does not provide the ability to update statistics with the FULLSCAN option. When we are faced with an urgent performance issue, updating the statistics WITH FULLSCAN can be exactly what SQL Server needs to find the optimal execution plan.

Workaround:

We can run EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN' against a specific database to update the statistics correctly, or we can schedule the index rebuild job on a daily basis if that is permitted by the business owner and does not conflict with any other job. A sketch is shown below.
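A minimal sketch, assuming a hypothetical database name; sp_MSforeachtable expands the ? placeholder to each table in the current database:

USE YourDatabase   -- hypothetical database name
GO
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN'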

Categories: Performance Tuning

Restore a production database that is part of replication

December 24, 2012

Issue description: A week ago we got an urgent request to restore a database to a specific point in time, because someone from the application team accidentally deleted some rows, which stopped 1,400 employees from accessing building facilities. The concern was that the database being restored is part of replication in our high availability setup, so the primary database had to be restored without destroying the replication settings. Here are the steps to restore the database while preserving the replication settings.

 

We maintain a daily full backup and a transaction log backup every 15 minutes, with the database in the FULL recovery model. The issue happened at 7:40, so we had a full backup from 7:00 and two transaction log backups, at 7:15 and 7:30. We can restore the database up to 7:30, and we have to use the 'Preserve the replication settings' option (the KEEP_REPLICATION restore option) during the restore so that the replication settings are not destroyed. A T-SQL sketch follows.
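A minimal sketch of the restore sequence, assuming hypothetical database and backup file names and the timings above:

RESTORE DATABASE FacilityDB
    FROM DISK = N'D:\Backup\FacilityDB_full_0700.bak'
    WITH NORECOVERY, REPLACE

RESTORE LOG FacilityDB
    FROM DISK = N'D:\Backup\FacilityDB_log_0715.trn'
    WITH NORECOVERY

-- recover on the last log backup and keep the replication settings intact
RESTORE LOG FacilityDB
    FROM DISK = N'D:\Backup\FacilityDB_log_0730.trn'
    WITH RECOVERY, KEEP_REPLICATION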

 

Categories: Backup and Restore

Some rare facts about Log shipping

December 24, 2012

We have tested some rarely discussed scenarios in our test environment and are sharing the details.

 

A database can be mirrored and log shipped simultaneously

 

Yes, a database can be mirrored and log shipped at the same time. Typically, when combining log shipping and database mirroring, the mirroring session is established before log shipping, although this is not required. The current principal database is then configured as the log shipping primary (the principal/primary database), along with one or more remote secondary databases. The mirror database must also be configured as a log shipping primary (the mirror/primary database). The log shipping secondary databases should be on different server instances than either the principal/primary server or the mirror/primary server.

 

Setting Up Mirroring and Log Shipping Together

  1. Restore backups of the principal/primary database WITH NORECOVERY onto another server instance, to be used later as the database mirroring mirror database for the principal/primary database.
  2. Set up database mirroring.
  3. Restore backups of the principal/primary database onto other server instances, to be used later as log shipping secondary databases for the primary database.
  4. Set up log shipping on the principal database as the primary database for one or more secondary databases.

You need to set up a single share as the backup directory (a backup share). This ensures that, after a role switch between the principal and mirror servers, backup jobs continue to write to the same directory as before. A best practice is to put this share on a different physical server from the servers hosting the databases involved in mirroring and log shipping.

Manually fail over from the principal to the mirror.

Steps to manually fail over a database mirroring session:

  1. Connect to the principal server.
  2. Set the database context to the master database:

USE master;

  3. Issue the following statement on the principal server:

ALTER DATABASE database_name SET PARTNER FAILOVER, where database_name is the mirrored database.

This initiates an immediate transition of the mirror server to the principal role.

On the former principal, clients are disconnected from the database and in-flight transactions are rolled back.
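Put together, with a hypothetical database name, the manual failover is simply:

USE master
GO
ALTER DATABASE SalesDB SET PARTNER FAILOVER   -- SalesDB is a placeholder for the mirrored database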

Set up log shipping on the new principal (previously mirror) as the primary database.

 

Log shipping between different SQL Server versions

 

SQL Server 2008 → 2005: log shipping is not possible, because a SQL Server 2008 database cannot be restored on a 2005 server.

SQL Server 2005 → 2008: it is possible, but the secondary database can only be left in the restoring (recovery) state; you cannot put it into read-only or standby mode.

It is always recommended that both the primary and secondary servers run the same version.

Categories: SQL Server 2008

Troubleshoot a SQL Server performance issue caused by user-defined scalar functions

December 24, 2012

We faced a strange issue on a database server where it took 20 hours to upload 1 million rows. The issue was not related to database locking or resource consumption; the server has 16 CPUs and 32 GB of RAM. To fix this kind of issue, first look for an inefficient query plan. When generating an execution plan for a query, the SQL Server optimizer attempts to choose the plan that provides the fastest response time for that query. The most significant input the optimizer uses in evaluating the cost of the alternative plans is the cardinality estimate for each operator, which you can see in the showplan (the EstimateRows and EstimateExecutions attributes).

By the steps below I was able to figure out that some other process should be used to upload the batch instead of a function. A user-defined scalar function was used in the select list, so it was invoked while creating the results of the query to pass to the next stage of query processing. It is this row-by-row processing that slows SQL Server down the most. It does not cause any issue for a small number of rows, but the penalty shows up as poor query execution time when a query applies a UDF to a large number of rows, typically 1000 or more. No matter how simple the UDF is, there is a large performance penalty when it is used, because the database engine must create its own internal cursor-like processing and invoke the UDF on each row. In effect, it turns a seek into a scan, as illustrated in the sketch below.
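As an illustration (all object names are hypothetical), compare a scalar UDF in the select list with the same logic written inline; on large row counts the UDF version is invoked once per row, while the inline version stays set-based:

-- hypothetical scalar UDF that trims and upper-cases a name
CREATE FUNCTION dbo.ufn_CleanName (@name nvarchar(100))
RETURNS nvarchar(100)
AS
BEGIN
    RETURN UPPER(LTRIM(RTRIM(@name)))
END
GO

-- slow on large tables: the UDF is invoked row by row
SELECT dbo.ufn_CleanName(c.CustomerName) AS CleanName
FROM dbo.Customers AS c

-- same logic inlined: evaluated as part of the set-based plan
SELECT UPPER(LTRIM(RTRIM(c.CustomerName))) AS CleanName
FROM dbo.Customers AS c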

Categories: Performance Tuning

Step by step to figure out an inefficient query plan

December 24, 2012

A DBA can use Profiler for this, but the steps below are more convenient.

 

Step 1:  The query against sys.dm_exec_query_stats is an efficient way to determine which query is using the most cumulative CPU.

SELECT highest_cpu_queries.plan_handle,
       highest_cpu_queries.total_worker_time,
       q.dbid,
       q.objectid,
       q.number,
       q.encrypted,
       q.[text]
FROM (SELECT TOP 50
             qs.plan_handle,
             qs.total_worker_time
      FROM sys.dm_exec_query_stats qs
      ORDER BY qs.total_worker_time DESC) AS highest_cpu_queries
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS q
ORDER BY highest_cpu_queries.total_worker_time DESC

 

Step 2: We can use the following query to find the top user sessions that are allocating internal objects, including currently active tasks.

 

SELECT t1.session_id,
       (t1.internal_objects_alloc_page_count + task_alloc) AS allocated,
       (t1.internal_objects_dealloc_page_count + task_dealloc) AS deallocated
FROM sys.dm_db_session_space_usage AS t1,
     (SELECT session_id,
             SUM(internal_objects_alloc_page_count) AS task_alloc,
             SUM(internal_objects_dealloc_page_count) AS task_dealloc
      FROM sys.dm_db_task_space_usage
      GROUP BY session_id) AS t2
WHERE t1.session_id = t2.session_id
  AND t1.session_id > 50
ORDER BY allocated DESC

 

Here is a sample output.

 

session_id allocated            deallocated

———- ——————– ——————–

52         5120                 5136

51         16                   0

 

Step 3: Once the task or tasks that are generating a lot of internal object allocations have been isolated, we can find out which Transact-SQL statement is responsible and retrieve its query plan for a more detailed analysis.

 

SELECT t1.session_id,
       t1.request_id,
       t1.task_alloc,
       t1.task_dealloc,
       t2.sql_handle,
       t2.statement_start_offset,
       t2.statement_end_offset,
       t2.plan_handle
FROM (SELECT session_id,
             request_id,
             SUM(internal_objects_alloc_page_count) AS task_alloc,
             SUM(internal_objects_dealloc_page_count) AS task_dealloc
      FROM sys.dm_db_task_space_usage
      GROUP BY session_id, request_id) AS t1,
     sys.dm_exec_requests AS t2
WHERE t1.session_id = t2.session_id
  AND t1.request_id = t2.request_id
ORDER BY t1.task_alloc DESC

 

Here is a sample output.

 

session_id:              52
request_id:              0
task_alloc:              1024
task_dealloc:            1024
sql_handle:              0x02000000D490961BDD2A8BE3B0FB81ED67655EFEEB360172
statement_start_offset:  356
statement_end_offset:    -1
plan_handle:             0x06000500D490961BA8C19503000000000000000000000000

 

You can use the sql_handle and plan_handle to get the SQL statement and the query plan as follows:

 

select text from sys.dm_exec_sql_text(@sql_handle)

select * from sys.dm_exec_query_plan(@plan_handle)
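If you prefer not to copy the handles around, the same lookup can be folded into the request query directly; a minimal sketch against the same DMVs:

SELECT r.session_id
     , st.text
     , qp.query_plan
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS qp
WHERE r.session_id > 50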

Comparing SQL Server 2012 AlwaysOn technology with other high availability features such as mirroring, log shipping and clustering, and why it is preferable

December 24, 2012

Here I have mentioned some reasons why SQL Server AlwaysOn technology is a good high availability option compared with other high availability features such as mirroring, log shipping and clustering. SQL Server 2012 now has AlwaysOn technology, where the database engine keeps one or more mirror copies of a database that are constantly being refreshed. In the event of a failure of the primary, it seamlessly fails over to another copy. SQL Server 2012 AlwaysOn employs mirroring and clustering concepts together to achieve high availability.

 

 

SQL Server 2012 AlwaysOn versus SQL Server Mirroring

Traditional SQL Server mirroring has significant disadvantages:

  • In traditional mirroring we cannot manage groups of databases so that, when a single database fails, they all fail over together.
  • Only one mirror of the database is allowed.
  • The mirror is not usable unless you are prepared to work with database snapshots.

 

  • In SQL Server 2012 AlwaysOn we have the Availability Group option, with which databases can be grouped together. When a failure happens, the entire group of databases is treated as one: when one fails over, they all fail over. This is very powerful when an application uses more than one database on an instance. A sketch of creating an availability group is shown below.
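A minimal sketch of creating an availability group (all server, endpoint and database names are hypothetical, and it assumes the AlwaysOn feature and the mirroring endpoints are already configured on both replicas):

CREATE AVAILABILITY GROUP AG_Sales
FOR DATABASE SalesDB, OrdersDB
REPLICA ON
    N'SQLNODE1' WITH (ENDPOINT_URL = N'TCP://SQLNODE1.contoso.local:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (ENDPOINT_URL = N'TCP://SQLNODE2.contoso.local:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC)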

 

SQL Server 2012 AlwaysOn versus SQL Server Log Shipping

 

Log shipping is a sort of manual mirroring that allows more than one replica to be kept, perhaps a local one and a remote one. It is more difficult to set up, and failover is not automatic. The biggest downside is that the receiving database cannot be used while the transaction logs are being applied, since it sits in a recovery or read-only standby state. In SQL Server 2012 AlwaysOn, all the databases in an availability group can fail over automatically.

 

SQL Server 2012 AlwaysOn versus SQL Server Clustering

With SQL Server clustering, you are dealing with a shared storage system that adds cost and complexity to the equation. AlwaysOn does not need shared storage, so it is more cost effective.