
Archive for May, 2012

Microsoft SQL Server – TempDB usage per active session


Lists the TempDB usage for each active session.
It helps identify the sessions that are using tempdb heavily for internal objects.

When internal object usage is high, the session is probably using big hash tables or spooling to worktables. This can be a symptom of an inefficient plan or a missing index.

Shrinking a TempDB full of internal objects will probably have no effect, because the engine will not release the deallocated space. The only alternatives are restarting the service or killing the offending SPID after receiving approval from the business owner.
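If the kill is approved, the fix itself is a one-liner; a minimal sketch (session 53 is a hypothetical SPID, which you would identify with the script below):

-- Hypothetical example: session 53 was identified as the heavy tempdb consumer
KILL 53;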

It also helps identify the root cause when tempdb grows rapidly.

Script: I have already tested this in our Citagus test environment and it works as expected.

;WITH task_space_usage AS (
    -- SUM alloc/dealloc pages
    SELECT session_id,
           request_id,
           SUM(internal_objects_alloc_page_count) AS alloc_pages,
           SUM(internal_objects_dealloc_page_count) AS dealloc_pages
    FROM sys.dm_db_task_space_usage WITH (NOLOCK)
    WHERE session_id <> @@SPID
    GROUP BY session_id, request_id
)
SELECT TSU.session_id,
       TSU.alloc_pages * 1.0 / 128 AS [internal object MB space],
       TSU.dealloc_pages * 1.0 / 128 AS [internal object dealloc MB space],
       EST.text,
       -- Extract statement from sql text
       ISNULL(
           NULLIF(
               SUBSTRING(
                   EST.text,
                   ERQ.statement_start_offset / 2,
                   CASE WHEN ERQ.statement_end_offset < ERQ.statement_start_offset
                        THEN 0
                        ELSE (ERQ.statement_end_offset - ERQ.statement_start_offset) / 2
                   END
               ), ''
           ), EST.text
       ) AS [statement text],
       EQP.query_plan
FROM task_space_usage AS TSU
INNER JOIN sys.dm_exec_requests ERQ WITH (NOLOCK)
    ON TSU.session_id = ERQ.session_id
    AND TSU.request_id = ERQ.request_id
OUTER APPLY sys.dm_exec_sql_text(ERQ.sql_handle) AS EST
OUTER APPLY sys.dm_exec_query_plan(ERQ.plan_handle) AS EQP
WHERE EST.text IS NOT NULL OR EQP.query_plan IS NOT NULL
ORDER BY 3 DESC, 5 DESC

By: Rajib Kundu

Posted in Microsoft SqlServer Administration

6 Jun

Should I take the backup with or without compression? Which option is best?

In our day-to-day work, business owners frequently ask this question.

Explanation:

File Size Summary
The SQL 2008 Backup without compression was 3.5GB
The SQL 2008 Backup with compression was 986MB. The space saved is about 2.5GB; the compressed file is only 28% of the original size!

Note: Not every database will see this compression ratio or CPU usage; it depends on the system, the data types, etc.

The SQL 2008 Backup without compression took approx 6 minutes and the SQL 2008 Backup with compression took approx 3 minutes. You might be asking – how is that possible?

Ans: The CPU is used to compress the data before it is written to disk, so LESS data is written to disk. That is why the backup job takes less time with the compression feature.
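As a rough sketch of how to reproduce the comparison yourself (the database name and backup paths are placeholders, not from the original test):

-- Backup without compression
BACKUP DATABASE AdventureWorks
TO DISK = 'E:\Backups\AdventureWorks_uncompressed.bak'
WITH NO_COMPRESSION, STATS = 10;

-- Backup with compression (SQL Server 2008 Enterprise and later)
BACKUP DATABASE AdventureWorks
TO DISK = 'E:\Backups\AdventureWorks_compressed.bak'
WITH COMPRESSION, STATS = 10;

-- Compare the sizes recorded in msdb
SELECT TOP 2 backup_size, compressed_backup_size
FROM msdb.dbo.backupset
WHERE database_name = 'AdventureWorks'
ORDER BY backup_finish_date DESC;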

Categories: Performance Tuning

Script to check SQL Server Cluster Nodes


The following script will help you check SQL Server cluster nodes: it queries a clustered server instance and determines how many nodes exist in the SQL Server cluster.

For SQL Server 2000

-- Returns the list of nodes on which the virtual server can run. Such information is useful in failover clustering environments.

-- If the current server instance is not a clustered server, fn_virtualservernodes returns an empty rowset.

SELECT * FROM ::fn_virtualservernodes()

 

For SQL Server 2005 / 2008 / 2008 R2

-- Determines the nodes of a SQL Server clustered server instance:

SELECT * FROM sys.dm_os_cluster_nodes

For SQL Server 2005/2008/2008 R2 and SQL Server 2011 (Denali)

-- VIEW SERVER STATE permission is required to execute this function.

-- If the current server instance is not a clustered server, fn_virtualservernodes returns an empty rowset.

SELECT * FROM fn_virtualservernodes()
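A related sanity check, assuming you only want to confirm the instance is clustered and see which node it is currently running on (standard SERVERPROPERTY calls, not part of the original scripts):

SELECT SERVERPROPERTY('IsClustered') AS is_clustered,               -- 1 = clustered instance
       SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_node;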

 


Categories: Cluster

Performance Analysis of Logs (PAL) Tool

May 6, 2012

Logman

Manages and schedules performance counter and event trace log collections on local and remote systems.

Data Collection

First, we need to identify what Objects and Counters we need. We can automate creating one perfmon log file for each day. Then move the old files to a network share for later reference.

Cut and paste the contents below into a file named "SQL2005BaselineCounters.config" and save the file in "E:\perflogs". You can choose any local drive/folder; just make sure there is enough space. Try not to use the same storage that your SQL Server databases use, because this can slow down your I/O throughput.

Note: in the code below I am collecting SQL Server data for an instance named "MSSQL$DEV". This will need to be replaced for your server. If you just have a default instance, this will just be "SQLServer".

"\Memory\Available MBytes"
"\Memory\Free System Page Table Entries"
"\Memory\Pages Input/sec"
"\Memory\Pages/sec"
"\MSSQL$DEV:Access Methods\Full Scans/sec"
"\MSSQL$DEV:Access Methods\Page Splits/sec"
"\MSSQL$DEV:Access Methods\Workfiles Created/sec"
"\MSSQL$DEV:Access Methods\Worktables Created/sec"
"\MSSQL$DEV:Buffer Manager\Buffer cache hit ratio"
"\MSSQL$DEV:Buffer Manager\Checkpoint pages/sec"
"\MSSQL$DEV:Buffer Manager\Free pages"
"\MSSQL$DEV:Buffer Manager\Lazy writes/sec"
"\MSSQL$DEV:Buffer Manager\Page life expectancy"
"\MSSQL$DEV:Buffer Manager\Page reads/sec"
"\MSSQL$DEV:Buffer Manager\Page writes/sec"
"\MSSQL$DEV:Buffer Manager\Stolen pages"
"\MSSQL$DEV:General Statistics\Logins/sec"
"\MSSQL$DEV:General Statistics\Logouts/sec"
"\MSSQL$DEV:General Statistics\User Connections"
"\MSSQL$DEV:Latches\Average Latch Wait Time (ms)"
"\MSSQL$DEV:Locks(_Total)\Average Wait Time (ms)"
"\MSSQL$DEV:Locks(_Total)\Lock Requests/sec"
"\MSSQL$DEV:Locks(_Total)\Number of Deadlocks/sec"
"\MSSQL$DEV:Memory Manager\Target Server Memory (KB)"
"\MSSQL$DEV:Memory Manager\Total Server Memory (KB)"
"\MSSQL$DEV:SQL Statistics\Batch Requests/sec"
"\MSSQL$DEV:SQL Statistics\SQL Compilations/sec"
"\MSSQL$DEV:SQL Statistics\SQL Re-Compilations/sec"
"\Paging File(_Total)\% Usage"
"\Paging File(_Total)\% Usage Peak"
"\PhysicalDisk(_Total)\Avg. Disk Read Queue Length"
"\PhysicalDisk(_Total)\Avg. Disk sec/Read"
"\PhysicalDisk(_Total)\Avg. Disk sec/Transfer"
"\PhysicalDisk(_Total)\Avg. Disk sec/Write"
"\PhysicalDisk(_Total)\Avg. Disk Write Queue Length"
"\Process(sqlservr)\% Privileged Time"
"\Process(sqlservr)\% Processor Time"
"\Processor(_Total)\% Privileged Time"
"\Processor(_Total)\% Processor Time"
"\System\Context Switches/sec"
"\System\Processor Queue Length"


The next step is to create a counter log in perfmon that will use the above counters. From a command prompt, execute the statement below on your SQL Server box.

logman create counter SQL2005Perf -f bin -b 01/01/2009 00:00:05 -e 01/01/2009 23:59:00 -si 05 -v mmddhhmm -o "E:\perflogs\SQL2005Perf" -cf "E:\perflogs\SQL2005BaselineCounters.config" -u domain\username *

You will be prompted for the password.


This will create a new counter log named “SQL2005Perf”. To verify this, open perfmon and click “counter logs” under Performance logs and alerts.


Here are the notes for each option used above:

  • SQL2005Perf – name of the counter log
  • -f bin – binary format, with a .blg extension
  • -b 01/01/2009 00:00:05 – begin date and time; input a future datetime to start the counter log later, or a past datetime to start it immediately
  • -e 01/01/2009 23:59:00 – end date and time
  • -si 05 – 5 second interval
  • -v mmddhhmm – the file will be named SQL2005Perf_mmddhhmm.blg
  • -o “E:\perflogs\SQL2005Perf” – output folder name
  • -cf “E:\perflogs\SQL2005BaselineCounters.config” – config file name with the counters
  • -u domain\username * – the * will prompt for the password for the domain user account.

You can manually start and stop the above counter log with the commands below:

Logman start SQL2005Perf 

Logman stop SQL2005Perf

To get a list of all your counter logs, run this command:

Logman query

Automate Data Collection

 

It would be nice to have performance data for each day in its own file. This can be done by stopping the counter log at 11:59:59 PM and starting it again at 12:00:01 AM daily. Stopping the counter log closes the log file, and starting the counter log creates a new one.

Let's create a scheduled task that will do this.

Let's first create a batch file "StopCopyStart.bat" that will be executed by the scheduled task. Copy the contents below and save it in the folder "E:\perflogs". Make sure to change "Servername" and "SourceServerName" to match your servers.

logman stop sql2005perf
sleep 1
move E:\perflogs\SQL2005Perf\*.blg \\Servername\E$\perflogs\SourceServerName
sleep 1
logman start sql2005perf


From the command prompt execute this statement to create the scheduled task.

schtasks /create /tn “Stop Copy And Start SQL2005Perf Perfmon Counter Log” /tr E:\perflogs\StopCopyStart.bat /sc daily /st 23:59:58 /ed 01/01/2099 /ru system


Here are the notes for each option used above:

  • schtasks /create – create a new scheduled task
  • /tn “Stop Copy And Start SQL2005Perf Perfmon Counter Log” – scheduled task name
  • /tr E:\perflogs\StopCopyStart.bat – run this batch file
  • /sc daily /st 23:59:58 – schedule type and time
  • /ed 01/01/2099 – end date
  • /ru system – run the task under the system account

A new scheduled task named "Stop Copy And Start SQL2005Perf Perfmon Counter Log" will now be created. From the command prompt type "schtasks"; this will list all the scheduled tasks.

Next Steps

  • The idea is to move the performance data .blg file from each SQL Server to a common share. Create one folder for each server and move the .blg file there. The user account that executes the scheduled task must therefore have permissions on the share where we move the performance log file.
  • You should do some housekeeping on this network share to delete files older than 30 or 60 days.
  • Always keep one week of performance data (one file for each day of the week) for each server to be used as a baseline. If you have any performance problems you can compare against the baseline.

Performance Analysis of Logs (PAL) Tool

PAL (Performance Analysis of Logs) is a powerful tool that reads in a performance monitor counter log and analyzes it using known thresholds.

Features

Threshold files for most of the major Microsoft products such as IIS, MOSS, SQL Server, BizTalk, Exchange, and Active Directory.

An easy-to-use GUI that makes creating batch files for the PAL.ps1 script simple.

A GUI editor for creating or editing your own threshold files.

Creates an HTML-based report for easy copying and pasting into other applications.

Analyzes performance counter logs using thresholds that change their criteria based on the computer's role or hardware specs.

For more information

http://pal.codeplex.com/

Once PAL is installed on your local system or a shared server, you can analyze the perfmon counter logs.

  • Copy all the perfmon files to the C:\PAL location (any folder will do).
  • Edit PAL_reports_creation.bat and make sure the file names match the files you copied to C:\PAL.
  • Run the batch file bat1.bat; all the reports will be generated and copied to output_path.

Create a batch file and paste the script below once for each server.

The script below is for server1; if you have 5 servers then you need to repeat it 5 times, setting the parameters as per each server's configuration.

Batch file:

——————————————————————————————————————-

CScript "C:\Program Files\PAL\PAL v1.3.5\PAL.vbs" /LOG:"server1_perfmon_log_path.csv" /THRESHOLDFILE:"C:\Program Files\PAL\PAL v1.3.5\SQLServer2005.xml" /INTERVAL:"AUTO" /ISOUTPUTHTML:True /ISOUTPUTXML:False /OUTPUTDIR:"output_path" /HTMLOUTPUTFILENAME:"[LogFileName]_PAL_ANALYSIS_[DateTimeStamp]_[GUID].htm" /NumberOfProcessors:12 /ThreeGBSwitch:False /SixtyFourBit:True /TotalMemory:16 /KernelDumpOnCDrive:True

————————————————————————————————————————–

Categories: Performance Tuning

Is it possible to restore a SQL 2005 backup on SQL 2000?


As per Microsoft there is no workaround: SQL Server 2005 does not support downgrades or restores to SQL Server 2000, even if a server hosts both a SQL 2000 and a SQL 2005 instance.

Or

There is no “Restore” functionality, but there is a workaround to copy 2005 databases to 2000:

Right-click on the DB -> Tasks -> Generate Scripts
Select the DB and click "script all objects…", hit Next
Select any options you want, specifically changing "script for server version" to SQL Server 2000
Click Next through the wizard and run the script

Now just export the data from the 2005 database to the newly created 2000 database.
Right-click on the DB -> Tasks -> Export…
Set the source and hit Next
Set the destination and hit Next
Select "copy data from one or more tables", hit Next
Select all, then check "optimize for many tables" and "run in a transaction"
You may have to edit each table mapping and check "enable identity insert"

Categories: Backup and Recovery

A very interesting feature of backup compression


I came across a very interesting feature of backup compression.

If you take a database backup with a plain T-SQL BACKUP statement, compression is only applied when you ask for it. To take full advantage of the backup compression feature by default, enable the advanced options and turn the server-level backup compression setting on. Once the option is on, the same backup comes out at a very different size.

I took an AdventureWorks database backup without the server setting enabled and the size was 135235 KB; after enabling the option, the same database backed up to just 34906 KB.

 

To enable the option:
USE master;
GO
EXEC sp_configure 'show advanced options', '1';
RECONFIGURE;
GO
EXEC sp_configure 'backup compression default', '1';
RECONFIGURE WITH OVERRIDE;
GO
EXEC sp_configure 'show advanced options', '0';
RECONFIGURE;
GO
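Even with the server-level default on, you can still override it per backup; a minimal sketch (the database name and path are placeholders):

BACKUP DATABASE AdventureWorks
TO DISK = 'E:\Backups\AdventureWorks.bak'
WITH COMPRESSION;   -- or WITH NO_COMPRESSION to skip compression for this backup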

Categories: Backup and Recovery

Proactive monitoring of database log space status to avoid log file full errors


As per the customer's requirement, we had to create a script that notifies us whenever a database (including tempdb) has less than 50% unused transaction log space. We monitor for this condition to proactively discover long-running, transaction-log-consuming queries. Such queries can take a long time to roll back and can sometimes be the result of a user error – for example an accidental delete of many records. The DBA at this customer knew he had to take action when log space was low, to investigate such potential user errors.

With the monitoring scripts we used, the alarm went off regularly on tempdb. Because of tempdb's checkpoint behavior, its unused transaction log space is only reclaimed at 70% log usage. The customer still wanted the headroom of the 50% limit, so he had to be able to distinguish between the "by design" checkpoint at 70% and a troublemaking, long-running transaction preventing log space from being reclaimed.

The solution for the customer was to monitor the column log_reuse_wait_desc in sys.databases in combination with the unused transaction log space. If this column has a value other than CHECKPOINT or NOTHING, the log usage in tempdb is the result of a long-running, log-space-consuming transaction.

The script below illustrates how this is done:

SELECT name
     , db.log_reuse_wait_desc
     , ls.cntr_value AS size_kb
     , lu.cntr_value AS used_kb
     , CAST(lu.cntr_value AS FLOAT) / CAST(ls.cntr_value AS FLOAT) AS used_percent
     , CASE WHEN CAST(lu.cntr_value AS FLOAT) / CAST(ls.cntr_value AS FLOAT) > .5 THEN
         CASE
           /* tempdb special monitoring */
           WHEN db.name = 'tempdb'
             AND log_reuse_wait_desc NOT IN ('CHECKPOINT', 'NOTHING') THEN 'WARNING'
           /* all other databases, monitor for the 50% full case */
           WHEN db.name <> 'tempdb' THEN 'WARNING'
           ELSE 'OK'
         END
       ELSE 'OK' END AS log_status
FROM sys.databases db
JOIN sys.dm_os_performance_counters lu
  ON db.name = lu.instance_name
JOIN sys.dm_os_performance_counters ls
  ON db.name = ls.instance_name
WHERE lu.counter_name LIKE 'Log File(s) Used Size (KB)%'
  AND ls.counter_name LIKE 'Log File(s) Size (KB)%'

Categories: Backup and Recovery

SQL Server stored procedure new features in SQL 2008-2012


Stored procedures today:

One of the biggest stored-procedure improvements in SQL Server 2008 and 2008 R2 is table-valued parameters. This parameter type groups multiple rows of data into sets and then sends them all at once to the database server, reducing round trips between client and server, and it doesn’t create temporary tables or numerous parameters. Table-valued parameters don’t require any locking to initially populate client data. They also enable the client to specify sort order.

The table-valued structure is easy to understand and use, yet it can accommodate complex business logic. You use table-valued parameters the same way you use classic stored-procedure parameters. First, declare a user-defined table type and create a stored procedure with the input parameter of that table type. Next, declare a variable of the table type and reference it. Use an INSERT statement to populate the table variable instead of a SET or SELECT statement. Finally, plug the filled table variable into the stored procedure as an input parameter.
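A minimal sketch of that sequence (the type, procedure, and table names are illustrative, not from the article):

-- 0. Target table (illustrative)
CREATE TABLE dbo.OrderItems (ProductID INT, Quantity INT);
GO

-- 1. Declare a user-defined table type
CREATE TYPE dbo.OrderItemType AS TABLE
(
    ProductID INT,
    Quantity  INT
);
GO

-- 2. Create a stored procedure with a table-valued input parameter (must be READONLY)
CREATE PROCEDURE dbo.InsertOrderItems
    @Items dbo.OrderItemType READONLY
AS
BEGIN
    INSERT INTO dbo.OrderItems (ProductID, Quantity)
    SELECT ProductID, Quantity FROM @Items;
END
GO

-- 3. Declare a variable of the table type, populate it with INSERT, and pass it in
DECLARE @Items dbo.OrderItemType;
INSERT INTO @Items VALUES (101, 2), (205, 1);
EXEC dbo.InsertOrderItems @Items = @Items;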

SQL Server 2008 also introduced a MERGE statement that allows for multiple data manipulation language (DML) operations in a single T-SQL statement. With the appropriate indexes on joined tables, you’ll gain increased query performance. That’s because a single MERGE statement reduces the number of times the data in source and target tables is processed. In SQL Server 2005 and earlier versions, each INSERT, DELETE, and UPDATE statement had to process the data one time. MERGE is useful, say, when you are synchronizing two tables (see “Sidebar title TK”).
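For example, synchronizing a target table from a source table in a single statement might look like this sketch (table and column names are hypothetical):

MERGE dbo.TargetCustomers AS T
USING dbo.SourceCustomers AS S
    ON T.CustomerID = S.CustomerID
WHEN MATCHED THEN
    UPDATE SET T.Name = S.Name          -- update rows that exist in both
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name)           -- insert rows found only in the source
    VALUES (S.CustomerID, S.Name)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;                             -- remove rows no longer in the source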

In SQL Server 2008 and 2008 R2 stored procedures, grouping operations have been enhanced by a new GROUPING SETS syntax. A simple GROUP BY clause returns one row for each combination of all column values, whereas GROUPING SETS returns one row for each unique value in each column. Without this new feature, the latter would take multiple GROUP BY statements for each column combined in a UNION structure. That would inevitably require more resources, like disk I/O operations, memory and runtime.
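A short sketch against a hypothetical Sales table shows the difference:

-- One result set with a subtotal row per Region and per Product,
-- replacing two GROUP BY queries combined with UNION ALL
SELECT Region, Product, SUM(Amount) AS TotalAmount
FROM dbo.Sales
GROUP BY GROUPING SETS ((Region), (Product));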

SQL Server 2008 and 2008 R2 also use a new row constructor to insert multiple rows in a single INSERT statement in stored-procedure programming. Database administrators may already be familiar with this syntax: “INSERT INTO TABLE (COLa, COLb, COLc) … VALUES (V1a, V1b, V1c …), (V2a, V2b, V2c …), … (V1000a, V1000b, V1000c …)”. Here the number 1000 indicates the maximum allowed rows in one INSERT statement. The new INSERT syntax is cleaner because it removes duplicate strings. It also allows for faster round trips to and from the server by reducing network traffic.
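A minimal sketch with a hypothetical table:

-- Three rows in one INSERT statement and one round trip (up to 1000 rows are allowed)
INSERT INTO dbo.Colors (ColorID, ColorName)
VALUES (1, 'Red'),
       (2, 'Green'),
       (3, 'Blue');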

In my triple-trial test, I ran 10 batches of 1,000 new-style inserts per batch and 10,000 individual inserts in parallel. Each test counted as one round trip, but compared with the old-school INSERTs, the simplified row constructor cut down the number of transactions and bytes sent from the server 1,000-fold and reduced the amount of client-sent data by a factor of three and client processing time by a factor of 300. Total execution time was reduced by more than 50%.

Categories: SQL Server 2008