
Archive for February, 2013

February 14, 2013

Reblogged from Rajib Kundu Blog

A DBA could use Profiler for this, but the steps below are more convenient.

 

Step 1:  The query against sys.dm_exec_query_stats is an efficient way to determine which query is using the most cumulative CPU.

select
    highest_cpu_queries.plan_handle,
    highest_cpu_queries.total_worker_time,
    q.dbid,
    q.objectid,
    q.number,
    q.encrypted,
    q.[text]
from
    (select top 50
        qs.plan_handle,
        qs.total_worker_time
    from
        sys.dm_exec_query_stats qs
    order by qs.total_worker_time desc) as highest_cpu_queries
cross apply sys.dm_exec_sql_text(plan_handle) as q
order by highest_cpu_queries.total_worker_time desc
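Once a plan_handle from the result above has been identified, the cached execution plan itself can be retrieved. A minimal sketch for the single biggest CPU consumer (it assumes the plan is still in cache; sys.dm_exec_query_plan returns NULL once a plan has been evicted):

-- Sketch: fetch the XML showplan for the top cumulative-CPU query.
select top 1
    qp.query_plan
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_query_plan(qs.plan_handle) as qp
order by qs.total_worker_time desc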

 

Step 2: We can use the following query to find the top user sessions that are allocating internal objects, including currently active tasks.

 

SELECT
    t1.session_id,
    (t1.internal_objects_alloc_page_count + task_alloc) as allocated,
    (t1.internal_objects_dealloc_page_count + task_dealloc) as deallocated
from sys.dm_db_session_space_usage as t1,
    (select session_id,
        sum(internal_objects_alloc_page_count) as task_alloc,
        sum(internal_objects_dealloc_page_count) as task_dealloc
    from sys.dm_db_task_space_usage group by session_id) as t2
where t1.session_id = t2.session_id and t1.session_id > 50
order by allocated DESC

 

Here is a sample output.

 

session_id allocated            deallocated
---------- -------------------- --------------------
52         5120                 ...
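Once a heavy session_id is identified, the statement it is currently running can usually be recovered through sys.dm_exec_requests. A minimal sketch (this only works while the request is still active; session 52 is taken from the sample output above, so substitute your own):

-- Sketch: see what a specific heavy session is currently executing.
select r.session_id, st.text
from sys.dm_exec_requests as r
cross apply sys.dm_exec_sql_text(r.sql_handle) as st
where r.session_id = 52  -- session_id from the allocation query above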



Index Fragmentation Report in SQL Server 2005 and 2008

February 14, 2013

Index Fragmentation Report in SQL Server 2005 and 2008, by Arshad Ali/William

 

select object_name(IPS.[object_id]) as [TableName]
,SI.index_id as [SI.index_id]
,SI.name AS [IndexName]
,IPS.index_type_desc
,IPS.index_id
,IPS.avg_fragmentation_in_percent
,IPS.index_level
,IPS.avg_fragment_size_in_pages
,IPS.avg_page_space_used_in_percent
,IPS.record_count
,IPS.ghost_record_count
,IPS.fragment_count
from sys.dm_db_index_physical_stats(db_id(), null, null, null, 'detailed') as IPS
inner join sys.indexes as SI with (nolock) on IPS.[object_id] = SI.[object_id] and IPS.index_id = SI.index_id
inner join sys.tables as ST with (nolock) on IPS.[object_id] = ST.[object_id]
where ST.is_ms_shipped = 0
and IPS.avg_fragmentation_in_percent >= 10 -- allow limited fragmentation
and IPS.page_count > 25 -- ignore small tables
and IPS.index_type_desc <> 'heap' -- ignore heaps
order by IPS.avg_fragmentation_in_percent desc

Returned Column Descriptions

avg_fragmentation_in_percent: Indicates the amount of external fragmentation for the given objects. The lower the number the better; as this number approaches 100%, more of the pages in the given index are not properly ordered. For heaps, this value is actually the percentage of extent fragmentation, not external fragmentation.

avg_page_space_used_in_percent: Indicates how dense the pages in your index are, i.e. on average how full each page in the index is (internal fragmentation). The higher the number the better in terms of fragmentation and read performance. To achieve optimal disk space use, this value should be close to 100% for an index that will not have many random inserts. However, an index that has many random inserts and very full pages will have an increased number of page splits, which causes more fragmentation. Therefore, to reduce page splits, the value should be less than 100 percent.

fragment_count: A fragment is made up of physically consecutive leaf pages in the same file for an allocation unit. An index has at least one fragment. The maximum number of fragments an index can have equals the number of pages in the leaf level of the index. The fewer the fragments, the more data is stored consecutively.

avg_fragment_size_in_pages: Larger fragments mean that less disk I/O is required to read the same number of pages. Therefore, the larger the avg_fragment_size_in_pages value, the better the range-scan performance.
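Once fragmented indexes are identified, the usual remedy is ALTER INDEX. A common rule of thumb (a guideline, not a hard rule) is REORGANIZE between roughly 10% and 30% fragmentation and REBUILD above 30%. A sketch for a single index; the table and index names here are placeholders, so substitute the names reported by the query above:

-- Placeholder names: substitute your own table and index.
alter index IX_MyIndex on dbo.MyTable reorganize;  -- roughly 10-30% fragmentation
alter index IX_MyIndex on dbo.MyTable rebuild;     -- above roughly 30% fragmentation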
Categories: Index

Retrieving a query’s cached plan and SQL text

February 14, 2013

When a query executes, a cached plan is created for it. A cached plan describes, at a granular level, how the query will be executed, for example, how indexes will be used. A cached plan is a great starting point in determining why a query is behaving as it is, for example, why it's running slowly. Often it also offers clues on how performance can be improved.

 

 

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT TOP 10
    st.text AS [SQL]
    , DB_NAME(st.dbid) AS DatabaseName
    , cp.usecounts AS [Plan usage]
    , qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE st.text LIKE '%CREATE PROCEDURE%'
ORDER BY cp.usecounts DESC
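The same pattern can be narrowed to a single database instead of filtering on the SQL text. A sketch, assuming you know the database you care about (the database name here is a placeholder):

-- Sketch: most-reused cached plans for one database only.
SELECT TOP 10
    st.text AS [SQL]
    , cp.usecounts AS [Plan usage]
    , qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE st.dbid = DB_ID('MyDatabase')  -- placeholder database name
ORDER BY cp.usecounts DESC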

Troubleshooting complex problems: what behaviors and attitudes does a DBA tech manager need?

February 12, 2013

I would like to share a thought. I frequently receive a question from my peers and teammates: how do I manage such long shift hours when, as per the DBA manager profile, I own all the DBA operations (SQL Server, Analysis Services, and Reporting Services), provide technical guidance, handle client interactions, build the team, manage people, and ensure a strong handshake between onsite and offshore teams? We work as a single person per shift with 250 servers on our shoulders. I am not the only one in this position; it is the current trend in the support industry, and everywhere the tech manager needs to take on this responsibility. The technical manager also has to take on other responsibilities, such as connecting and communicating with key stakeholders, identifying and dealing with risks, and sticking to the scheduled budget and time.

 

The following behaviors and attitudes are characteristic of the most effective professionals when troubleshooting complex problems (source: the SQL Server 2012 Internals book):

➤ Remain calm — Stay objective, no matter how urgent the problem. Project confidence and calmness to your peers, end users, and management, even if they show signs of stress or panic. This reassures them that you are in control and able to resolve the problem. These people are more likely to give you the time and space necessary to investigate and resolve the issue if they trust your capability.

➤ Remember that problems are never random — Problems with computers happen for a reason. When you don't understand the reason, the cause may seem random, but there is always an explanation. Intermittent or infrequent problems in particular appear random; seek to identify patterns or correlating events that could lead to the circumstances that cause the problem.

➤ Avoid intolerance — Never assume that you know how to solve a problem until you have a problem description and have done some basic testing. It is not necessary to provide an instant answer; a correct answer with a short delay trumps a quick, inaccurate answer. This habit also builds your credibility with management as a reliable and capable engineer.

➤ Avoid looking for fixes — Ensure that finding the cause is your first priority! The people around you will be pressing hard for a fix or an estimated time to fix. The fix is the goal, but you must first lay the foundation by understanding the cause.

➤ Think ahead — Proactively consider potential blockers. If you may need to restore the database, start the tape retrieval process in parallel with troubleshooting. This reduces overall downtime and impact if you do need to revert to the backup.

PROFESSIONAL SQL SERVER® 2012 INTERNALS AND TROUBLESHOOTING Book Feedback

February 11, 2013

SQL Server 2012 Internals book

For the last couple of days I have been reading this book, and it is very good. We all have SQL Server books that are considered must-haves, the ones that every serious SQL Server professional displays proudly on their bookshelf, and this one falls into the same category. It clarifies the SQL Server architecture, memory, and storage very nicely, and demystifies hardware. The query processing and execution part is excellent and helps you dig out the root cause of any performance issue.

SQL Script to Update Usage for all databases and rectify inaccuracies in the catalog view

February 5, 2013
Categories: Performance Tuning