Resolution steps for the issue "No full-text supported languages found. Invalid locale ID was specified. Please verify that the locale ID is correct and the corresponding language resource has been installed."

March 31, 2015

We ran into the issue below after installing Full-Text Search; the workaround is described here.

Issue: Informational: No full-text supported languages found. Invalid locale ID was specified. Please verify that the locale ID is correct and corresponding language resource has been installed.


1st step: Run SELECT @@LANGUAGE; then run the command below:


EXEC sp_fulltext_service 'update_languages';
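To confirm the languages were registered, the installed full-text languages can be listed afterwards (a minimal check; sys.fulltext_languages is the catalog view that backs the language drop-down):

```sql
-- After update_languages, this should return a non-empty list of
-- full-text languages; an empty result reproduces the original issue.
SELECT lcid, name
FROM sys.fulltext_languages
ORDER BY name;
```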


If the issue is still not resolved, follow the steps below.

After the installation completed, I went into SQL Server Configuration Manager and, under the SQL Server 2005 Services detail pane, right-clicked SQL Server Full Text Search -> Properties.

  • Stop the service.
  • Change the Built In Account to Network Service.
  • Start the service.
  • Go back into SQL Server Management Studio.
  • Select Properties on the database -> Files -> uncheck full-text indexing, then save.
  • Go back into the same setting and re-enable it.
  • At the table level, choose Create Full-Text Index. A language can now be selected (before, the drop-down list was empty).
Categories: Full Text Catalog

Some Important Information About the Tempdb Database That Every SQL DBA Should Know

March 31, 2015

Tempdb Database:  It is a workspace for holding temporary objects or intermediate result sets.

 –       Temporary user objects that are explicitly created, such as global or local temporary tables, temporary stored procedures, table variables, and cursors.

 –       Internal objects that are created by the SQL Server Database Engine, for example, work tables to store intermediate results for spools or sorting.

 –       Row versions that are generated by data modification transactions in a database that uses read committed snapshot isolation or snapshot isolation.

 –       Row versions that are generated by data modification transactions for features such as online index operations, Multiple Active Result Sets (MARS), and AFTER triggers.
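For illustration, the explicitly created user objects listed above can be sketched as follows (the object names here are placeholders):

```sql
-- Each of these objects is materialized in Tempdb:
CREATE TABLE #orders_local (id INT);      -- local temporary table
CREATE TABLE ##orders_global (id INT);    -- global temporary table
DECLARE @orders_tv TABLE (id INT);        -- table variable
```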

 Operations within Tempdb are minimally logged. This enables transactions to be rolled back. Tempdb is re-created every time SQL Server is started so that the system always starts with a clean copy of the database. Temporary tables and stored procedures are dropped automatically on disconnect, and no connections are active when the system is shut down. Therefore, there is never anything in Tempdb to be saved from one session of SQL Server to another.


Every SQL Server has a shared database named Tempdb that is for use by temporary objects. Because there is only one Tempdb database per instance, it often proves to be a bottleneck for those systems that make heavy usage of Tempdb. Typically, this happens because of PAGELATCH, in-memory latch contention on the allocation bitmap pages inside of the data files. The allocation bitmap pages are the page free space (PFS), global allocation map (GAM), and shared global allocation map (SGAM) pages in the database. The first PFS page occupies PageID 1 of the database, the first GAM page occupies PageID 2, and the first SGAM page occupies PageID 3 in the database. After the first page, the PFS pages repeat every 8088 pages inside of the data file, the GAM pages repeat every 511,232 pages (every 3994MB known as a GAM interval), and the SGAM pages repeat every 511,232 + 1 pages in the database.

When PAGELATCH contention exists on one of the allocation bitmap pages in the database, it is possible to reduce the contention on the in-memory pages by adding additional data files with the same initial size and auto-growth configuration. This works because SQL Server uses a round-robin, proportional fill algorithm to stripe writes across the data files. When multiple data files exist for a database, writes are distributed to the files in proportion to the free space each file has relative to the total free space across all of the files, so that the files fill at the same time irrespective of their size. Each of the data files has its own set of PFS, GAM, and SGAM pages, so as the writes move from file to file the page allocations occur from different allocation bitmap pages, spreading the work across the files and reducing the contention on any individual page.
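As a sketch of the mitigation described above, additional Tempdb data files can be added with identical size and growth settings so the proportional fill stays even (file names, paths, and sizes here are placeholders; adjust them to your environment):

```sql
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf',
          SIZE = 1024MB, FILEGROWTH = 512MB);
```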


The size and physical placement of the Tempdb database can affect the performance of a system. Microsoft best practice recommends placing Tempdb on a separate drive, ideally an SSD.

Optimizing Tempdb performance also helps you avoid performance issues caused by a lack of free disk space.

There are several different published suggestions for calculating the number of files Tempdb should use for the best performance. The SQL Server Customer Advisory Team (SQLCAT) recommends that Tempdb be created with one file per physical processor core, and this tends to be one of the most commonly quoted configuration methods for Tempdb. While this recommendation is founded in practical experience, it is important to keep in mind the types of environments the SQLCAT team typically works in, which are typically the highest-volume, largest-throughput environments in the world, and therefore atypical of the average SQL Server environment. So while this recommendation might prevent allocation contention in Tempdb, it is probably overkill for most new server implementations today. Paul Randal has written about this in his blog post A SQL Server DBA myth a day: (12/30) Tempdb should always have one data file per processor core, where he suggests a figure of ¼ to ½ the number of cores in the server as a good starting point. This has typically been the configuration I have followed for a number of years when setting up new servers, and I then made a point of monitoring the allocation bitmap contention of Tempdb under the actual workload to figure out whether it was necessary to increase the number of files further.

At PASS Summit 2011, Bob Ward, a Senior Escalation Engineer in Product Support, presented a session on Tempdb and some of the changes that were coming in SQL Server 2012. As a part of this session Bob recommended that for servers with eight CPUs or less, start off with one file per CPU for Tempdb. For servers with more than eight CPUs Bob recommended to start off with eight Tempdb data files and then monitor the system to determine if PAGELATCH contention on the allocation bitmaps was causing problems or not. If allocation contention continues to exist with the eight files, Bob’s recommendation was to increase the number of files by four and then monitor the server again, repeating the process as necessary until the PAGELATCH contention is no longer a problem for the server. To date, these recommendations make the most sense from my own experience and they have been what we’ve recommended at SQLskills since Bob’s session at PASS.
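To monitor for the allocation bitmap contention mentioned above, one common approach is to look for PAGELATCH waits on Tempdb's PFS/GAM/SGAM pages (a sketch; the resource_description format is database_id:file_id:page_id, and database ID 2 is Tempdb):

```sql
-- Tasks currently waiting on tempdb pages; waits on page 1 (PFS),
-- 2 (GAM), 3 (SGAM) or their repeats indicate allocation contention.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
```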

Disk space issue at Tempdb:

Most of these issues occur because of long-running queries or reports executing on the SQL Server instance that have to store temporary data in the Tempdb database.

Determining the Amount of Free Space in Tempdb

The following query returns the total number of free pages and total free space in megabytes (MB) available in all files in Tempdb.

SELECT SUM(unallocated_extent_page_count) AS [free pages],
       (SUM(unallocated_extent_page_count) * 1.0 / 128) AS [free space in MB]
FROM sys.dm_db_file_space_usage;

Determining the Amount of Space Used by the Version Store:

The following query returns the total number of pages used by the version store and the total space in MB used by the version store in Tempdb.

SELECT SUM(version_store_reserved_page_count) AS [version store pages used],
       (SUM(version_store_reserved_page_count) * 1.0 / 128) AS [version store space in MB]
FROM sys.dm_db_file_space_usage;


Determining the Longest Running Transaction:


If the version store is using a lot of space in Tempdb, you must determine the longest-running transaction. Use this query to list the active transactions in order, by longest-running transaction.


SELECT transaction_id
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;
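If you also want to know which session owns the longest-running transaction, the transaction can be mapped back to a session via sys.dm_tran_session_transactions (a sketch):

```sql
SELECT st.session_id,
       at.transaction_id,
       at.elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
    ON at.transaction_id = st.transaction_id
ORDER BY at.elapsed_time_seconds DESC;
```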


Determining the Amount of Space Used by Internal Objects


The following query returns the total number of pages used by internal objects and the total space in MB used by internal objects in Tempdb.


SELECT SUM(internal_object_reserved_page_count) AS [internal object pages used],
       (SUM(internal_object_reserved_page_count) * 1.0 / 128) AS [internal object space in MB]
FROM sys.dm_db_file_space_usage;

Determining the Amount of Space Used by User Objects:

The following query returns the total number of pages used by user objects and the total space used by user objects in Tempdb.


SELECT SUM(user_object_reserved_page_count) AS [user object pages used],
       (SUM(user_object_reserved_page_count) * 1.0 / 128) AS [user object space in MB]
FROM sys.dm_db_file_space_usage;

Determining the Total Amount of Space (Free and Used)

The following query returns the total amount of disk space used by all files in Tempdb.

SELECT SUM(size) * 1.0 / 128 AS [size in MB]
FROM tempdb.sys.database_files;
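The same view can also give a per-file breakdown, which is useful when checking that multiple Tempdb files are sized identically:

```sql
SELECT name, physical_name,
       size * 1.0 / 128 AS [size in MB]
FROM tempdb.sys.database_files;
```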

 Monitoring Space Used by Queries

One of the most common types of Tempdb space usage problems is associated with large queries that use a large amount of space. Generally, this space is used for internal objects, such as work tables or work files. Although monitoring the space used by internal objects tells you how much space is used, it does not directly identify the query that is using that space. The following methods help identify the queries that are using the most space in Tempdb. The first method examines batch-level data and is less data intensive than the second method. The second method can be used to identify the specific query, temp table, or table variable that is consuming the disk space, but more data must be collected to obtain the answer. (If the query is not active, what you get back may not be the actual culprit.)

;WITH task_space_usage AS (
    -- SUM alloc/dealloc pages per session and request
    SELECT session_id,
           request_id,
           SUM(internal_objects_alloc_page_count) AS alloc_pages,
           SUM(internal_objects_dealloc_page_count) AS dealloc_pages
    FROM sys.dm_db_task_space_usage WITH (NOLOCK)
    WHERE session_id <> @@SPID
    GROUP BY session_id, request_id
)
SELECT TSU.session_id,
       TSU.alloc_pages * 1.0 / 128 AS [internal object MB space],
       TSU.dealloc_pages * 1.0 / 128 AS [internal object dealloc MB space],
       -- Extract the active statement from the batch text
       ISNULL(
           NULLIF(
               SUBSTRING(
                   EST.text,
                   ERQ.statement_start_offset / 2,
                   CASE WHEN ERQ.statement_end_offset < ERQ.statement_start_offset
                        THEN 0
                        ELSE (ERQ.statement_end_offset - ERQ.statement_start_offset) / 2
                   END
               ), ''
           ), EST.text
       ) AS [statement text],
       EQP.query_plan
FROM task_space_usage AS TSU
INNER JOIN sys.dm_exec_requests AS ERQ WITH (NOLOCK)
    ON  TSU.session_id = ERQ.session_id
    AND TSU.request_id = ERQ.request_id
OUTER APPLY sys.dm_exec_sql_text(ERQ.sql_handle) AS EST
OUTER APPLY sys.dm_exec_query_plan(ERQ.plan_handle) AS EQP;






Categories: TempDB

Six Basic Fears

March 29, 2015

There are six basic fears that we all suffer from. (Quote from Reach me)

The most common is the fear of going broke
The fear of criticism
Fear of failure: of not living up to your own expectations of yourself, of not being who you want to be
Fear of abandonment
Fear of losing a loved one
Fear of sickness
Fear of dying

Some Important Trace Flags in Microsoft SQL Server

October 28, 2014

A trace flag is a directive used to “set specific server characteristics or to switch off a particular behavior”. Here I have listed some important trace flags that can be useful depending on the environment.


Trace Flag 834

• Trace flag 834 allows SQL Server 2005 to use large-page allocations for the memory that is allocated for the buffer pool. 

• May prevent the server from starting if memory is fragmented and if large pages cannot be allocated

• Best suited for servers that are dedicated to SQL Server 2005

• Page size varies depending on the hardware platform

• Page size varies from 2 MB to 16 MB. 

• Improves performance by increasing the efficiency of the translation look-aside buffer (TLB) in the CPU

• Only applies to 64-bit architecture

• Startup

• Documented: KB920093

• Now automatic:

• Enterprise / Developer Edition

• “Lock Pages in Memory” privilege

• >= 8GB RAM


Trace Flag 835

• Trace flag 835 enables “Lock Pages in Memory”  support for SQL Server Standard Edition

• Enables SQL Server  to use AWE APIs for buffer pool allocation

• Avoids potential performance issues due to trimming working set

• Introduced in:

• SQL Server 2005 Service pack 3 Cumulative Update 4

• SQL Server 2008 Service Pack 1 Cumulative Update 2

• Only applies to 64-bit architecture

• Startup

• Documented: KB970070


Trace Flag 3226

• Trace flag 3226 prevents successful backup operations from being logged

• By default SQL Server logs every successful backup operation to the ERRORLOG and the System event log

• Frequent backup operations can cause log files to grow and make finding other messages harder


Documented: BOL
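Unlike the startup-only flags above, trace flag 3226 can also be enabled at runtime; a minimal example:

```sql
DBCC TRACEON (3226, -1);   -- suppress successful-backup messages globally
DBCC TRACESTATUS (-1);     -- list the trace flags currently enabled
```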


Trace Flag 806

• Trace Flag 806 enables DBCC audit checks to be performed on pages to test for logical consistency problems. 

• These checks try to detect when a read operation from a disk does not experience any errors but the read operation returns data that is not valid. 

• Pages will be audited every time that they are read from disk

• Page auditing can affect performance and should only be used in systems where data stability is in question.


Documented: KB841776


“SQL Server I/O Basics, Chapter 2” white paper


Trace Flag 818


• “Trace flag 818 enables an in-memory ring buffer that is used for tracking the last 2,048 successful write operations that are performed by the computer running SQL Server, not including sort and workfile I/Os”

• Use it to further diagnose operating system, driver, or hardware problems causing lost write conditions or stale read conditions

• May see data integrity-related error messages such as errors 605, 823, 3448.

• Documented: KB826433



Trace Flag 3422

• Trace Flag 3422 enables log record auditing

• “Troubleshooting a system that is experiencing problems with log file corruption may be easier using the additional log record audits this trace flag provides”

• “Use this trace flag with caution as it introduces overhead to each transaction log record”

• Similarly to trace flag 806, you would only use this to troubleshoot corruption problems




“SQL Server I/O Basics, Chapter 2” white paper


Trace Flag 2528


• Trace flag 2528 disables parallel checking of objects during DBCC CHECKDB, DBCC CHECKFILEGROUP, and DBCC CHECKTABLE

• Scope: Global | Local

• Documented: BOL

• Typically leave parallel DBCC checks enabled

• DBCC operations can dynamically change their degree of parallelism

• Alternatives:


• MAXDOP option

• Resource Governor



Trace Flag 1224


• Trace flag 1224 disables lock escalation based on the number of locks

• Memory pressure can still trigger lock escalation

• Database engine will escalate row or page locks to table locks

• 40% of memory available for locking

• sp_configure ‘locks’

• Non-AWE memory

• Scope: Global | Session

• Documented: BOL


Trace Flag 1211


• Trace flag 1211 disables lock escalation based on memory pressure or number of locks

• Database engine will not escalate row or page locks to table locks

• Scope: Global | Session

• Documented: BOL

• Trace flag 1211 takes precedence over 1224

• Microsoft recommends using 1224

• Trace flag 1211 prevents escalation in every case, even under memory pressure

• Helps avoid “out-of-locks” errors when many locks are being used.

• Can generate an excessive number of locks

• Can slow performance

• Can cause 1204 errors


Trace Flag 1118

• Trace flag 1118 directs SQL Server to allocate full extents to each tempdb object (instead of mixed extents)

• Less contention on internal structures such as SGAM pages

• The story has improved in subsequent releases of SQL Server, so this now represents an edge case

Scope: Global

Documented: KB328551, KB936185

Working with tempdb in SQL Server 2005 white paper


Trace Flag 4199 in SQL Server 2008

October 28, 2014

We received the below error, which indicates that a database is corrupted; in this case the alert was for Tempdb (database ID 2).



Error description: The Database ID 2, Page (6:1896), slot 0 for LOB data type node does not exist. This is usually caused by transactions that can read uncommitted data on a data page. Run DBCC CHECKTABLE.





To resolve the issue we have to enable trace flag 4199. To ensure this trace flag is always set, modify the startup properties of your SQL Server Windows service so that you specify the -T4199 parameter.
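Besides the -T4199 startup parameter, the flag can be tried first at session or global scope with DBCC TRACEON (a sketch; a global DBCC TRACEON lasts only until the next restart, which is why the startup parameter is still needed):

```sql
DBCC TRACEON (4199);        -- current session only, for testing
DBCC TRACEON (4199, -1);    -- globally, until the next restart
DBCC TRACESTATUS (4199);    -- confirm the flag's status
```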


Trace Flag 4199         /* IMPORTANT */


• Trace flag 4199 enables all the fixes that were previously made for the query processor under many trace flags

• Policy:

• Any hotfix that could potentially affect the execution plan of a query must be controlled by a trace flag

• Except for fixes to bugs that can cause incorrect results or corruption

• Helps avoid unexpected changes to the execution plan

• This means that virtually no one is running SQL Server with all the latest query processor fixes enabled

• Scope: Session | Global


Documented: KB974006

Microsoft strongly advises not enabling this trace flag unless you are affected by the issue it addresses.

Physical to Virtual OS Migration along with SQL server database

October 28, 2014

Physical to Virtual OS Migration

Step 1: Prepare the Source System

Although VSMT doesn’t modify the source system, I recommend that you follow the best practice of backing up the source system before you start the P2V migration process. In addition, disable any drivers or applications that are specific to the physical hardware and that won’t be available in the virtual machine environment.

Step 2: Prepare the MobileP2V Server

VSMT includes a tool called GatherHW.exe that collects the physical hardware information on the source server and creates an XML configuration file you can use to analyze the source server for any known hardware incompatibilities in the source system (dynamic disks, more than 3.6 GB RAM, unsupported devices, and so forth). To run GatherHW.exe, you must copy it to the source system. I recommend creating a share called VSMT on the MobileP2V server in the VSMT installation folder, which is by default C:\Program Files\Microsoft VSMT. You’ll also need a place to store the XML files that GatherHW.exe produces, so create a directory called C:\P2VSource on MobileP2V and share it as P2VSource, specifying local administrator write permissions.

Here’s a quick summary of the MobileP2V server drive configurations you’ll be using: C drive (operating system), D drive (ADS image files), and E drive (virtual machine storage).

Step 3: Gather the Configuration Information

Once you’ve created the shares on the MobileP2V server, log on to Testserver as the local administrator. Then, create a directory called C:\VSMT, and map a network drive to \\MobileP2V\VSMT. Copy GatherHW.exe to C:\VSMT. Double-click GatherHW.exe on the source system to collect the configuration information. GatherHW.exe creates an XML file with the name of the source system (e.g., Testserver.xml) in the directory. Copy the XML file to \\MobileP2V\P2VSource.

Step 4: Validate the Configuration Information

After collecting the configuration information from Testserver with GatherHW.exe, use VMScript.exe (which was installed on Mobile P2V as part of VSMT) to validate the data. To run VMScript.exe against the XML file, log on to the MobileP2V server and open a command prompt. Change directory to C:\Program Files\Microsoft VSMT. In the command window, execute the VMScript by typing:

VMScript.exe /hwvalidate /hwinfofile:"C:\P2VSource\Testserver.xml"

VMScript analyzes the XML file and reports any errors or configuration issues with the source hardware. (Note that some server hardware, such as special add-in boards, USB-attached devices, and fibre channel host bus adapters, won’t work on virtual machines.)

Examine the VMScript output for any issues, warnings, or errors. Use Vmpatch.exe to correct any issues and copy any missing system files, service packs, or hotfix files before continuing. If you receive the following error regarding missing Windows Server 2003 Service Pack 2 (SP2) files, see the sidebar, “Adding Windows Server 2003 SP2 Support to the VSMT Patch Directory,” for how to update the patch cache with Windows 2003 SP2 drivers.

Error: Cannot find patch files for the operating system/service pack level in the C:\Program Files\Microsoft VSMT\Patches Source\5.2.3790\SP2 directory.

Step 5: Generate the Migration Scripts

After you’ve resolved any issues with the Testserver configuration and you’ve rerun VMScript until there are no blocking issues, generate the migration scripts. These scripts control disk image capture, virtual machine creation, and disk image deployment to the virtual machine. To generate the migration script, run VMScript with the following syntax:

VMScript /hwgeneratep2v /hwinfofile:"path\Source.xml" /name:vm_name /vmconfigpath:"vm path" /virtualDiskPath:"vm path" /hwdestvs:controller_server

In this script, path\Source.xml is the path to the XML configuration file (C:\P2VSource\TestServer.xml), vm_name is the name to assign to the virtual machine in the Virtual Server console (TESTMIGRATION), vm path is the location where you want the .vmc and the .vhd files to be stored on the specified host (E:\VMs), and controller_server is the name of the Virtual Server host (MobileP2V).

By default, the migration scripts are configured to create fixed-size virtual hard disks. If the physical disks on the source system have an extensive amount of unallocated free space or you don’t want to use fixed-size virtual hard disks, execute VMScript with the /virtualDiskDynamic option. This option also speeds up the virtual machine creation process. If you use /virtualDiskDynamic the command line looks like:

VMScript /hwgeneratep2v /hwinfofile:"C:\P2VSource\TestServer.xml" /name:TESTMIGRATION /vmconfigpath:"E:\VMs" /virtualDiskPath:"E:\VMs" /hwdestvs:MOBILEP2V /virtualDiskDynamic

VMScript.exe generates the migration scripts in a subdirectory, C:\Program Files\Microsoft VSMT\p2v\TESTMIGRATION. Execute the VMScript command line, and you’ll see the output shown in Figure 1. VMScript creates 12 output files that are used during the migration process. The readme file, TestMigration_P2V_Readme.txt, provides information about script creation and driver issues. The three XML files contain information used during the migration about the hard disk and driver configuration. The TestMigration_boot.ini file is a copy of the boot.ini information from the source machine. You’ll execute three scripts directly during the migration process: TestMigration_Capture.cmd captures the source disk drives into ADS images, TestMigration_CreateVM.cmd creates the target virtual machine using the source configuration information, and TestMigration_DeployVM.cmd images the captured source disk images to the target VM drives.

VMScript also creates a subdirectory called Patches. It is automatically populated with known patches that you’ll need to install.

Step 6: Load the Required Drivers into ADS

When VMScript validates the source system configuration information, it doesn’t validate that all the required drivers are installed in the ADS file cache. The most important driver to install is the source system network card. Without this driver, the source server can’t be captured. Download the latest network interface card drivers for the source system to a temporary directory on MobileP2V. Copy the driver files into C:\Program Files\Microsoft ADS\NBS Repository\User\PreSystem. When you copy the network interface card driver files into the ADS file cache, don’t create any subdirectories or include Txtsetup.oem files. The subdirectories aren’t needed because the driver files must be placed directly in the PreSystem directory, and the Txtsetup.oem file isn’t used.

After you’ve copied the files, restart the ADS Builder service so that it finds the new drivers. Open a command window and type

net stop adsbuilder

Then press Enter. Type

net start adsbuilder

Then press Enter.

Step 7: Capture the Testserver System Disk

Now you’re ready to capture the Testserver system disk images. The TestMigration_Capture.cmd migration script executes and leverages ADS to capture each disk image sequentially. Log on to MobileP2V as local administrator and follow these steps to start the disk image capture process of Testserver. Open a command window and change directories to C:\Program Files\Microsoft VSMT\p2v\TestMigration. Execute the TestMigration_Capture.cmd script. When prompted, log on to the source server, Testserver, restart it, and boot it to the Pre-execution Environment (PXE) interface.

ADS takes control of the source system and boots it into the Deployment Agent to initiate the disk image capture. To follow the progress of each disk image capture, you can use the Automated Deployment Service MMC snap-in on the Controller server. In the MMC snap-in, go to Devices, Running Jobs, then double-click the running job, as shown in Figure 2. Image captures can take a while depending on the size and number of the disks. If the server has a slow network interface, consider updating the interface card to a faster card connected to a faster port to reduce the transfer time. When the image captures are complete, ADS shuts down and removes the source system from the device database. The last task before the script terminates is changing the system file attributes.

Step 8: Create the Virtual Machine

Before you migrate the captured disk images, you must create the virtual machine and configure it with the same memory, disk, and network configuration as the physical machine. The TestMigration_CreateVM.cmd script (one of the scripts that VMScript generates) automates this for you. To launch the script, open a command window and change directories to C:\Program Files\Microsoft VSMT\p2v\TestMigration. Execute the TestMigration_CreateVM.cmd script. The script creates a new virtual machine configuration file E:\VMs\TestMigration\TestMigration.vmc, registers the virtual machine, connects the virtual machine to the default virtual network VM0, creates and attaches the virtual hard disks (VHDs) to the virtual machine, and attaches a Remote Installation Services (RIS) virtual floppy disk to the virtual floppy drive. If you get this error

Error: System.IO.FileLoadException: The located assembly’s manifest definition with the name ‘Microsoft.VirtualServer.Interop’ does not match the assembly reference.

then the MobileP2V server is running Virtual Server 2005 R2 Service Pack 1 (SP1). VSMT 1.1 is compatible with Virtual Server 2005 R2 but not with Virtual Server 2005 R2 SP1. Refer to the sidebar, “Why VSMT 1.1 Doesn’t Support Virtual Server 2005 R2 SP1,” for more information on how to resolve this issue.

When all these tasks are complete, check the ADS device database using the ADS MMC snap-in. The virtual machine should have been added to the ADS device database and set to boot to the Deployment Agent.

Step 9: Deploy the ADS Disk Images to the TestMigration Virtual Machine

After the virtual machine is created, the source server disk images must be restored. TestMigration_DeployVM.cmd controls this part of the migration procedure. To restore the source disk images and deploy the virtual machine, go to C:\Program Files\Microsoft VSMT\p2v\TestMigration and execute the TestMigration_DeployVM.cmd script.


To follow the progress of the virtual machine deployment, you can use the Virtual Server 2005 R2 Administration Website on the Controller server. You’ll see the virtual machine boot into the Deployment Agent and the disk images restore to the virtual hard disks, as shown in Figure 4. The hardware-dependent system files are then swapped for virtual machine-compatible versions, and required operating system configuration settings are applied.

If you use the MMC snap-in to check the ADS device database, you’ll see that the virtual machine is still in the device database. The TestMigration_DeployVM.cmd script terminates after removing the RIS virtual floppy disk from the virtual machine. The virtual machine remains booted in the Deployment Agent.

Step 10: Complete the Migration Process

Before you complete the source system to virtual machine migration process, perform a few final cleanup tasks. The TestMigration virtual machine is still booted into the Deployment Agent, so you need to reboot it: Open the ADS MMC snap-in, select and right-click the TestMigration device, then select run job. A New Job wizard launches. Click Next. Select to create a one-time job, and click Next. Then click Next to skip the description screen. Select Internal command, and click Next. Select \bmonitor reboot, and click Next. Click Finish to reboot the TestMigration VM.

Once the machine is rebooted, release control of the device, and delete the virtual machine from the device database. Log on to the virtual machine, and install the Virtual Machine Additions to get keyboard and mouse integration and better performance. Complete any remaining configuration modifications, and test the virtual machine connectivity and performance to ensure that it’s running as expected. Once the virtual machine testing is complete, migrate TestMigration from the MobileP2V solution to the production Virtual Server host. Once you do that, you can back up and delete the source system disk images from the ADS image store.

Once the virtual server is ready, the SQL Server data does not need to be migrated; you only need to change the LUN mapping and attach the LUNs as physical RDMs to the virtual machine.


Citagus–SQL Server Database Administration Memorandum

As a Microsoft Certified Partner, we are a complete end-to-end solution for our customers who are purchasing, implementing, monitoring, and maintaining medium to large SQL Server environments. Our team is comprised of proven successful professionals who have honed their skills by delivering results and being held accountable for those results in a corporate setting. We are able to provide superior resources and customer service to each and every customer, 24 hours a day, 7 days a week.

We are also proud of the fact that we have created true lasting partnerships with our customers. It is a value of our organization and a key ingredient of what makes us successful in what we do. As we work to serve our customers, build strong relationships, and deliver innovative and creative solutions to their business challenges, we rely on our values to guide our decisions and actions, ensuring everything we do is beneficial to the client and the organization.

We believe our commitment is not only to ensure that we continue to live up to the standards we have set, but also to advance our awareness of our customers’ needs to even higher levels. By leveraging the experience we have gained assisting our clients, we deliver high-value professional services, become a long-term strategic partner, and contribute to the overall success of our clients.

Our Value Proposition:

Low-Cost & High Quality Solutions

Our time-tested methodology enables us to deliver rapid solutions in a cost-effective way. One way we achieve significant cost efficiencies is from our ability to bring you the right solutions well ahead of your schedule. Our flexibility in meeting your add-on business needs translates into a low total cost of ownership for you. And by choosing a consulting partner that has had consistent success in delivering projects, you minimize the risk as well.

Un-Paralleled Industry Experience

Our DBA team has both SQL Certified Masters and SQL Certified Professionals with around 15 years of industry experience in handling mission-critical databases, ensuring that your databases are in safe hands.


We are truly flexible and willing to face any SQL Server challenge. Our DBA consultants are not limited to DBA work and provide support for any kind of SQL Server task, whether it is a high availability, SSRS, or SSAS issue. Our consultants are also certified in SQL Server BI, which is why we are able to figure out the root cause easily.

It isn’t our goal to be the best, it is our commitment. We take your vision and deliver it on time, under budget, and to perfection. We have consistently played a key role in leading successful change in DBA administration projects that drive business growth and improve business processes. Our strengths include an extensive database administration background, with project management and ITIL experience, that focuses on results. I believe these qualities, along with my team’s experience and proven capabilities, make us an excellent contributor.

As the Product Solution Manager of Citagus, I sincerely thank you for giving us the opportunity to earn your business. I look forward to a long and fruitful relationship. Please contact me if you are looking for any sort of assistance on the SQL Server DBA front.