Archive

Archive for the ‘SQL server DBA administration’ Category

Hosting DBA service with SQL Server 2014

March 19, 2017

The database industry has seen significant change over the last five years, from the introduction of technologies like clustering, big data, and in-memory data platforms to popular open source SQL alternatives such as Percona Server and MariaDB making their way into mainstream technology stacks. Even with these new technologies becoming increasingly common, many enterprises still trust their mission-critical database operations to Microsoft SQL Server. Recently, Microsoft released the latest iteration of the technology, Microsoft SQL Server 2014, with notable advancements in performance, including in-memory functionality, along with added security and high availability features.

 

In-Memory OLTP

As one of the first to market with support for this new functionality, SmartERP is focused on increasing database performance by allowing selected data to be queried and processed in RAM instead of on constrained disk resources. Previous in-memory technology forced users to load ALL information into RAM, making large data sets problematic. Now, database administrators can choose which tables to process in memory, giving those operations priority access to valuable processing resources. By speeding up overall query performance across a database environment, users can effectively do more with fewer resources, which reduces the need to scale into expensive hardware solutions. SmartERP teams are already helping users plan their migration to SQL Server 2014 by evaluating target data sets to load into memory.
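For illustration, here is a minimal T-SQL sketch of how a DBA might designate a single table as memory-optimized in SQL Server 2014; the database name, file path, table and bucket count are hypothetical placeholders:

-- Hypothetical example: add a memory-optimized filegroup and container to an existing database
ALTER DATABASE SalesDB ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB ADD FILE (NAME = 'SalesDB_mod', FILENAME = 'C:\SQLData\SalesDB_mod')
    TO FILEGROUP SalesDB_mod;
GO

-- Create one hot table in memory; SCHEMA_AND_DATA keeps its contents durable
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT       NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId  INT       NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);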

High Availability

While replication and failover have long been part of any core strategy for business continuity, achieving true high availability for a database environment has been problematic. Organizations often must decide what level of inconsistency between database environments is acceptable: hours or days? However, with the AlwaysOn functionality, and strong connections between multi-cloud datacenters including those belonging to SmartERP and Microsoft Azure, users can now implement highly resilient database environments that protect against both database operational failure and datacenter outages. Even though SmartERP has an industry-leading uptime record and a 100 percent uptime SLA, we want our users to meet their unique availability requirements, which may include a highly available database environment. We can help you meet those requirements with a team of DBAs who have successfully implemented high availability for many customers. Check out this report on our support of AlwaysOn for more information.
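As a rough illustration of what an AlwaysOn deployment involves, the sketch below creates an availability group with a synchronous on-premises replica and an asynchronous replica in a second datacenter; all server, endpoint, and database names are hypothetical, and the mirroring endpoints are assumed to be configured already:

-- Hypothetical sketch: run on the primary replica once endpoints exist on both servers
CREATE AVAILABILITY GROUP [AG_Orders]
FOR DATABASE [OrdersDB]
REPLICA ON
    N'ONPREM-SQL1' WITH (
        ENDPOINT_URL = N'TCP://onprem-sql1.example.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'DR-SQL1' WITH (
        ENDPOINT_URL = N'TCP://dr-sql1.example.local:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL);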

DBA Services

Recently, we shared with you the economic benefits of our DBA services offering, including a full-time DBA available 24×7 at a fraction of the cost of an in-house resource. We have seen growing interest in our DBA coverage and released a tiered offering to accommodate a variety of supplemental DBA needs. Some organizations rely solely on our DBA services to ensure database functionality, while others have formed close partnerships between their core database team and our DBA service. This allows our customers to focus on important development and optimization efforts without the burden of maintaining and monitoring the health of the existing database. Moving between database versions requires thought and planning, and our DBA team is helping customers understand the scope of work and code changes required not only to migrate but also to take advantage of new functionality. Our partnership with Microsoft ensures that our DBA Services team is ready to address even the most challenging migration scenarios.

Flexible Licensing

At SmartERP, customers can consume fully managed SQL Server application licensing along with support at a low monthly cost. This allows users to move between SQL Server versions and hardware profiles without needing to revisit their software license agreement. We want to make sure you always have the best combination of optimized hardware and SQL Server version to meet your objectives.

Hybrid Potential

Lastly, while many web applications are moving to the cloud, many database users still require the performance consistency and control of a dedicated server. Users partnering with a single cloud or managed hosting provider are often forced to make sacrifices: either not moving applications and web assets to the cloud because of database restrictions, or sacrificing database performance and reliability by trying to run the database in a utility cloud service. With a Managed Private Cloud powered by Microsoft Cloud Platform and its built-in Database-as-a-Service capability, customers get a fully managed, physically isolated, private cloud environment in which they can programmatically spin up SQL Server 2014 databases on demand. Customers can also strengthen their high availability and disaster recovery capabilities by choosing from a number of available destinations, including Microsoft Azure, as a designated disaster recovery location.
SmartERP can bridge public and private clouds running SQL Server over a strong network connection, allowing users to leverage both cloud and dedicated server resources in a single environment. Dominoes UK spoke about their use of hybrid hosting to serve web content from the cloud while still running a reliable database on dedicated hardware. You can read more about that here.

SmartERP DBA services professionals are ready to help you get up to speed on the new features in Microsoft SQL Server 2014. If you want to know more, please download our whitepaper and visit go.SmartERP.com/SQL2014.

SQL Server DBA Support

March 10, 2017


 

  • Proactive problem management
  • Predictive capacity analysis
  • Intuitive event and incident correlation

Apart from this, our service catalogue comprises regular DBA activities like Monitoring, Backups, Space Management, Resource Utilization, License Rationalization, Patch Management, Database Vulnerability Assessment, Object Migration and similar tasks (a sketch of a couple of typical routine checks follows the list below). Other activities include:

  • Manage hierarchical security (access control, permissions, encryption keys and certificates) for databases and ensure data integrity using templates and schemas
  • Test new releases and features for application, OS and database upgrades; troubleshoot application performance; and optimize query and index performance
  • Design and test scale-out architectures using replication, DTC/2PC and distributed databases, with data, middle, web and client tiers, to provide a high-availability environment with read-only copies for Reporting Services and read-write hot copies of critical databases for instant disaster recovery
  • Enhance performance, scalability and usability across multiple cores, with support that efficiently scales compute, networking and storage in both physical and virtual environments
  • Establish and maintain policies, standard operating procedures and associated documentation for user interaction with the database environment
  • Support on-premises to Microsoft Azure IaaS and PaaS, Amazon RDS, and hybrid scenarios with SSIS, Stretch Database and Power BI against on-premises data
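For illustration only, the routine monitoring and space management work listed above often boils down to simple checks against the standard system views, for example:

-- Last successful full backup per database (from the msdb backup history)
SELECT d.name, MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name AND b.type = 'D'
GROUP BY d.name;

-- Data and log file sizes for the current database, in MB
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;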

Revenue and Risk Reward Program:

 

  • SLAs based on risk/reward, for example availability of 99.95%
  • 10% cost reduction year over year during the contractual period
  • Technology consumption management to ensure the customer uses the minimum necessary licenses/servers, reducing OPEX costs

SmartERP SQL Server Portfolio

Six Basic Fears

March 29, 2015

There are six basic fears that we all suffer from. (Quote from Reach me)

The most common is the fear of going broke
The fear of criticism
Fear of failure: not living up to your own expectations, of not being who you want to be
Fear of abandonment
Fear of losing a loved one
Fear of sickness
Fear of dying

Some Important Trace Flags in Microsoft SQL Server

October 28, 2014

A trace flag is a directive used to “set specific server characteristics or to switch off a particular behavior”. Here I have listed some important trace flags which can be useful depending on the environment.
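Before the individual flags, here is a minimal sketch of the usual ways a trace flag is enabled and verified (trace flag 3226 is used purely as an example; startup-only flags such as 834 must instead be added as -T parameters to the SQL Server service startup options):

-- Enable a trace flag for the current session only
DBCC TRACEON (3226);

-- Enable it globally for all sessions (the -1 argument)
DBCC TRACEON (3226, -1);

-- List all trace flags currently active on the instance
DBCC TRACESTATUS (-1);

-- Turn the flag back off
DBCC TRACEOFF (3226, -1);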

 

Trace Flag 834

• Trace flag 834 allows SQL Server 2005 to use large-page allocations for the memory that is allocated for the buffer pool. 

• May prevent the server from starting if memory is fragmented and if large pages cannot be allocated

• Best suited for servers that are dedicated to SQL Server 2005

• Page size varies depending on the hardware platform

• Page size varies from 2 MB to 16 MB. 

• Improves performance by increasing the efficiency of the translation look-aside buffer (TLB) in the CPU

• Only applies to 64-bit architecture

• Startup

• Documented: KB920093

• Now automatic:

• Enterprise / Developer Edition

• “Lock Pages in Memory” privilege

• >= 8GB RAM

 

Trace Flag 845

• Trace flag 845 enables “Lock Pages in Memory” support for SQL Server Standard Edition

• Enables SQL Server to use AWE APIs for buffer pool allocation

• Avoids potential performance issues due to trimming working set

http://msdn.microsoft.com/en-us/library/ms187499.aspx

• Introduced in:

• SQL Server 2005 Service pack 3 Cumulative Update 4

• SQL Server 2008 Service Pack 1 Cumulative Update 2

• Only applies to 64-bit architecture

• Startup

• Documented: KB970070

 

Trace Flag 3226

• Trace flag 3226 prevents successful backup operations from being logged

• By default SQL Server logs every successful backup operation to the ERRORLOG and the System event log

• Frequent backup operations can cause log files to grow and make finding other messages harder

 

Documented: BOL

 

Trace Flag 806

• Trace Flag 806 enables DBCC audit checks to be performed on pages to test for logical consistency problems. 

• These checks try to detect when a read operation from a disk does not experience any errors but the read operation returns data that is not valid. 

• Pages will be audited every time that they are read from disk

• Page auditing can affect performance and should only be used in systems where data stability is in question.

 

Documented: KB841776

 

“SQL Server I/O Basics, Chapter 2” white paper

http://technet.microsoft.com/en-au/library/cc917726.aspx

 

Trace Flag 818

 

• “Trace flag 818 enables an in-memory ring buffer that is used for tracking the last 2,048 successful write operations that are performed by the computer running SQL Server, not including sort and workfile I/Os”

• Use to further diagnose operating system, driver, or hardware problems causing lost write conditions or stale read conditions

• May see data integrity-related error messages such as errors 605, 823, 3448.

• Documented: KB826433

 

 

Trace Flag 3422

• Trace Flag 3422 enables log record auditing

• “Troubleshooting a system that is experiencing problems with log file corruption may be easier using the additional log record audits this trace flag provides”

• “Use this trace flag with caution as it introduces overhead to each transaction log record”

• Similarly to trace flag 806, you would only use this to troubleshoot corruption problems

 

Documented: “SQL Server I/O Basics, Chapter 2” white paper

http://technet.microsoft.com/en-au/library/cc917726.aspx

 

Trace Flag 2528

 

• Trace flag 2528 disables parallel checking of objects during DBCC CHECKDB, DBCC CHECKFILEGROUP and DBCC CHECKTABLE.

• Scope: Global | Local

• Documented: BOL

• Typically leave parallel DBCC checks enabled

• DBCC operations can dynamically change their degree of parallelism

• Alternatives:

 

• MAXDOP option

• Resource Governor

 

 

Trace Flag 1224

 

• Trace flag 1224 disables lock escalation based on the number of locks

• Memory pressure can still trigger lock escalation: the database engine escalates row or page locks to table locks when the memory used by locks exceeds 40% of the memory available for locking (non-AWE memory, or the limit configured with sp_configure ‘locks’)

• Scope: Global | Session

• Documented: BOL

 

Trace Flag 1211

 

• Trace flag 1211 disables lock escalation based on memory pressure or number of locks

• Database engine will not escalate row or page locks to table locks

• Scope: Global | Session

• Documented: BOL

• Trace flag 1211 takes precedence over 1224

• Microsoft recommends using 1224 instead (see the sketch below)

• Trace flag 1211 prevents escalation in every case, even under memory pressure

• Helps avoid “out-of-locks” errors when many locks are being used

• Can generate an excessive number of locks

• Can slow performance

• Can cause 1204 errors
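To make that recommendation concrete, here is a minimal sketch of enabling 1224 globally and inspecting the ‘locks’ setting mentioned above (it is an advanced sp_configure option, so advanced options are exposed first):

-- Enable trace flag 1224 for all sessions
DBCC TRACEON (1224, -1);

-- Inspect the 'locks' configuration that bounds lock memory (0 = dynamic, the default)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'locks';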

 

Trace Flag 1118

• Trace flag 1118 directs SQL Server to allocate full extents to each tempdb object (instead of mixed extents)

• Less contention on internal structures such as SGAM pages

• The situation has improved in subsequent releases of SQL Server, so this now represents an edge case

Scope: Global

Documented: KB328551, KB936185

“Working with tempdb in SQL Server 2005” white paper

http://www.microsoft.com/technet/prodtechnol/sql/2005/workingwithtempdb.mspx

Trace Flag 4199 in SQL Server 2008

October 28, 2014

We received the error below, which indicates that a database is corrupted; in this case the alert was raised for tempdb.

 

 

Error description: The Database ID 2, Page (6:1896), slot 0 for LOB data type node does not exist. This is usually caused by transactions that can read uncommitted data on a data page. Run DBCC CHECKTABLE.

 

Workaround:

 

 

To resolve the issue we have to enable trace flag 4199. To ensure this trace flag is always set, modify the startup properties of your SQL Server Windows service so that you specify the -T4199 parameter, as described here: http://support.microsoft.com/kb/2801413
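Assuming the startup parameter has been added, a quick way to confirm the flag is active (or to enable it at runtime for an immediate test) is:

-- Check whether trace flag 4199 is enabled for the instance
DBCC TRACESTATUS (4199, -1);

-- Enable it globally at runtime; the -T4199 startup parameter makes it persistent across restarts
DBCC TRACEON (4199, -1);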

 

Trace Flag 4199 /* IMPORTANT */

 

• Trace flag 4199 enables all the fixes that were previously made for the query processor under many trace flags

• Policy:

• Any hotfix that could potentially affect the execution plan of a query must be controlled by a trace flag

• Except for fixes to bugs that can cause incorrect results or corruption

• Helps avoid unexpected changes to the execution plan

• This means that, by default, virtually no one is running SQL Server with all of the latest query processor fixes enabled

• Scope: Session | Global

 

Documented: KB974006

Microsoft strongly advises not enabling this trace flag unless you are affected by the specific issues it addresses.

Physical to Virtual OS Migration along with SQL Server database

October 28, 2014

Physical to Virtual OS Migration

Step 1: Prepare the Source System

Although VSMT doesn’t modify the source system, I recommend that you follow the best practice of backing up the source system before you start the P2V migration process. In addition, disable any drivers or applications that are specific to the physical hardware and that won’t be available in the virtual machine environment.

Step 2: Prepare the MobileP2V Server

VSMT includes a tool called GatherHW.exe that collects the physical hardware information on the source server and creates an XML configuration file you can use to analyze the source server for any known hardware incompatibilities in the source system (dynamic disks, more than 3.6 GB RAM, unsupported devices, and so forth). To run GatherHW.exe, you must copy it to the source system. I recommend creating a share called VSMT on the MobileP2V server in the VSMT installation folder, which is by default C:\Program Files\Microsoft VSMT. You’ll also need a place to store the XML files that GatherHW.exe produces, so create a directory called C:\P2VSource on MobileP2V and share it as P2VSource, specifying local administrator write permissions.

Here’s a quick summary of the MobileP2V server drive configuration you’ll be using: C drive (operating system), D drive (ADS image files), and E drive (virtual machine storage).

Step 3: Gather the Configuration Information

Once you’ve created the shares on the MobileP2V server, log on to Testserver as the local administrator. Then, create a directory called C:\VSMT, and map a network drive to \\MobileP2V\VSMT. Copy GatherHW.exe to C:\VSMT. Double-click GatherHW.exe on the source system to collect the configuration information. GatherHW.exe creates an XML file with the name of the source system (e.g., Testserver.xml) in the directory. Copy the XML file to \\MobileP2V\P2VSource.

Step 4: Validate the Configuration Information

After collecting the configuration information from Testserver with GatherHW.exe, use VMScript.exe (which was installed on MobileP2V as part of VSMT) to validate the data. To run VMScript.exe against the XML file, log on to the MobileP2V server and open a command prompt. Change directory to C:\Program Files\Microsoft VSMT. In the command window, execute VMScript by typing:

VMScript.exe /hwvalidate /hwinfofile:"C:\P2VSource\Testserver.xml"

VMScript analyzes the XML file and reports any errors or configuration issues with the source hardware. (Note that some server hardware, such as special add-in boards, USB-attached devices, and fiber channel host bus adapters, won’t work on virtual machines.)

Examine the VMScript output for any issues, warnings, or errors. Use Vmpatch.exe to correct any issues and copy any missing system files, service packs, or hotfix files before continuing. If you receive the following error regarding missing Windows Server 2003 Service Pack 2 (SP2) files, see the sidebar, “Adding Windows Server 2003 SP2 Support to the VSMT Patch Directory,” for how to update the patch cache with Windows 2003 SP2 drivers.

Error: Cannot find patch files for the operating system/service pack level in the C:\Program Files\Microsoft VSMT\Patches\Source\5.2.3790\SP2 directory.

Step 5: Generate the Migration Scripts

After you’ve resolved any issues with the Testserver configuration and you’ve rerun VMScript until there are no blocking issues, generate the migration scripts. These scripts control disk image capture, virtual machine creation, and disk image deployment to the virtual machine. To generate the migration script, run VMScript with the following syntax:

VMScript /hwgeneratep2v /hwinfofile:"path\Source.xml" /name:vm_name /vmconfigpath:"vm path" /virtualDiskPath:"vm path" /hwdestvs:controller_server

In this script, path\Source.xml is the path to the XML configuration file (C:\P2VSource\TestServer.xml), vm_name is the name to assign to the virtual machine in the Virtual Server console (TESTMIGRATION), vm path is the location where you want the .vmc and the .vhd files to be stored on the specified host (E:\VMs), and controller_server is the name of the Virtual Server host (MobileP2V).

By default, the migration scripts are configured to create fixed-size virtual hard disks. If the physical disks on the source system have an extensive amount of unallocated free space, or you don’t want to use fixed-size virtual hard disks, execute VMScript with the /virtualDiskDynamic option. This option also speeds up the virtual machine creation process. If you use /virtualDiskDynamic, the command line looks like:

VMScript /hwgeneratep2v /hwinfofile:"C:\P2VSource\TestServer.xml" /name:TESTMIGRATION /vmconfigpath:"E:\VMs" /virtualDiskPath:"E:\VMs" /hwdestvs:MOBILEP2V /virtualDiskDynamic

VMScript.exe generates the migration scripts in a subdirectory, C:\Program Files\Microsoft VSMT\p2v\TESTMIGRATION. Execute the VMScript command line, and you’ll see the output shown in Figure 1. VMScript creates 12 output files that are used during the migration process. The readme file, TestMigration_P2V_Readme.txt, provides information about script creation and driver issues. The three XML files contain information used during the migration about the hard disk and driver configuration. The TestMigration_boot.ini file is a copy of the boot.ini information from the source machine. You’ll execute three scripts directly during the migration process: TestMigration_Capture.cmd captures the source disk drives into ADS images, TestMigration_CreateVM.cmd creates the target virtual machine using the source configuration information, and TestMigration_DeployVM.cmd images the captured source disk images to the target VM drives.

VMScript also creates a subdirectory called Patches. It is automatically populated with known patches that you’ll need to install.

Step 6: Load the Required Drivers into ADS

When VMScript validates the source system configuration information, it doesn’t validate that all the required drivers are installed in the ADS file cache. The most important driver to install is the source system network card. Without this driver, the source server can’t be captured. Download the latest network interface card drivers for the source system to a temporary directory on MobileP2V. Copy the driver files into C:\Program Files\Microsoft ADS\NBS Repository\User\PreSystem. When you copy the network interface card driver files into the ADS file cache, don’t create any subdirectories or include Txtsetup.oem files. The subdirectories aren’t needed because the driver files must be placed directly in the PreSystem directory, and the Txtsetup.oem file isn’t used.

After you’ve copied the files, restart the ADS Builder service so that it finds the new drivers. Open a command window and type

net stop adsbuilder

Then press Enter. Type

net start adsbuilder

Then press Enter.

Step 7: Capture the Testserver System Disk

Now you’re ready to capture the Testserver system disk images. The TestMigration_Capture.cmd migration script executes and leverages ADS to capture each disk image sequentially. Log on to MobileP2V as local administrator and follow these steps to start the disk image capture process of TestServer. Open a command window and change directories to C:\Program Files\Microsoft VSMT\p2v\TestMigration. Execute the TestMigration_Capture.cmd script. When prompted, log on to the source server, Testserver, restart it, and boot it to the Pre-execution Environment (PXE) interface.

ADS takes control of the source system and boots it into the Deployment Agent to initiate the disk image capture. To follow the progress of each disk image capture, you can use the Automated Deployment Service MMC snap-in on the Controller server. In the MMC snap-in, go to Devices, Running Jobs, then double-click on the running job, as shown in Figure 2. Image captures can take a while depending on the size and number of the disks. If the server has a slow network interface, consider updating the interface card to a faster card connected to a faster port to reduce the transfer time. When the image captures are complete, ADS shuts down and removes the source system from the device database. The last task before the script terminates is changing system file attributes.

Step 8: Create the Virtual Machine

Before you migrate the captured disk images, you must create the virtual machine and configure it with the same memory, disk, and network configuration as the physical machine. The TestMigration_CreateVM.cmd script (one of the scripts that VMScript generates) automates this for you. To launch the script, open a command window and change directories to C:\Program Files\Microsoft VSMT\p2v\TestMigration. Execute the TestMigration_CreateVM.cmd script. The script creates a new virtual machine configuration file E:\VMS\TestMigration\TestMigration.vmc, registers the virtual machine, connects the virtual machine to the default virtual network VM0, creates and attaches the virtual hard disks (VHDs) to the virtual machine, and attaches a Remote Installation Services (RIS) virtual floppy disk to the virtual floppy drive. If you get this error

Error: System.IO.FileLoadException: The located assembly’s manifest definition with the name ‘Microsoft.VirtualServer.Interop’ does not match the assembly reference.

then the MobileP2V server is running Virtual Server 2005 R2 Service Pack 1 (SP1). VSMT 1.1 is compatible with Virtual Server 2005 R2 but not Virtual Server 2005 R2 SP1. Refer to the sidebar, “Why VSMT 1.1 Doesn’t Support Virtual Server 2005 R2 SP1,” for more information on how to resolve this issue.

When all these tasks are complete, check the ADS device database using the ADS MMC snap-in. The virtual machine should have been added to the ADS device database and set to boot to the Deployment Agent.

Step 9: Deploy the ADS Disk Images to the TestMigration Virtual Machine

After the virtual machine is created, the source server disk images must be restored. TestMigration_DeployVM.cmd controls this part of the migration procedure. To restore the source disk images and deploy the virtual machine, go to C:\Program Files\Microsoft VSMT\p2v\TestMigration and execute

TestMigration_DeployVM.cmd

To follow the progress of the virtual machine deployment, you can use the Virtual Server 2005 R2 Administration Website on the Controller server. You’ll see the virtual machine boot into the Deployment Agent and the disk images restore to the virtual hard disks, as shown in Figure 4. The hardware-dependent system files are then swapped for virtual machine-compatible versions, and required operating system configuration settings are applied.

If you use the MMC snap-in to check the ADS device database, you’ll see that the virtual machine is still in the device database. The TestMigration_DeployVM.cmd script terminates after removing the RIS virtual floppy disk from the virtual machine. The virtual machine remains booted in the Deployment Agent.

Step 10: Complete the Migration Process

Before you complete the source system to virtual machine migration process, perform a few final cleanup tasks. The TestMigration virtual machine is still booted into the Deployment Agent, so you need to reboot it: Open the ADS MMC snap-in, select and right-click the TestMigration device, then select run job. A New Job wizard launches. Click Next. Select to create a one-time job, and click Next. Then click Next to skip the description screen. Select Internal command, and click Next. Select \bmonitor reboot, and click Next. Click Finish to reboot the TestMigration VM.

Once the machine is rebooted, release control of the device, and delete the virtual machine from the device database. Log on to the virtual machine, and install the Virtual Machine Additions to get keyboard and mouse integration and better performance. Complete any remaining configuration modifications, and test the virtual machine connectivity and performance to ensure that it’s running as expected. Once the virtual machine testing is complete, migrate TestMigration from the MobileP2V solution to the production Virtual Server host. Once you do that, you can back up and delete the source system disk images from the ADS image store.

Once the virtual server is ready, the SQL Server data does not need to be migrated; you only need to change the LUN mapping and attach the LUNs as physical RDMs to the virtual machine.
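Once the re-mapped LUNs are visible inside the virtual machine with their original drive letters, any database that does not come online automatically can simply be re-attached. A minimal, hypothetical sketch (the database name and file paths are placeholders):

-- Re-attach a database whose files now sit on the re-mapped LUN
CREATE DATABASE SalesDB
ON (FILENAME = 'E:\SQLData\SalesDB.mdf'),
   (FILENAME = 'F:\SQLLogs\SalesDB_log.ldf')
FOR ATTACH;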

Reference Link: http://blog.pluralsight.com/vmware-p2v-migration