SQL Server Database Migration to Azure SQL Database Without Downtime



 

We need to ensure that the source database is compatible with Azure SQL Database; the Data Migration Assistant (DMA) can check this for us. SQL Database V12 is approaching feature parity with SQL Server, apart from issues related to server-level and cross-database operations. Databases and applications that rely on partially supported or unsupported features need some re-engineering to fix these incompatibilities before the SQL Server database can be migrated.

 

When you cannot afford to remove your SQL Server database from production while the migration is occurring, you can use SQL Server transactional replication as the migration solution. To use this method, the source database must meet the requirements for transactional replication and be compatible with Azure SQL Database.
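One of those requirements is that every table to be published must have a primary key. As a quick pre-check, a sketch along these lines lists the user tables that are missing one:

    -- Transactional replication requires a primary key on every published table.
    -- This query lists user tables that do not have one yet.
    SELECT SCHEMA_NAME(t.schema_id) AS schema_name,
           t.name                   AS table_name
    FROM sys.tables AS t
    WHERE NOT EXISTS (SELECT 1
                      FROM sys.key_constraints AS kc
                      WHERE kc.parent_object_id = t.object_id
                        AND kc.type = 'PK');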

To use this solution, configure Azure SQL Database as a subscriber to the SQL Server instance that you wish to migrate. The transactional replication distributor synchronizes data from the database to be synchronized (the publisher) while new transactions continue to occur.

With transactional replication, all changes to data or schema show up in Azure SQL Database. Once the synchronization is complete and you are ready to migrate, change the connection strings of your applications to point them to the Azure SQL Database. Once transactional replication drains any changes left on the source database and all of your applications point to the Azure database, you can uninstall transactional replication. Your Azure SQL Database is now your production system.
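For reference, a minimal publication setup on the source might look like the sketch below. The database name (AppDb), publication name, Azure server name, and destination database are placeholders, and distributor configuration, article selection (sp_addarticle), and Snapshot Agent setup are omitted:

    -- Run on the SOURCE (publisher) after the distributor has been configured.
    USE AppDb;
    GO
    -- Enable the database for transactional replication.
    EXEC sp_replicationdboption
         @dbname  = N'AppDb',
         @optname = N'publish',
         @value   = N'true';
    GO
    -- Create a continuously replicating transactional publication.
    EXEC sp_addpublication
         @publication = N'AppDb_to_Azure',
         @repl_freq   = N'continuous',
         @status      = N'active';
    GO
    -- Add a push subscription that points at the Azure SQL Database.
    EXEC sp_addsubscription
         @publication       = N'AppDb_to_Azure',
         @subscriber        = N'myserver.database.windows.net',
         @destination_db    = N'AppDbAzure',
         @subscription_type = N'Push',
         @sync_type         = N'automatic';
    GO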

Categories: Migration

SQL Server Database Migration from DB2, MySQL, Oracle, Sybase ASE and Access



The Microsoft SQL Server Migration Assistant, which is often abbreviated to just SSMA, comes in different versions that support the migration from several of the most common database formats.

The one we’re most interested in is SSMA for Access, but there are also separate versions that migrate from MySQL, Oracle, and Sybase databases, among others. SSMA is a free tool that greatly simplifies the database migration process when moving from Access to SQL Server in four key ways. First, SSMA for Access automates the conversion of Microsoft Access database objects to their SQL Server counterparts. Then, it gives you fine control over the data type mappings.

It then loads the objects into SQL Server, followed by a migration of your data from Microsoft Access to its new home in SQL Server. Finally, the Migration Assistant will even update your Access database with links to the new data tables, so that downtime is kept to a minimum during the transition. SSMA for Access is currently at version 6.0 and is designed to support migration from Microsoft Access 97 and higher to all editions of SQL Server 2005, 2008, 2012, and 2014, as well as Microsoft’s cloud-hosted SQL Azure databases.

Migration Sources

Once you decide to migrate to SQL Server, the SmartDog SQL service is just what you need. We help clients build a road map for the migration.

Migration should be conducted in three phases:

  • Pre-migration inventory / checks
  • Database migration
  • Post-migration configuration / checks

 

Pre-migration phase: review and document the following tasks on the SOURCE server (a sample inventory query follows this checklist).

  • Database sizes
  • Data and Log file location
  • Server and Database properties (Collation, Auto Stats, DB Owner, Recovery Model, Compatibility level, etc.)
  • Collect information about dependent applications and make sure the application services will be stopped during the database migration
  • Database logins, users and their permissions
  • Dependent objects (SQL Agent Jobs and Linked Servers)
  • Document maintenance plans
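As a starting point for this inventory, a sketch along these lines captures database sizes, file locations, and the key properties in a single pass:

    -- Pre-migration inventory sketch: run on the SOURCE instance.
    SELECT d.name                      AS database_name,
           d.collation_name,
           d.recovery_model_desc,
           d.compatibility_level,
           SUSER_SNAME(d.owner_sid)    AS db_owner,
           mf.type_desc                AS file_type,
           mf.physical_name            AS file_location,
           mf.size * 8 / 1024          AS size_mb   -- size is in 8 KB pages
    FROM sys.databases AS d
    JOIN sys.master_files AS mf
        ON mf.database_id = d.database_id
    ORDER BY d.name, mf.type_desc;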

On the TARGET server, conduct the following tasks:

  • Verify adequate disk space exists
  • Appropriate folders have been created
  • SQL Server is correctly installed and configured with latest Service Pack and Cumulative Update
  • Set SQL Server properties: memory, tempdb size, autogrowth, and drive locations
  • Appropriate connectivity exists with application servers, web servers, etc.
  • Create databases consistent with planned database migrations

Database migration phase

  • From SOURCE server, get recent list of DB and file location list
  • On the SOURCE SQL Server, script the user LOGIN information (for example, with Microsoft’s sp_help_revlogin script) and save the output for a subsequent step
  • Use native SQL Server backups to back up the databases from SOURCE, with backup compression to reduce the size of the backups (optionally, detach the database, copy the database files, then reattach); see the sketch after this list
  • On SOURCE server, place databases in Read Only mode
  • Restore databases to TARGET
  • On TARGET SQL Server, apply the script to migrate user/group security (created in the login-scripting step above)
  • Update any custom SSRS DSNs to point to TARGET server.
  • On the SOURCE server, script any required SQL Server Agent jobs (select the job, right-click, Script Job as > CREATE To > File)
  • On TARGET, transfer and execute the job-creation script produced above; create schedules as needed
  • Watch the data types closely if the database is being migrated from a different source such as Oracle or Sybase
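A sketch of the read-only/backup/restore sequence is shown below; the database name and all paths are placeholders, the database is frozen here before the final backup so no changes are lost, and the logical file names passed to MOVE must match the output of RESTORE FILELISTONLY for your database:

    -- On SOURCE: stop changes, then take a compressed, verified backup.
    ALTER DATABASE AppDb SET READ_ONLY WITH ROLLBACK IMMEDIATE;

    BACKUP DATABASE AppDb
    TO DISK = N'D:\Backup\AppDb_migration.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

    -- On TARGET: restore, relocating files to the planned drive layout.
    RESTORE DATABASE AppDb
    FROM DISK = N'E:\Staging\AppDb_migration.bak'
    WITH MOVE N'AppDb'     TO N'F:\Data\AppDb.mdf',
         MOVE N'AppDb_log' TO N'G:\Log\AppDb_log.ldf',
         RECOVERY, STATS = 10;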

Post-migration phase

  • Point the application to the new DB server address, or to the Availability Group LISTENER for AlwaysOn Availability Groups (connection strings, etc., to be altered by the application support team)
  • Restart / review network connections between all participating servers (Network Team)
  • Check the SQL Server error log and Windows event logs for any failures (a quick query sketch follows this list)
  • Confirm application functionality with end users
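For the error-log check, the undocumented but widely used xp_readerrorlog procedure can search the current log directly; its arguments are the log number (0 = current), the log type (1 = SQL Server), and up to two search strings:

    -- Scan the current SQL Server error log for failures after the cutover.
    EXEC master.dbo.xp_readerrorlog 0, 1, N'error', NULL;
    EXEC master.dbo.xp_readerrorlog 0, 1, N'failed', NULL;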

 

Categories: Migration

100% uptime with SQL Azure DR Plan


With the allure of cost savings, greater flexibility, and more agility, many organizations are eyeing the cloud as an alternative for deploying new applications, including those with high database performance requirements. In fact, technology research firm TechNavio predicts a 62 percent annual growth rate for cloud-based databases through 2018. In this post we walk through setting up a SQL Server cluster on the Azure platform.

HADR deployment architectures

 

SQL Server HADR technologies that are supported in Azure include:

  • Always On Availability Groups
  • Always On Failover Cluster Instances
  • Log Shipping
  • SQL Server Backup and Restore with Azure Blob Storage Service
  • Database Mirroring – deprecated in SQL Server 2016

 

SQL Server Cluster in Azure

Because it provides the most options and flexibility for implementing a high availability SQL Server solution, we will discuss how to build a Windows Server Failover Cluster (WSFC) for SQL Server using Azure virtual machines. As stated, one of the requirements for a WSFC is access to shared storage, which is not an option in Azure; however, by integrating a solution like SIOS DataKeeper Cluster Edition we can build a SQL Server cluster using Azure virtual machines. SIOS DataKeeper Cluster Edition is a Microsoft Azure Certified product, which means this is a fully supported solution.

  • Provision the virtual machines
  • Add the Failover Clustering feature to the virtual machines
  • Create a file share witness
  • Install DataKeeper and setup the storage

SIOS DataKeeper allows us to get around the need for shared storage by creating disk volumes that are replicated between the nodes in the cluster. The data in these volumes is synchronized using real-time, block-level replication. The data therefore exists on all nodes in the cluster, which also eliminates the risk of losing data through a single shared storage source.

Once the volume or volumes have been created on the cluster nodes using SIOS DataKeeper, this storage becomes available within the Failover Cluster Manager as available storage.

After the storage has been configured, the following steps remain:

  • Install SQL Server on the first cluster node using the “New SQL Server failover cluster installation option”. When you go through the setup of SQL Server you will see the available storage that has been configured using SIOS DataKeeper.
  • Install SQL Server on the second cluster node using the “Add node to a SQL Server failover cluster” option.
  • Configure the Azure Internal Load Balancer so that clients can connect to the active node.

Categories: Azure SQL Server

Help to achieve SOX compliance auditing to organization


Achieving SOX compliance is mandatory for all publicly traded companies. But for most IT teams, SOX compliance can be quite vague and confusing. SOX was not written with a technology mandate in mind, but rather as a mandate for accounting, legal, and financial reporting; in the SOX Act no reference can be found to anything specifically related to IT. It is often said that SOX was “written by lawyers, for lawyers”.

To address SOX compliance from an IT perspective, two frameworks have been established: the COSO (Committee of Sponsoring Organizations of the Treadway Commission) framework, which makes some general references to IT controls, and the COBIT (Control Objectives for Information and Related Technology) framework, which sets out specific IT control objectives and is closely aligned with COSO.

A SOX Primer

Although the legislation is long and covers a wide range of financial matters, the sections below are the ones that have been interpreted as pertinent to IT departments:

  • SOX Section 302 – Corporate Responsibility for Financial Reports: The CEO and CFO must review and approve all financial reports. They are responsible for any misrepresentations and for internal accounting controls, must report any fraud involving management to the audit committee, and must indicate any material changes in internal accounting controls.
  • SOX Section 404 – Management Assessment of Internal Controls: All annual financial reports must include an Internal Control Report stating that management is responsible for an “adequate” internal control structure, along with management’s assessment of the effectiveness of that control structure. Any shortcomings in these controls must also be reported. In addition, registered external auditors must attest to the accuracy of management’s assertion that internal accounting controls are in place, operational, and effective.
  • SOX Section 409 – Real-Time Issuer Disclosures: Companies are required to disclose, on an almost real-time basis, information concerning material changes in their financial condition or operations.
  • SOX Section 902 – Attempts and Conspiracies to Commit Fraudulent Offenses: It is a crime for any person to corruptly alter, destroy, mutilate, or conceal any document with the intent to impair its integrity or availability for use in an official proceeding.

 

SOX Preparation Checklist for DBAs

  • Data integrity ownership and responsibilities communicated to, and accepted by, the appropriate business owners
  • Key database systems inventoried and owners identified
  • Database Management staff understands and accepts their responsibility regarding internal controls
  • Division of roles and responsibilities: a segregation of duties between logical DBAs (SQL developers) and physical DBAs that prevents a single DBA from making unauthorized alterations
  • Review documented database management processes
  • Review documented database management risks
  • Documented database management process controls
  • Testing of database management control methods
  • Gap identification and controls improvement process
  • Update database management processes and document controls

 

Data Auditing Checklist

 

  • Pervasive – Monitor and record critical data activity across the full range of databases, applications and systems.
  • Transparent – Non-intrusive and invisible to users, especially privileged users. In addition, transparent from a performance perspective such that the databases, overall system, users or network are not negatively impacted.
  • Intelligent – Ability to filter and collect only specified target activities as required to achieve compliance and discard the unneeded items. This enables an organization to efficiently manage compliance and data, reducing both storage costs and liability.
  • Scalable – Scale easily and cost-effectively to keep pace with changes in the enterprise IT environment.
  • Flexible – Allow an organization to easily tailor data auditing to its specific needs. Flexible, policy-based rules will enable easy customization. Create and modify policies to meet the data auditing needs of other regulations, handling multiple compliance challenges with a single solution.
  • Real-time – Isolate and identify unusual activity in real time to help detect, alert and stop non-compliant data activity rapidly to mitigate risk.
  • Historical – Document a comprehensive, easily searchable audit trail for monitored data activity, and provide rich reporting capabilities in alignment with an organization’s own corporate business processes.

Common Auditing

  • Monitoring database access by privileged users (a SQL Server Audit sketch follows this list)
  • Monitoring changes in privileges
  • Monitoring access failures
  • Monitoring schema changes
  • Monitoring direct data access
  • Documentation review and verification
  • Review of audited data from monitoring system
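Several of these items map directly onto SQL Server’s built-in audit feature. A minimal sketch, with placeholder names and file path, that captures failed logins, permission changes, role membership changes, and schema changes at the server level:

    -- Create the audit destination; adjust the path and size to your environment.
    USE master;
    CREATE SERVER AUDIT SoxAudit
    TO FILE (FILEPATH = N'D:\Audit\', MAXSIZE = 256 MB)
    WITH (ON_FAILURE = CONTINUE);

    -- Pick the server-level action groups that match the checklist above.
    CREATE SERVER AUDIT SPECIFICATION SoxServerSpec
    FOR SERVER AUDIT SoxAudit
        ADD (FAILED_LOGIN_GROUP),
        ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP),
        ADD (DATABASE_PERMISSION_CHANGE_GROUP),
        ADD (SCHEMA_OBJECT_CHANGE_GROUP)
    WITH (STATE = ON);

    -- Turn the audit on.
    ALTER SERVER AUDIT SoxAudit WITH (STATE = ON);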

 

Change Management

  • Don’t let anyone authorize and/or implement changes they initiate.
  • Allow the DBA the right to refuse any implementation that does not have proper documented authorization.
  • Do not make record-level changes to application data unless they are fully authorized, justified and documented. These kinds of changes often help users out of a jam, but they go completely around the application controls. Denying users this ability might lead to better business processes in the end, and get you off the hook for problems that might crop up from such changes.
  • Document the day-to-day configuration changes you make.

Backup/Recovery

  • Have processes written for both. Test them regularly. Document the testing.
  • Review backup logs, and document the reviews to show that you’re looking at them. You don’t want to be caught saying “I check them every day, and they’re fine.” If the auditor finds an unreported error, their trust in what you say will take a hit. (A sample backup-review query follows this list.)
  • Review backup schedules, and make sure they represent current agreements with your business units.
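To make the log review concrete, a sketch like the following against the msdb backup history shows the most recent full backup per database, so gaps stand out immediately:

    -- Most recent full backup per database, from the msdb backup history.
    SELECT d.name                     AS database_name,
           MAX(bs.backup_finish_date) AS last_full_backup
    FROM sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS bs
           ON bs.database_name = d.name
          AND bs.type = 'D'           -- 'D' = full database backup
    GROUP BY d.name
    ORDER BY last_full_backup;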
Categories: Audit

Common DBA project Challenge along with resolution


Problem

To me, the biggest blunder is knowing that a problem exists and either ignoring it or procrastinating on implementing the resolution. This tip focuses on common DBA project challenges that could have been prevented.

Solution

The reality is that nothing is perfect and as technical professionals we need to build a realistic solution with the time and budget available, then communicate any potential issues to the business so they are aware of them.

To deliver better application performance, DBAs should consider the following tips:

  • Be proactive and align behind end-user experience as a shared objective across the entire IT organization by looking at application performance and the impact that the database has on it continuously, not only when it becomes a major problem.
  • Measure performance based not on an infrastructure-resources perspective, but on end-user wait times. Wait-time analysis gives DBAs a view into what end-users are waiting for and what the database is waiting for, providing clear visibility into bottlenecks (a sample wait-stats query follows this list).
  • Implement monitoring tools that provide visibility across the entire application stack, including all the infrastructure that supports the database – virtualization layers, database servers, hosts, storage systems, networks, etc.
  • Establish historic baselines of application and database performance that look at how applications performed at the same time on the same day last week, and the week before that, to detect any anomalies before they become larger problems.
  • Have a common set of goals, metrics and SLAs across all databases, ideally based on application response times, not only uptime.
  • Use tools that provide a single dashboard of performance and the ability to drill down across database technologies and deployment methods, including cloud.
  • Document a consistent set of processes for ensuring integrity and security: backup and restore processes, encryption at rest and on transit, detection of anomalies and potential security events in logs, to name a few.
  • Establish a strategy, roadmap, and guidelines for moving to the cloud (or not) and for reducing workload costs by moving databases to lower-license-cost versions or open-source alternatives.
  • Make sure team members can escape firefighting mode and spend enough time proactively optimizing performance of the databases and taking care of important maintenance tasks, which can result in significant cost savings and prevent problems in the future.
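For the wait-time analysis mentioned above, a minimal sketch against sys.dm_os_wait_stats surfaces the top waits since the last instance restart; the excluded wait types below are a few common benign ones, and the list should be extended for real use:

    -- Top waits accumulated since the last instance restart.
    SELECT TOP (10)
           wait_type,
           wait_time_ms / 1000.0 AS wait_time_s,
           waiting_tasks_count
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                            N'BROKER_TASK_STOP',
                            N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
    ORDER BY wait_time_ms DESC;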

Database project Operations issue:

  • All senior team members on vacation – When you have a major deployment make sure to have your key staff members on site and available to meet the project needs. Do not fool yourself that a junior team member will be able to just push a button and deploy a solution, especially when a minimal amount of testing is conducted. When a problem arises it is the senior team member’s knowledge and expertise that is needed to quickly resolve issues. If all of those people are out on the beach and will be back next week, it makes sense to wait a week for the deployment to have your team onsite and available to address any unexpected issues.
  • Putting all of your eggs in one basket – When you work through an enterprise upgrade, whether it is an application or hardware firmware, do not upgrade all of the systems (including the DR site) at once. Take a step back and be sure to keep some systems out of sync for a short period of time, so you can fall back to a known stable platform in case an unexpected issue arises.
  • Not validating backups on a daily basis – If a serious issue occurs, make sure you have a solid last line of defense. That is a consistent and reliable set of backups on a daily basis. In addition, make sure your backup plan includes retiring tapes on a weekly, monthly or quarterly basis to be able to rollback to some point in time rather than going out of business. Also check-in with the business to ensure backups are not needed for legal or regulatory needs.
  • Not changing passwords – As an administrator you have the keys to the kingdom and need to recognize the responsibility that you have. As such, make sure your passwords are complex, change them frequently and do not share your passwords.
  • Password expiration – This is almost the opposite of the previous bullet. Starting with SQL Server 2005, password policies can be set up for standard logins so that passwords expire and accounts get locked out. When this happens, your applications will not be accessible if one of these accounts is in use. As such, setting password expiration is a good idea; just be sure to change the password and coordinate the change with your team (see the sketch after this list).
  • Letting the primary filegroup fill up – With the rate of data growth, be sure to either cap your database size and monitor it on a daily, weekly or monthly basis, or permit your databases to grow automatically. In either case, watch your disk space so that you do not fill up the disk and end up with two problems (a full filegroup and a full disk drive).
  • Hot data centers – High temperatures mean failure for servers. The failure could be a controller card or a disk drive, but one sustained spike in the room temperature could cause a critical problem that is not fully realized for three to six months. Make sure the temperature is properly regulated, have a backup conditioning system, and ensure monitoring can notify your team before an issue arises.
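The password-policy behavior described above can be reproduced with a login like the following sketch; the login name and passwords are placeholders:

    -- Standard login with Windows password policy and expiration enforced.
    CREATE LOGIN app_login
    WITH PASSWORD = N'S3cure!Placeholder',
         CHECK_POLICY = ON,       -- enforce complexity and lockout policy
         CHECK_EXPIRATION = ON;   -- password expires; coordinate rotations

    -- If the password has expired or the account is locked out,
    -- reset the password and unlock without weakening the policy.
    ALTER LOGIN app_login
    WITH PASSWORD = N'N3w!Placeholder' UNLOCK;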

Agile Implementation Strategy

March 30, 2017

Categories: Agile Strategy

Hosting DBA service with SQL Server 2014

March 19, 2017

The database industry has seen significant change over the last 5 years, from the introduction of new technologies like clustering, big data, and in-memory data platforms to the recent variants of popular open source SQL alternatives like Percona Server and MariaDB making their way into popular technology stacks. Even with these new technologies becoming increasingly common, many enterprises still trust their mission critical database operation to Microsoft SQL Server. Recently, Microsoft released its latest iteration of the technology, Microsoft SQL Server 2014, with notable advancements in performance including in-memory functionality and added security with high availability.

 

In-Memory OLTP

As one of the first to market with support for this new functionality, SmartERP is focused on increasing database performance by allowing some data to be queried and processed in RAM instead of on constrained disk resources. Previous technology forced users to load ALL information into RAM, making large data sets problematic. Now, database administrators can choose which tables to process in-memory, giving those operations priority access to valuable processing resources. By speeding up overall query performance across a database environment, users can effectively do more with fewer resources, which reduces the need to scale into expensive hardware solutions. SmartERP teams are already helping users plan their migration to SQL Server 2014 by evaluating target data sets to load into memory.
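A minimal In-Memory OLTP setup sketch, using SQL Server 2014 syntax; the database, filegroup, file path, and table names are placeholders:

    -- A memory-optimized filegroup must be added to the database first.
    ALTER DATABASE AppDb
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

    ALTER DATABASE AppDb
    ADD FILE (NAME = N'imoltp_data', FILENAME = N'D:\Data\imoltp_data')
    TO FILEGROUP imoltp_fg;

    -- Only selected hot tables are moved into memory; the rest stay on disk.
    CREATE TABLE dbo.SessionState (
        session_id INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        payload    VARBINARY(4000),
        updated_at DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);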

High Availability

While replication and failover have long been part of any core strategy for business continuity, achieving true high availability for a database environment has been problematic. Organizations often must decide what level of inconsistency between database environments is acceptable – hours or days? However, with the AlwaysOn functionality and strong connections between multi-cloud datacenters, including those belonging to SmartERP and Microsoft Azure, users can now implement nearly 100 percent resilient database environments that protect against both database operational failure and datacenter outages. Even though SmartERP has an industry-leading record in uptime and a 100 percent uptime SLA, we want our users to meet their unique availability requirements, which may include a highly available database environment. We can help you meet those requirements with a team of DBAs that have successfully implemented high availability for many customers. Check out this report on our support of AlwaysOn for more information.
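Once an availability group is in place, a quick health-check sketch against the AlwaysOn DMVs shows each replica’s role and synchronization health:

    -- Role and synchronization health for each availability group replica.
    SELECT ag.name AS ag_name,
           rs.role_desc,
           rs.synchronization_health_desc
    FROM sys.availability_groups AS ag
    JOIN sys.dm_hadr_availability_replica_states AS rs
        ON rs.group_id = ag.group_id;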

DBA Services

Recently, we shared with you the economic benefits of our DBA services offering, including a full time DBA available 24×7 at a fraction of the cost of an in-house resource. We have seen growing interest in our DBA coverage and released a tiered offering to accommodate a variety of supplemental DBA needs. We have also witnessed some organizations relying solely on our DBA services to ensure database functionality, while others have formed close partnerships with their core database team and our DBA service. This allows our customers to focus on important development and optimization efforts without the burden of maintaining and monitoring the health of the existing database. Moving between any database iteration requires some thought and planning. Our DBA team is helping customers understand the scope of work and code change required to not only migrate, but also utilize new functionality. Our partnership with Microsoft ensures that our DBA Services team is ready to address even the more challenging scenarios when switching.

Flexible Licensing

At SmartERP, customers can consume fully managed SQL application licensing along with support at a low monthly cost. This allows users to move in-between SQL versions and hardware profiles without needing to address their software license agreement. We want to make sure you always have the best combination of optimized hardware and SQL versioning to meet your objectives.

Hybrid Potential

Lastly, while many web applications are moving to the cloud, many database users still require the performance consistency and controls of a dedicated server. Often, users partnering with a single cloud or managed hosting provider will be forced to make sacrifices by either not moving applications and web assets to the cloud because of database restrictions or sacrificing database performance and reliability by trying to operate it in a utility cloud service. With a Managed Private Cloud powered by Microsoft Cloud Platform, and the built in Database-as-a-Service capability, customers can take advantage of a fully managed, physically isolated, private cloud environment that allows customers to programmatically spin up SQL Server 2014 databases on demand. Customers can also strengthen their high availability and disaster recovery capabilities by choosing from a number of available destinations, including Microsoft Azure, as a designated disaster recovery location.
SmartERP can bridge public and private clouds running SQL Server over a strong network connection. It allows users to leverage both cloud and dedicated server resources in a single environment. Dominoes UK spoke about their use of hybrid to serve web content via the cloud while still having a reliable dedicated database in dedicated hosting. You can read more about that here.

SmartERP DBA services professionals are ready to help you wrap your brain around the cool new features in Microsoft SQL Server 2014. If you want to know more, please download our whitepaper and visit go.SmartERP.com/SQL2014.