SQLMCM Training – Day 18

(Be sure to check out the FREE SQLpassion Performance Tuning Training Plan – you get a weekly email packed with all the essential knowledge you need to know about performance tuning on SQL Server.)

Today we covered Full-Text Search, Change Data Capture, Change Tracking, and Service Broker. I took no notes on Service Broker, because I already documented that topic in detail a few years ago here.

Full Text Search

  • A Fulltext Index is updated automatically in the background (by default)
  • Fulltext Catalog is a container for Fulltext Indexes
  • sys.dm_fts_memory_buffers
  • sys.dm_fts_memory_pools
  • sys.dm_fts_parser
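
The word breaker can be inspected directly with sys.dm_fts_parser – a minimal sketch (LCID 1033 for English and stoplist_id 0 for no stoplist are my assumptions):

```sql
-- Hedged sketch: see how the fulltext word breaker tokenizes a string
-- (1033 = English LCID, 0 = no stoplist assumed, 0 = accent-insensitive)
SELECT keyword, display_term, special_term
FROM sys.dm_fts_parser(N'"SQL Server performance tuning"', 1033, 0, 0);
```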

CDC/Change Tracking

  • CDC can use the Transactional Replication Log Reader Job
  • Sparse Columns are not supported when used in a Column Set
  • Computed Columns are not supported
  • Individual XML elements are not tracked
  • 2 Capture instances are supported per table
  • Use the KEEP_CDC option when you restore/attach a CDC-enabled database backup on an Enterprise Edition instance
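
A minimal sketch of enabling CDC (database, schema, and table names are hypothetical):

```sql
-- Hedged sketch: enable CDC at the database level, then per table
USE SalesDB;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;   -- no gating role for querying the change data
```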

Tomorrow is the last day, and it’s all about PowerShell. But I won’t do anything training-related tomorrow, because I’m attempting the SQLMCM Knowledge Exam in the afternoon and want a long sleep and a fresh mind. I’m not expecting to pass the Knowledge Exam on the first try, but afterwards I’ll at least know how hard it really is, which topics I must study more, and to what level.

With this input I can also recalibrate my studying, because after 3 weeks of SQLskills trainings it is really hard to find the right level – you can talk with Paul about the actual source-code implementation of some components inside SQL Server. But is that really the level to which you have to know it? Tomorrow afternoon I’ll know a little bit more about it…

Thanks for reading

-Klaus

SQLMCM Training – Day 17

Today Bob spoke about Auditing, SQLCLR, XML, and Spatial Data. Because I already know these concepts very well, my study notes for today are very sparse.

Auditing

  • SQL Trace has too many drawbacks compared to Auditing
    • Separate program
  • Column Level GRANT overrides a DENY
  • Auditing audits changes to Auditing
  • Auditing Targets can’t be shared
  • Auditing uses Extended Events in the background
  • You always need to do a CREATE SERVER AUDIT
    • Defines the target
    • Defines what to do when the audit is not available
    • QUEUE_DELAY = 0
      • Synchronous auditing without data loss
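
A minimal sketch of such an audit definition (the audit name and file path are hypothetical):

```sql
-- Hedged sketch: server audit with a file target
CREATE SERVER AUDIT ComplianceAudit
TO FILE (FILEPATH = N'C:\AuditLogs\')
WITH (QUEUE_DELAY = 0,          -- synchronous auditing without data loss
      ON_FAILURE = SHUTDOWN);   -- what to do when the audit target is not available
GO
ALTER SERVER AUDIT ComplianceAudit WITH (STATE = ON);
```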

SQLCLR

  • SQLCLR is always loaded, because it is used internally
    • Spatial, HierarchyID data types
  • RETURNS NULL ON NULL INPUT is only available for functions
    • Specified as WITH RETURNS NULL ON NULL INPUT
    • Useful if you can’t change the .NET code to use nullable types
  • SQLCLR uses the Multi Page Allocator
    • Uses memory outside the Buffer Pool
    • “MemToLeave” area, can be controlled through the “-g” startup parameter on x32
  • SQLCLR is not controlled by Max Server Memory
    • So you need to set Max Server Memory low enough that SQLCLR also has some memory available
  • You get one AppDomain per assembly owner per database
  • SQL Server Log shows which AppDomains are loaded
    • DDL AppDomain
      • For Assembly Verification during startup
    • Runtime AppDomain
      • For executing SQLCLR code
    • sys.dm_clr_appdomains
  • SQL Server calls the .NET GC, when memory pressure occurs
  • sys.dm_clr_tasks
    • forced_yield_count
  • SQL Server creates invisible assemblies when one assembly references another assembly
    • The dependent assembly is invisible
    • You can’t register .NET code in an invisible assembly
    • You have to make that assembly visible by ALTER ASSEMBLY
    • Dependent assemblies are automatically dropped when the “root” assembly is dropped
    • sys.assembly_references
  • sys.dm_exec_query_stats
    • clr_time
  • When you execute SQL code from SQLCLR, it’s dynamic T-SQL code
    • This breaks Ownership Chaining
    • Use EXECUTE AS OWNER instead
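
A minimal sketch of that workaround (procedure and table names are hypothetical):

```sql
-- Hedged sketch: dynamic T-SQL breaks ownership chaining, so run the
-- procedure in the owner's security context instead
CREATE PROCEDURE dbo.GetOrders
WITH EXECUTE AS OWNER
AS
BEGIN
    EXEC sp_executesql N'SELECT OrderId, OrderDate FROM dbo.Orders';
END;
```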

Thanks for reading

-Klaus

SQLMCM Training – Day 16

Today Bob spoke the whole day about securing SQL Server, security itself, and encrypting data. It was really hard content, and sometimes hard to follow because of too many PowerPoint slides. I already miss Paul’s whiteboard drawings… ;-) Here are my study notes.

SQL Server Setup

  • SQL Server VSS Writer
    • Needed for 3rd party backup products
  • When you change the SQL Server service account password, you don’t have to restart SQL Server
  • When you change the service account through SQL Server Configuration Manager, the Configuration Manager will put the new user in the appropriate group
    • Permissions are granted on the group level, never on the user level
    • Service Master Key is also re-encrypted through the Configuration Manager
    • Registry permissions are also granted on the group level, instead of the account level
  • NETWORK SERVICE exists only once on a computer
    • SQL Server’s security context would be shared with other applications that also run under NETWORK SERVICE
  • http://sqlskills.com/BLOGS/BOBB/post/About-SQL-Servers-usage-of-Service-SIDs.aspx

  • Permissions are assigned to the Service SID of SQL Server
  • Service SIDs can’t be used for Clustering
  • Clustering needs Domain User Accounts
  • http://download.microsoft.com/download/1/2/A/12ABE102-4427-4335-B989-5DA579A4D29D/SQL_Server_2008_R2_Security_Best_Practice_Whitepaper.docx
  • SQL Browser Service
  • When SQL Server Agent Job is owned by sa
    • T-SQL steps are running as sysadmin
    • Non T-SQL steps are running as agent Service Account
      • You need the appropriate permissions on the Agent Service Account, which is bad when you need a lot of different permissions
  • When SQL Server Agent Job is NOT owned by sa
    • T-SQL steps
      • Agent logs into SQL Server
      • Uses EXECUTE AS WITH OWNER
    • Non SQL steps
      • Runs as SQL Agent Proxy
      • Create Credentials
        • The Credential gets the permissions from the configured identity
        • You have to change the credential when the password of the identity changes!
        • Credential needs the permission “Log on as a batch job” through secpol.msc
      • Create Proxy over Credential
        • Proxy is the security context for the SQL Server Agent Job Step
        • Proxy accesses the permissions from the configured identity, that is attached to the proxy account
  • Kerberos needs a SPN (Service Principal Name) in the Active Directory
    • Format: MSSQLSvc/server1:1433
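
The credential/proxy steps can be sketched like this (all names and the identity are hypothetical):

```sql
-- Hedged sketch: credential + Agent proxy for a non-T-SQL job step
CREATE CREDENTIAL AgentCredential
    WITH IDENTITY = N'DOMAIN\BatchUser',
         SECRET = N'<password>';   -- must be changed when the identity's password changes!

EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'CmdExecProxy',
    @credential_name = N'AgentCredential';

EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'CmdExecProxy',
    @subsystem_name = N'CmdExec';
```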

Authentication/Authorization

  • Guest account can’t be deleted, only deactivated
  • sys.tables/principal_id shows the owner of the table
  • sp_adduser should not be used any more
    • Use CREATE USER instead
    • sp_adduser creates a schema which is owned by the newly created user
  • Application Roles
  • Mismatched SIDs
    • sp_change_user_login
  • TRUSTWORTHY property
    • Does the sysadmin trust the DBO?
  • fn_my_permissions()
  • sys.fn_builtin_permissions()
  • Windows Groups can’t have a default schema
    • A user can be in more than one group – which group’s default schema should be associated with the user?
    • The last added group (the login with the highest login_id) is used for default language/default database
  • SQL Server executes stored procedures as CALLER
  • Dynamic T-SQL breaks the Ownership-Chain!
  • When the owner of the stored procedure is the same as the owner of the tables that are accessed, permissions are not checked
  • The Ownership Chain overrides a DENY
  • Database Master Key is always encrypted with TRIPLE_DES
    • The same with the Service Master Key
  • Service Master Keys are used to encrypt
    • Database Master Keys
    • Linked Server passwords
    • Credentials
  • Service Master Key is encrypted by
    • DPAPI and Service Account
      • Needed for Cluster Failovers (the Service Account is the same on each node)
    • DPAPI and Local Machine Key
  • Service Master Key is generated the first time SQL Server is started after installation
    • Therefore you can clone an installation
  • You can regenerate the Service Master Key with the startup option “-K”
  • Database Master Key can optionally be protected by the Service Master Key
    • You don’t need a password for opening the Database Master Key
  • When you back up a database (and therefore the Database Master Key) and restore it on another instance, you lose the encryption by the Service Master Key
    • Therefore you have to open the Database Master Key with the provided password, which is necessary in this case
    • After you have opened the Database Master Key by password, you can add the encryption by the Service Master Key
  • When you want to encrypt the Database Master Key by the Service Master Key, you have to supply the password that is used to encrypt the Database Master Key
  • TempDb is also encrypted when you enable Transparent Data Encryption for a user database
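
The Database Master Key handling after a restore on another instance can be sketched like this (the password is a placeholder):

```sql
-- Hedged sketch: after the restore the DMK is no longer protected by the
-- local Service Master Key, so open it by password and re-add the protection
OPEN MASTER KEY DECRYPTION BY PASSWORD = N'<DmkPassword>';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
CLOSE MASTER KEY;
```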

Thanks for reading

-Klaus

SQLMCM Training – Day 15

I’ve spent almost every minute of the last 3 days reviewing the material from the last 2 weeks (around 38 hours in total…). Today Bob Beauchemin joined us; he’ll be talking the whole week about SQL Server Development, Security, PowerShell, and some esoteric technologies like SQLCLR, Service Broker, and CDC/Change Tracking.

Don’t expect too many notes from this week, because I already know a lot of these things :-)

Module 1: Optimizing Procedural Code

  • Implicit Conversion leads to a Clustered Index Scan
    • Seen in the predicate of the Scan Operator in the Execution Plan
    • Unicode <> Non-Unicode conversion
    • Entity Framework v1 needs Unicode for that reason in the database
  • Constant Scan
    • One Column, One Row RowSet
  • Index over a computed column leads to a persisted computed column
  • Query Compilation
    • Phase 0
      • Initializes the Query Optimizer
      • Query cost <= 0.2 will take the plan for execution
    • Phase 1
    • Phase 2
      • Clock runs until a timeout occurs
        • See “Timeout” in Execution Plan
  • Hash Join is used when you have no indexes/keys on the table
  • When you have a key/index on at least one table, a nested loop join is used
  • Each query operator is a COM object
  • Bitmap IN ROW optimization
  • Correlated sub query implies Nested Loop operator
  • Samples are used when Auto Create and Auto Update Statistics is used
    • Depends on the number of pages, around 20% – 30% are used
  • Service Broker queues have no statistics
    • The queues are changing too frequently, so it doesn’t make sense to maintain statistics on them
  • Density Vector reflects the number of distinct values (density = 1 / number of distinct values)
  • Filtered Indexes/Filtered Statistics are rebuilt based on ALL rows, not just the filtered rows
  • sys.dm_os_memory_cache_entries
    • original_cost
    • current_cost
  • DBCC FLUSHPROCINDB(db_id)
  • DBCC USEROPTIONS shows the SET and all the other session options
    • user_id = default schema id
  • Disable the SET ARITHABORT option in SSMS
    • Every client that connects to SQL Server has this option disabled by default
    • If you try to find out why a query is slow for an application server or another user, you will get a new execution plan within SSMS, because ARITHABORT is different
    • Therefore the query is recompiled, and you get an execution plan for the currently supplied parameters
    • Therefore you can’t reproduce a parameter sniffing problem within SSMS when ARITHABORT is enabled!
  • GROUP BY/HAVING clause
    • The query will never get parameterized
    • Not even when FORCED parameterization is enabled on the database
  • When you need FORCED parameterization on query level
    • Use Plan Guides
  • Halloween Protection
    • Eager Spool is a Stop-And-Go Operator
    • Lazy Spool is a Pass-Through Operator
    • E.g. When you’re updating a CI in a table
  • Table Valued Function
  • Inline Statement Table Valued Function
    • Gets a Parse Tree (cacheobjtype), View (objtype) in sys.dm_exec_cached_plans
    • Gets no record in sys.dm_exec_query_stats
    • You only get a record in sys.dm_exec_query_stats for the statement that calls the Inline Statement Table Valued Function
  • Multiple Statement Table Valued Function
    • Gets a Compiled Plan (cacheobjtype), Proc(objtype) in sys.dm_exec_cached_plans
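
The difference between the two function types can be sketched like this (schema and names are hypothetical):

```sql
-- Hedged sketch: inline vs. multi-statement table valued function
CREATE FUNCTION dbo.OrdersInline (@CustomerId INT)
RETURNS TABLE
AS
RETURN (SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = @CustomerId);
GO

CREATE FUNCTION dbo.OrdersMulti (@CustomerId INT)
RETURNS @Result TABLE (OrderId INT, OrderDate DATETIME)
AS
BEGIN
    INSERT INTO @Result
    SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = @CustomerId;
    RETURN;
END;
```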

Thanks for reading

-Klaus

Chance to win a free seat in my Performance Troubleshooting Workshop in London!

As you might know, I’m running my “Advanced SQL Server Performance Troubleshooting Workshop” from September 12 – 14 in Central London. You can find out further information about the detailed agenda here: http://csharp.at/Registration_UK.aspx

Next week you have the amazing possibility to win a free seat in that workshop! It’s very easy: Just register as the first person throughout this week (August 21 – August 26) for the workshop and get an additional seat for one of your colleagues for free!

Here are the exact eligibility requirements to get the free seat:

  • You have to be the FIRST person who registers for my Performance Troubleshooting Workshop in London, during the week from August 21 – August 26.
  • You have to use the discount code TWO4ONE during the registration.
  • You have to register the second person also through the discount code TWO4ONE.
  • Both registrations must have the same invoice address.
  • This offer is only available for persons/companies who haven’t yet registered an attendee for the workshop.
  • Everyone else who registers throughout the week from August 21 – August 26 with the discount code TWO4ONE, and has not won the free seat, receives a 5% discount off the original price of GBP 1,290.
  • The 5% discount does not apply to the first person who wins the additional seat for the workshop.
  • The registrations have to be done through the registration site available on http://csharp.at/Registration_UK.aspx

So be quick and register for my upcoming workshop in London – if you are the first one, you will get an additional seat for free!

See you soon in London :-)

-Klaus

SQLMCM Training – Day 11

Today Jonathan presented on Replication and Paul on Database Snapshots and his baby called “CHECKDB” – he was married to CHECKDB for almost 5 years at Microsoft.

Module 8: Replication

  • Distributes data to different locations
  • Offloading reporting to another database
  • Transactional Replication
    • Used in server-to-server scenarios that require high throughput
    • Different Agents
      • Snapshot Agent
      • Log Reader Agent
        • Is also used by CDC (Change Data Capture)
        • See the log_reuse_wait_desc column in sys.databases
        • Only committed transactions are copied to the Distribution Database
      • Distribution Agent
        • Runs on the Distributor for Push Subscriptions
        • Runs on the Subscriber for Pull Subscriptions
        • Copies transactions from the Distribution Databases to the Subscriber Databases
    • Peer-to-Peer Replication maintains copies of data across multiple servers
      • If you enable it, you can’t go back!
      • P2P Replication must be set per publication
  • Merge Replication
    • Primarily for mobile and distributed server scenarios that can have data conflicts
    • Starts with a snapshot
    • Incremental changes for data and schema are tracked with triggers
    • Conflict Detection
    • POS (Point Of Sale) applications
    • Different Agents
      • Snapshot Agent
      • Merge Agent
        • Applies initial snapshots to subscribers
        • Merges incremental changes
        • Detects and resolves conflicts
          • Priority based – highest priority wins
          • SQLCLR/COM-based resolvers for complex business rules
    • Only the final changes to the data are applied
      • E.g. multiple Updates
    • Net changes tracked at publisher and subscriber are merged periodically with conflict detection
    • Needs planning for TempDb & VersionStore
      • Depends on the number of articles
  • Snapshot Replication
    • You get a snapshot of the data delivered to the subscriber
    • Can be used as a preparation for Transactional- and Merge Replication
      • For large databases a backup would be a better option
      • You can also use Bulk Copy for the initial data
    • Snapshot takes locks when the snapshot is generated
      • Shared locks on the table are acquired
      • RCSI is not supported
  • Distribution database is needed, because there can be several subscribers
    • When you make changes in the Distribution Database, you are not supported by Microsoft
      • Like Indexing
    • Large Distribution Databases should be moved to its own server
  • SQL Server uses Table Scans inside the Distribution database, which can perform badly when the distribution database gets larger and larger
    • You can set the retention period
  • Updatable Subscriptions
    • Allows subscribers to replicate changes back to the publisher
  • Database Mirroring & Replication
    • Publisher: Full Support
    • Subscriber: Limited Support
    • Distributor: No Support
      • Failover Clustering is the only High Availability option
  • Failover Clustering & Replication
    • Supported on each Replication role (Publisher, Subscriber, Distributor)
  • Replication Monitor
    • Provides health about the Replication topology
  • Tracer Token
    • Allows you to measure latency
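
Whether the Log Reader is holding up log truncation can be checked like this:

```sql
-- REPLICATION in log_reuse_wait_desc means the Log Reader Agent (or CDC)
-- hasn't processed all log records yet
SELECT name, log_reuse_wait_desc
FROM sys.databases;
```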

Module 9: Database Snapshots

  • Consistent point of view of database
  • Can be created on a mirror to access database
  • Sparse file as a mapping- and data area
  • SQL Server pushes single 8KB pages into the snapshot, not extents
  • Once a page is pushed in the snapshot, it is never pushed again into the snapshot
  • Buffer Pool has an in-memory map which pages are in the snapshot and which are not
    • The first time when the snapshot is accessed the in-memory map is generated
  • When the database is grown, the new pages are never in the snapshot, because they never existed in the snapshot when the database was created
  • When the original page is pushed into the snapshot, the page is latched (the BUF_LATCH structure)
  • 3 different Snapshots of a database means 3 synchronous writes when a page gets updated
    • The original page must be pushed synchronously in each of the 3 snapshots
  • Snapshot on a Snapshot will not work
  • The snapshot stores its own copy of a page in the Buffer Pool, when the page is read
    • This can lead to memory pressure in the Buffer Pool
  • When there are open transactions during the creation of the snapshot
    • Crash Recovery of the source database is run in the context of the snapshot
    • E.g. when a transaction is active, a rollback of this transaction occurs in the context of the snapshot
  • When you create a snapshot on a mirror, the consistent point of time of the snapshot is the time when the last CHECKPOINT operation occurred on the mirror
  • The first step is a CHECKPOINT operation; you will not get the following in the snapshot:
    • All transactions that committed AFTER the CHECKPOINT operation – they are rolled back
    • All transactions that were running at the CHECKPOINT operation and were not committed at the time the CHECKPOINT operation occurred
  • FILESTREAM file group can’t be included in a snapshot, but it does not prevent creating the snapshot
    • But you can’t REVERT the snapshot, if there is a FILESTREAM file group
  • Snapshot goes suspect, when there is no more space for the snapshot
    • Source database is unaffected, but the snapshot can’t be used anymore
  • You can’t detach or restore the source database when there is a snapshot defined for that database
  • http://blogs.msdn.com/b/psssql/archive/2009/03/04/sparse-file-errors-1450-or-665-due-to-file-fragmentation-fixes-and-workarounds.aspx

  • sys.databases/source_database_id IS NOT NULL returns all Snapshots
  • When you begin a transaction and finally roll back the transaction, the changed pages from that transaction stay in the snapshot
  • REVERTing a snapshot sets the log file to 0.5MB and 2 VLFs
  • Database Snapshot Performance Considerations under I/O Intensive Workloads
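
A minimal sketch of creating and listing snapshots (names and paths are hypothetical):

```sql
-- Hedged sketch: database snapshot backed by a sparse file
CREATE DATABASE SalesDB_Snapshot ON
    (NAME = SalesDB_Data, FILENAME = N'C:\Snapshots\SalesDB_Data.ss')
AS SNAPSHOT OF SalesDB;

-- all snapshots have a non-NULL source_database_id
SELECT name FROM sys.databases WHERE source_database_id IS NOT NULL;
```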

Module 10: Corruption Detection and Recovery

  • I/O Errors
    • 823: Hard I/O error
      • OS can’t read data from disk
    • 824: Soft I/O error
      • Page Checksum failure
      • Torn Page Detection
        • Bits are corrected as soon as the pages are read into the Buffer Pool
      • Wrong PageID
    • 825: Read-Retry Error
    • Logged in msdb.suspect_pages
  • Page Protection options don’t protect you
    • They just detect corruption
  • Page Checksum is stored in the Page Header
  • When Page Checksum is enabled, nothing is done
    • A page is only checksummed when the page gets dirty
    • Not available for TempDb before SQL Server 2008
      • You have to enable Checksum explicitly on TempDb when upgrading from SQL Server 2005 to SQL Server 2008
  • Automatic Page Repair is available in both Standard and Enterprise Editions
  • CHECKDB doesn’t take any locks
  • Last Known Good (last time when CHECKDB run without any problems) is reported in the SQL Server Error Log, when you start SQL Server
  • EXTENDED_LOGICAL_CHECKS
    • Indexed Views
    • XML Indexes
    • Spatial Indexes
  • Statistics Blobs are not checked
  • Repair can break foreign-key constraints
  • VLDBs
    • DBCC CHECKFILEGROUP
      • E.g. One partition per each night
  • You can run CHECKTABLE on the system tables
  • REPAIR_REBUILD option needs single user mode
  • TABLERESULTS
    • Returns the output as a table
    • Undocumented, because the output can change from release to release
  • Online Index Rebuild reads the old index during the rebuild
    • Offline Index Rebuild also reads the old index almost every time during the rebuild
    • You have to drop and recreate the NCI
    • It could be problematic if the NCI enforces a constraint: in the meantime while the NCI is dropped, users can insert data that is not validated by the NCI, and afterwards you can’t recreate the NCI
  • DBCC CHECKDB can be run on a suspect database
    • Suspect means that recovery started, but couldn’t be completed, so it is not transactionally consistent
    • But you must put it in EMERGENCY mode
  • Not all pages can be single page restored (because not all operations on them are fully logged)
    • Boot page
    • Fileheader page
    • Allocation bitmap (not including IAM pages)
    • Certain pages in hidden, critical system catalogs
    • Logged in msdb.suspect_pages
    • Up to 100 pages can be restored during a Single Page Restore
    • Online Single Page Restore is an Enterprise feature
    • If you have subsequent log backups, they must be also restored
      • You also need to do a tail-log backup and restore it also
    • And the log system knows that only log records for the specific page must be replayed/rolled back
  • Allocation System belongs to Page-ID 99
  • Repair is going to delete data!
  • Repair is always offline
  • When you repair a replicated database, you must re-initialize the Subscriptions
    • Repair works physically, Replication works logically
  • Msg 8921 when TempDb runs out of space
  • Do not try to repair system tables!!!
  • Emergency Mode for rebuilding the transaction log
  • XVI32 Hex Editor ;-)
  • Damaged PFS page
    • Allocation Order Scan uses the PFS page
      • E.g. SELECT * FROM TableA
    • You can retrieve the data even when the PFS page is damaged, by using a Clustered Index Scan ordered by the Clustering Key, or an index hint that forces a Clustered Index Scan
  • A suspect or recovery-pending database can’t be detached on SQL Server 2008, because it can’t be re-attached any more…
    • If you force it, it can be detached
    • To attach it again
      • Create a new database with the SAME log
      • Set the database offline
      • Delete the MDF/LDF file
      • Copy the MDF/LDF file from the detached database
      • Set the database online
      • Database is again in the SUSPECT state
  • ATTACH_REBUILD_LOG
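
A single page restore can be sketched like this (database name, page ID, and paths are hypothetical):

```sql
-- Hedged sketch: restore one damaged page from the last full backup
RESTORE DATABASE SalesDB PAGE = '1:57'
    FROM DISK = N'C:\Backups\SalesDB_full.bak' WITH NORECOVERY;
-- restore all subsequent log backups WITH NORECOVERY, take and restore
-- a tail-log backup, then bring the page online:
RESTORE DATABASE SalesDB WITH RECOVERY;
```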

Thanks for reading and stay tuned for the last week :-)

-Klaus

SQLMCM Training – Day 10

Today Paul was speaking about Database Mirroring and Jonathan was talking about Failover Clustering – all in all a very intensive day :-)

Module 6: Database Mirroring

Module 7: Failover Clustering

  • Preferred owner configuration defines on which node a SQL Server can run
  • Cluster Validation Tests take the shared cluster disks offline
    • You have to plan offline time
  • You need a current Cluster Validation Test Report for Microsoft Product Support Services to get help
    • Rerun Validation Tests, as soon as you add a new node to the cluster
  • Only the active nodes have to be licensed
    • The node with the most CPUs defines how many CPUs you have to license when you use a CPU based license
  • You need an odd number of votes to form a Quorum
  • Don’t enable Large Pages when 2 instances can run on one node
    • The whole buffer pool must be allocated during startup phase of SQL Server
  • SCSI-3 Persistent Reservation must be enabled on the LUN level in the SAN
    • It’s normally not enabled by default
    • It’s needed for the shared disks in the cluster
  • TempDb is currently not supported on a local disk in a cluster
    • SQL Server Denali will support it
    • You currently can’t use SSDs as local disks for TempDb
  • Network Binding Order must be set with multiple NICs
  • MSDTC is only needed, when you need distributed transactions
    • MSDTC is installed as a cluster resource
      • MSDTC disk could be mounted on another cluster node
    • MSDTC should be in the same resource group as SQL Server
      • MSDTC can fail together with SQL Server
      • Otherwise MSDTC could become a bottleneck
  • You can have a separate MSDTC instance per SQL Server instance
  • Slipstreaming
  • Failback
    • Should only be configured when the failover node has fewer resources than the original one
    • Otherwise a failback occurs automatically as soon as the old node gets online
  • You need Disk Dependencies if you’re using Mount Points
  • Virtualization doesn’t provide you High Availability during Patching
  • Rolling Patches/Upgrades
    • Go to the SQL Server Instance Properties inside Cluster Manager (for each SQL Server Instance)
    • Advanced Policies Tab
    • Remove the Possible Owner who you are currently patching
    • The Instance is taken offline/online during Patching, which is why these steps are needed
      • Patching node must be removed from the possible owners
    • After one instance is patched
      • Add it back as a possible owner, then fail over to the patched instance
      • Remove possible owner from the unpatched instance
      • Patch the unpatched instance

Thanks for reading

-Klaus

SQL MCM Training – Day 9

Today Jonathan was talking the whole day about Consolidation and Virtualization. There were again a lot of new concepts, and new things to learn.

Module 4: Consolidation

  • Event Notifications for Monitoring
    • Deadlocks, Locking/Blocking Scenarios
  • Multiple Instances for TempDb bottleneck
  • Soft-NUMA should be used to bound multiple instances to dedicated CPUs
    • CPU Affinity Masks should not be used, because they are not supported anymore in SQL Server Denali
    • CPUs can only be taken from the local NUMA node
      • there is no cross-node exchange
      • Soft-NUMA nodes can’t be crossed
      • only local memory access
      • Improves performance
    • TCP ports can be bound to Soft NUMA instances
  • Multiple concurrent Backups can lead to Buffer Pool Contention
  • Multiple instances when you have high concurrent workloads and not enough worker threads
    • Number of worker threads is bound to the instance level
  • SQL Server 2008 R2 will report if you run on a Hypervisor
  • Detach, Copy, Attach
    • Maybe you’re copying free space over the network
  • Raw Device Mapping (RDM)
  • Migrate Databases through Database Mirroring/Log Shipping
  • On-the-Fly VHD file migration
    • Put everything on a VHD
      • Including master database etc.
    • Move the VHD file from one VM to another VM
  • There are no providers for 64-bit Excel/Access; you can use the following workaround
    • Install SQL Express x32
    • Linked Server from SQL Server x64 to SQL Express x32
    • Linked server from SQL Express x32 to Excel/Access
  • Multiple Instances because of security issues
    • Each database needs sysadmin/securityadmin role
    • Put each database on its own instance
  • Virtual Machine Failover Cluster for Patching SQL Server Instances
    • Patch the Passive Node
    • Fail over
    • Patch the other Node
    • Fail back (if needed)
  • The total of min server memory should be smaller than the total amount of physical memory available
  • Max server memory is ALWAYS preferred for multiple instances
  • Memory\Available MBytes should stay above 150–300MB
  • Thread Pool Starvation
    • Not enough worker threads are available
    • E.g.
      • Query runs with MAXDOP 4 across 100 different connections
      • Each query needs 9 worker threads (4 producer threads, 4 consumer threads, 1 coordinator thread)
      • That’s almost 900 concurrent worker threads
      • Queries can time out, because this number of worker threads is not available at one time, see
        • select max_workers_count from sys.dm_os_sys_info
  • SQL Server Consolidation at Microsoft
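
The worker thread math above can be checked against the configured maximum:

```sql
-- e.g. 100 concurrent queries at MAXDOP 4, ~9 workers each
-- (4 producers, 4 consumers, 1 coordinator) = ~900 workers needed
SELECT max_workers_count FROM sys.dm_os_sys_info;
```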

Module 5: Virtualization

Thanks for reading

-Klaus

SQL MCM Training – Day 8

Today Paul talked the whole day about HA/DR strategies and Backup/Restore operations. I already knew a lot about Backup/Restore, but Paul showed some cool things that were even new to me :-). Here are my study notes for the 3 modules of today:

Module 1: HA DR Strategies

  • Zero downtime is not possible
    • Not even with Live Migration
  • Failover Clustering
    • Server is protected
    • Data is not protected
  • RPO – Recovery Point Objective
    • Maximum allowable data-loss
    • Zero data-loss is possible through synchronous Database Mirroring or synchronous SAN replication
  • What is the transaction log generation rate of your workload?
    • Impacts
      • Database Mirroring
      • Log Shipping
      • Replication
      • Log File Size
  • Recovery Model
    • Database Mirroring needs FULL
    • Log shipping works with FULL and BULKLOGGED
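
Which databases qualify for Database Mirroring or Log Shipping can be checked via their recovery model:

```sql
-- Mirroring needs FULL; Log Shipping works with FULL and BULK_LOGGED
SELECT name, recovery_model_desc
FROM sys.databases
WHERE database_id > 4;   -- skip the system databases
```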

Module 2: Backups

  • Don’t plan a backup strategy, plan a restore strategy!!!
  • When a Full Database Backup starts, a CHECKPOINT occurs
    • CHECKPOINT generates log records
  • Concurrent Full Backups and Tx Log Backups are possible
    • Log Clearing from the Tx Log Backup is deferred until the concurrent Full Backup is completed
  • DBCC SQLPERF (LOGSPACE)
  • Only a log backup clears the transaction log in the FULL recovery model
  • Reverting from a database snapshot breaks the backup chain
  • While a log backup is running, you can't do a minimally logged operation in the BULK_LOGGED recovery model
    • The ML bitmap can't be changed in the meantime
  • A Tail Log backup in the BULK_LOGGED recovery model also needs the data files (it has to back up the changed extents)
    • They must be accessible, not only the transaction log
    • The Tail Log backup can still be performed without them, but restoring it leads to a corrupt database
  • Backup chain is not broken, when you switch between full <> bulk logged <> full
  • Backup chain is only broken when you switch to the SIMPLE recovery model
  • A Database Snapshot can't be created on a database in the RESTORING state, because Backups/Restores don't go through the Buffer Pool
  • SQL Server can't rebuild a partitioned index when its partitions are spread across filegroups and some of those filegroups are read-only
    • As a workaround you can use Partitioned Views to union the Read/Write and the Read-Only filegroups
  • When you do a Full backup of a read-only filegroup, nothing else (diff or log backups) has to be done
    • Just back it up once, and you’re done
  • COPY_ONLY
    • Also possible with transaction log backups – will not clear the transaction log
  • For each backup device a separate writer thread is created
    • For the restore, a separate reader thread is created for each backup device
  • Mirrored Backups
    • When a mirror disappears, the entire operation fails
  • CONTINUE_AFTER_ERROR
    • Forces Backups & Restores of corrupt databases
  • Backup Compression is new in SQL Server 2008 Enterprise Edition
    • SQL Server 2008 R2 also includes it in Standard Edition
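A few of the backup points above expressed as T-SQL; the database name and backup paths are placeholders:

```sql
-- Monitor transaction log usage per database
DBCC SQLPERF (LOGSPACE);

-- Full backup (a CHECKPOINT occurs when it starts); compression needs
-- Enterprise Edition on SQL Server 2008, Standard is enough on 2008 R2
BACKUP DATABASE MyDB
TO DISK = N'D:\Backup\MyDB_Full.bak'
WITH COMPRESSION;

-- COPY_ONLY log backup: does not clear the transaction log
BACKUP LOG MyDB
TO DISK = N'D:\Backup\MyDB_Log_CopyOnly.trn'
WITH COPY_ONLY;

-- Mirrored backup: if one mirror disappears, the entire operation fails
BACKUP DATABASE MyDB
TO DISK = N'D:\Backup\MyDB_Full.bak'
MIRROR TO DISK = N'E:\Backup\MyDB_Full_Mirror.bak'
WITH FORMAT;
```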

Module 3: Restores

  • Files can only be set OFFLINE; you have to restore from backup to bring them ONLINE again
  • Prevent Backup Compression from preallocating space
    • Trace Flag 3042
    • kb2001026
  • STOPAT does nothing on a full or diff backup restore
    • It's accepted for syntactical reasons only
  • After a STOPAT restore, a full database backup should be taken to create a new well-known recovery starting point
    • Otherwise the restore sequence begins at the initial full backup, and all subsequent restores must use STOPAT, which is way more complicated
  • fn_dump_dblog
    • Lets you look into the log records of a backup
    • It has 64 parameters!!!
  • STOPBEFOREMARK
    • Stops the restore process on a specific LSN
  • Piecemeal Restore
    • PRIMARY filegroup must be restored with the PARTIAL option
    • E.g. Single Page Restore when a page is corrupt
    • Only possible in the FULL or BULK LOGGED recovery model
    • Must start with the PRIMARY filegroup
  • sp_delete_backuphistory
    • Backup history tables in msdb must be cleaned up manually
  • Startup parameter -Q
    • Starts SQL Server without creating TempDb
    • Can be used when the model database is damaged, and SQL Server can’t create a copy of it for TempDb
  • xp_readerrorlog
  • Option RESTART
    • Skips restore steps that have already been done
    • Writes a .CKP file that stores the different phases
    • Stored in the default backup folder
    • TF 3004: Instant File Initialization
    • TF 3014: What's going on during restores
    • TF 3605: Sends the output of TF 3004 and TF 3014 to the error log
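Two of the restore scenarios above sketched in T-SQL (names, paths, and the timestamp are placeholders):

```sql
-- Point-in-time restore: STOPAT only has an effect on log restores
RESTORE DATABASE MyDB
FROM DISK = N'D:\Backup\MyDB_Full.bak'
WITH NORECOVERY;

RESTORE LOG MyDB
FROM DISK = N'D:\Backup\MyDB_Log.trn'
WITH STOPAT = N'2011-10-10 14:30:00', RECOVERY;

-- Piecemeal restore: must start with the PRIMARY filegroup and the PARTIAL option
RESTORE DATABASE MyDB
FILEGROUP = N'PRIMARY'
FROM DISK = N'D:\Backup\MyDB_Primary.bak'
WITH PARTIAL, NORECOVERY;
```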

Thanks for reading

-Klaus

MCM Training – Day 6 & 7

My whole weekend was dedicated to MCM studying: I reviewed a lot of the stuff we learned during the last week, and I was also able to read a lot of the additional whitepapers and blog posts that Paul, Kimberly, and Jonathan had referred us to.

This morning, week 3 of the SQLskills training started. This week is all about High Availability and Disaster Recovery, and it seems to be a lot of fun; I expect to see a lot of complicated demos and usage scenarios on how to combine the HA/DR technologies in SQL Server.

Some of you have already asked me what my day looks like, so I want to share my daily schedule here with you:

  • 06:30am
    • Wake up
  • 06:30am – 07:00am
    • Getting up
    • Reading the news from Austria (I'm 9 hours behind the time in Austria, so it's almost 04:00pm in the evening in Vienna)
  • 07:00am – 07:30am
    • Video Chat with Karin & Philip
    • Philip already enjoys it and looks forward to it every day, even though he is only 18 months old :-)
  • 07:30am – 08:30am
    • Breakfast at SQLskills training, get up and running for the training
  • 08:30am – 05:30pm
    • SQLskills training
  • 05:30pm – 07:00pm
    • Some time that is not dedicated to SQL Server ;-)
  • 07:00pm – 11:00pm
    • Reviewing training material/notes from the day, reading whitepapers, working on some things in SQL Server that I want to try based on the training

This schedule repeats the whole week, except this week, because the training “only” runs for 4 days.

Thanks for reading

-Klaus