Reasons for Excessive Log Growth
14 Jul 2016 @ 12:55 PM

There are several possible reasons for excessive transaction log growth. The most common are listed below.

  • Operating a database in the FULL recovery model without taking log backups
  • Performing index maintenance
    • Index maintenance operations are a very common cause of excessive transaction log usage and growth, especially in databases using the FULL recovery model. The amount of log space required to perform index maintenance depends on the following factors:
      • Rebuild or reorganize: Index rebuilds generally use a lot more space in the log.
      • Recovery model: If the risks to point-in-time recovery are understood and acceptable, then index rebuilds can be minimally logged by temporarily switching the database to the BULK_LOGGED recovery model. Index reorganization, however, is always fully logged.
    • If the recovery model is switched to SIMPLE to perform an index rebuild, the LSN chain is immediately broken and we will only be able to recover the database to a point in time contained in the previous transaction log backup. To restart the chain, we need to switch back to the FULL model and immediately take a full or differential database backup.
    • If the recovery model is switched to BULK_LOGGED, the LSN chain is always maintained, but there are still implications for the ability to perform point-in-time restores, since a log backup that contains a minimally logged operation can’t be used to recover to a point in time. If the ability to perform a point-in-time recovery is paramount for a database, then don’t use the BULK_LOGGED recovery model for index rebuilds or any other minimally logged operation, unless it can be done at a time when there is no concurrent user activity in the database. Alternatively, consider performing index reorganizations, in the FULL recovery model, where possible.
    • If the BULK_LOGGED model is used, take steps to minimize the time period during which point-in-time restore is unavailable, and so minimize exposure to data loss. To do this, take a log backup in FULL model, switch to BULK_LOGGED, perform the index rebuild, then switch back to FULL and take another log backup (see the sketch below).
    • A final important point to note is that an ALTER INDEX REBUILD operation occurs in a single transaction. If the index is large, this could represent a long-running transaction that will prevent space reuse in the log for its duration. This means that, even if we rebuild an index in SIMPLE model, where we might think that the log should remain small since it is auto-truncated after a CHECKPOINT operation, and the rebuild is minimally logged, the log file can still expand quite rapidly during an extensive rebuild operation.
    • According to Microsoft (https://technet.microsoft.com/en-us/library/ms189858.aspx), for fragmentation levels greater than 5 percent but less than or equal to 30 percent we should reorganize the index, and for fragmentation levels greater than 30 percent we should rebuild it (see the fragmentation-check sketch below).
  • Long-running or uncommitted transactions that prevent space in the transaction log from being reused.
  • Lack of log space reuse
    • The sys.databases DMV provides the reason why log space is not reused:

SELECT name , recovery_model_desc , log_reuse_wait_desc FROM sys.databases

The log_reuse_wait_desc column gives the reason why log space was not reused. For the possible values, see https://msdn.microsoft.com/en-us/library/ms178534.aspx.
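
As a concrete illustration of the minimal-logging workflow described above, here is a minimal T-SQL sketch. The database name MyDB, the table dbo.MyTable, the index IX_MyTable, and the backup paths are all hypothetical placeholders.

-- All object names and paths below are hypothetical placeholders.
-- Step 1: take a log backup while still in FULL recovery.
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log_before.trn';

-- Step 2: switch to BULK_LOGGED so the rebuild can be minimally logged.
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

-- Step 3: the rebuild itself (runs as a single transaction).
ALTER INDEX IX_MyTable ON dbo.MyTable REBUILD;

-- Step 4: switch back to FULL and immediately take another log backup,
-- restoring point-in-time recovery from this point onward.
ALTER DATABASE MyDB SET RECOVERY FULL;
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log_after.trn';

And a sketch of the reorganize-versus-rebuild decision using the thresholds from the Microsoft guidance above (dbo.MyTable is again a hypothetical placeholder):

-- Suggest an action per index of dbo.MyTable based on measured fragmentation.
SELECT  i.name AS IndexName ,
        ips.avg_fragmentation_in_percent ,
        CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
             WHEN ips.avg_fragmentation_in_percent > 5 THEN 'REORGANIZE'
             ELSE 'No action needed'
        END AS SuggestedAction
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i ON i.object_id = ips.object_id
                                 AND i.index_id = ips.index_id;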

Posted By: Uma Mahesh
Last Edit: 14 Jul 2016 @ 12:58 PM

Categories: SQL Server

Physical Architecture of Transaction Log
14 Jul 2016 @ 12:51 PM

The physical architecture of the transaction log is shown in the image below. The log is split internally into smaller chunks called virtual log files (or VLFs). These are simply an aid to easier internal management of the transaction log. When a VLF becomes full, logging automatically proceeds to use the next VLF in the transaction log. There is a misconception that eventually the transaction log will run out of space, but this is where the transaction log is so different from data files.

[Image: Physical architecture of the transaction log]

The transaction log is really a circular file, provided that the log records at the start of the log have been truncated (cleared). When logging reaches the end of the transaction log, it wraps around to the start again and begins overwriting what was there before.
A log record is no longer needed in the transaction log if all of the following are true:

  • The transaction of which it is part has committed.
  • The database pages it changed have all been written to disk by a checkpoint.
  • The log record is not needed for a backup (full, differential, or log).
  • The log record is not needed for any feature that reads the log (such as database mirroring or replication).

A log record that is still needed is called active, and a VLF that has at least one active log record is also called active. Every so often, the transaction log is checked to see whether all the log records in a full VLF are active or not; if they are all inactive, the VLF is marked as truncated (meaning the VLF can be overwritten once the transaction log wraps). When a VLF is truncated, it is not overwritten or zeroed in any way—it is just marked as truncated and can then be reused.

This process is called log truncation, not to be confused with actually shrinking the size of the transaction log. Log truncation never changes the physical size of the transaction log, only which portions of it are active. The image below shows the transaction log from the first image after truncation has occurred.

[Image: Transaction log after truncation]
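
To observe the VLF structure of the current database’s log, the undocumented (but long-standing and widely used) DBCC LOGINFO command lists one row per VLF:

DBCC LOGINFO;
-- Status = 2 means the VLF is active;
-- Status = 0 means it has been truncated and can be reused
-- when the log wraps around.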

Posted By: Uma Mahesh
Last Edit: 14 Jul 2016 @ 12:53 PM

Categories: General

Finding Size of Index
11 May 2016 @ 5:34 PM

The stored procedure below returns the indexes, and their sizes, for a given table:

CREATE PROCEDURE getIndexSizes ( @tblName VARCHAR(500) )
WITH EXECUTE AS CALLER
AS
    BEGIN
        SELECT  OBJECT_NAME(i.object_id) AS TableName ,
                i.name AS IndexName ,
                i.index_id AS IndexID ,
                -- used_pages counts 8 KB pages, so * 8 / 1024 converts to MB
                ( 8 * SUM(a.used_pages) ) / 1024 AS 'Indexsize(MB)'
        FROM    sys.indexes AS i
                JOIN sys.partitions AS p ON p.object_id = i.object_id
                                            AND p.index_id = i.index_id
                JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
        WHERE   OBJECT_NAME(i.object_id) = @tblName
        GROUP BY i.object_id ,
                i.index_id ,
                i.name
        ORDER BY OBJECT_NAME(i.object_id) ,
                i.index_id;
    END;
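
A hypothetical example call, assuming a table named Person exists in the current database:

EXEC getIndexSizes @tblName = 'Person'; -- 'Person' is a placeholder table name
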
Posted By: Uma Mahesh
Last Edit: 11 May 2016 @ 05:35 PM

Categories: SQL Server

Changing recovery model removes plans from procedure cache
05 Apr 2016 @ 1:26 PM

Did you know that when you change the recovery model of a database, plans for that database are removed from the cache? I didn’t…

SQL 2012 :-

I was doing some performance tuning today, using the DMV sys.dm_exec_query_stats to look at summary information for database performance. I normally find this DMV a very useful starting point for any database tuning. However, to my surprise, today it held very few rows for the database I was attempting to tune.

A little bit of investigation revealed that my oldest plan in cache for this database was created just after the recovery model had been changed from FULL to SIMPLE. Now, I know that certain SQL Server operations can flush the cache (for example, changing the max degree of parallelism on a server), but I’d never come across this one before.

I wonder how many DBAs are inadvertently flushing the procedure cache by switching recovery models often, for example to help with bulk-logged performance?

Here’s a simple test script to demonstrate :-

1. Create a database

USE Master
GO
CREATE DATABASE PlanCacheTest

2. Confirm the recovery model is full.

SELECT name, recovery_model_desc FROM sys.databases

3. Create a test proc in the database.

USE PlanCacheTest
GO

CREATE PROC PlanCacheProc
AS
SELECT * FROM sysindexes ORDER BY dpages DESC

4. Run the proc and check that there is now an entry in the procedure cache.

EXEC PlanCacheProc 

SELECT * 
FROM sys.dm_exec_query_stats qs 
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt 
WHERE  qt.dbid = DB_ID('PlanCacheTest')

This will return one row.

5. Now change the recovery model

ALTER DATABASE PlanCacheTest SET RECOVERY SIMPLE

6. Now look in the procedure cache again; the plan for my new proc has gone…

SELECT * 
FROM sys.dm_exec_query_stats qs 
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt 
WHERE  qt.dbid = DB_ID('PlanCacheTest')

This will return zero rows.

7. Clean up.

USE Master
GO
DROP DATABASE PlanCacheTest

I see the same behaviour when switching between all three recovery models.

Posted By: Uma Mahesh
Last Edit: 05 Apr 2016 @ 01:29 PM

Categories: SQL Server

Search for an Object in all Databases
18 Feb 2016 @ 11:46 AM

The stored procedure below searches every database for objects whose names contain a given search string, optionally excluding names that contain a second string:
CREATE PROCEDURE [dbo].[SearchObject]
(
@SearchString VARCHAR(255) ,
@NotContains VARCHAR(255)
)
AS
DECLARE @Text VARCHAR(1500) ,
@TextInit VARCHAR(1500);

SET @TextInit = 'USE @Code
INSERT INTO ##Temp2
SELECT ''@Code'' AS dbName,
       a.[Object Name],
       a.[Object Type]
FROM (SELECT DISTINCT
             sysobjects.name AS [Object Name],
             CASE
                 WHEN sysobjects.xtype = ''C'' THEN ''CHECK constraint''
                 WHEN sysobjects.xtype = ''D'' THEN ''Default or DEFAULT constraint''
                 WHEN sysobjects.xtype = ''F'' THEN ''Foreign Key''
                 WHEN sysobjects.xtype = ''FN'' THEN ''Scalar function''
                 WHEN sysobjects.xtype = ''P'' THEN ''Stored Procedure''
                 WHEN sysobjects.xtype = ''PK'' THEN ''PRIMARY KEY constraint''
                 WHEN sysobjects.xtype = ''S'' THEN ''System table''
                 WHEN sysobjects.xtype = ''TF'' THEN ''Function''
                 WHEN sysobjects.xtype = ''TR'' THEN ''Trigger''
                 WHEN sysobjects.xtype = ''U'' THEN ''User table''
                 WHEN sysobjects.xtype = ''UQ'' THEN ''UNIQUE constraint''
                 WHEN sysobjects.xtype = ''V'' THEN ''View''
                 WHEN sysobjects.xtype = ''X'' THEN ''Extended stored procedure''
             END AS [Object Type]
      FROM sysobjects
      WHERE sysobjects.type IN (''C'', ''D'', ''F'', ''FN'', ''P'', ''K'', ''S'', ''TF'', ''TR'', ''U'', ''V'', ''X'')
            AND sysobjects.category = 0
            AND CHARINDEX(''@SearchString'', sysobjects.name) > 0
            AND ((CHARINDEX(''@NotContains'', sysobjects.name) = 0
            --OR CHARINDEX(''@NotContains'', sysobjects.name) <> 0
            ))) a';

SET @TextInit = REPLACE(@TextInit, '@SearchString', @SearchString);

SET @TextInit = REPLACE(@TextInit, '@NotContains', @NotContains);

SELECT name AS dbName ,
CAST(NULL AS VARCHAR(255)) AS ObjectName ,
CAST(NULL AS VARCHAR(255)) AS ObjectType
INTO ##Temp1
FROM master.dbo.sysdatabases
ORDER BY name;

SELECT *
INTO ##Temp2
FROM ##Temp1
WHERE 1 = 0;

DECLARE @Code VARCHAR(255) ,
@Count INT ,
@Incrementer INT;

SET @Count = ( SELECT COUNT(dbName)
FROM ##Temp1
);

DECLARE c_k CURSOR LOCAL FAST_FORWARD
FOR
SELECT dbName
FROM ##Temp1
ORDER BY dbName DESC;

OPEN c_k;

FETCH NEXT FROM c_k INTO @Code;

SET @Incrementer = 1;

WHILE ( @@fetch_status = 0 )
AND ( @Incrementer <= @Count )
BEGIN

SET @Text = REPLACE(@TextInit, '@Code', @Code);

PRINT @Text;

EXEC (@Text);

SET @Incrementer = @Incrementer + 1;

DELETE FROM ##Temp1
WHERE dbName = @Code;

FETCH NEXT FROM c_k INTO @Code;

END;

CLOSE c_k;

DEALLOCATE c_k;

SELECT *
FROM ##Temp2
ORDER BY dbName ,
ObjectType;

DROP TABLE ##Temp2;

DROP TABLE ##Temp1;
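
A hypothetical example call, searching every database for object names that contain 'Customer' but not 'Temp' (both values are placeholders):

EXEC dbo.SearchObject @SearchString = 'Customer', @NotContains = 'Temp';
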
Posted By: Uma Mahesh
Last Edit: 29 Feb 2016 @ 05:03 PM

Categories: General, SQL Server




