
MS SQL Server Technology Discussions


9Pardhu


[b] Query to Find the Number of Rows, Columns, and Byte Size for Each Table in the Current Database – Find the Biggest Table in the Database[/b]


[CODE]
USE DatabaseName
GO
-- Temp table to hold the output of sp_spaceused for each table
CREATE TABLE #temp (
table_name sysname,
row_count INT,
reserved_size VARCHAR(50),
data_size VARCHAR(50),
index_size VARCHAR(50),
unused_size VARCHAR(50))
SET NOCOUNT ON
-- Run sp_spaceused against every table (sp_msforeachtable is an undocumented system procedure)
INSERT #temp
EXEC sp_msforeachtable 'sp_spaceused ''?'''
-- Join to INFORMATION_SCHEMA.COLUMNS to count the columns of each table
SELECT a.table_name,
a.row_count,
COUNT(*) AS col_count,
a.data_size
FROM #temp a
INNER JOIN information_schema.columns b
ON a.table_name COLLATE database_default
= b.table_name COLLATE database_default
GROUP BY a.table_name, a.row_count, a.data_size
ORDER BY CAST(REPLACE(a.data_size, ' KB', '') AS INT) DESC
DROP TABLE #temp
[/CODE]


[b] Introduction and Explanation of SYNONYM – A Helpful T-SQL Feature for Developers[/b]


DBAs have traditionally referenced database objects by their [b][url="http://blog.sqlauthority.com/2007/06/26/sql-server-explanation-and-example-four-part-name/"]four part names[/url][/b]. SQL Server 2005 introduces the concept of a synonym: a single-part name that can replace a multi-part name in a SQL statement. Using synonyms cuts down on typing long multi-part object names, and it also provides an abstraction layer that protects SQL statements using the synonym from changes in the underlying objects (tables, etc.).

[b]Create Synonym:[/b]

[CODE]
USE AdventureWorks;
GO
CREATE SYNONYM MyLocation
FOR AdventureWorks.Production.Location;
GO
[/CODE]

[b]Use Synonym:[/b]

[CODE]
USE AdventureWorks;
GO
SELECT TOP 5 *
FROM MyLocation;
GO
[/CODE]

[b]Drop Synonym:[/b]

[CODE]
USE AdventureWorks;
GO
DROP SYNONYM MyLocation;
GO
[/CODE]


Synonyms can be created only on the following objects:[list]
[*]Assembly (CLR) Stored Procedure
[*]Assembly (CLR) Table-valued Function
[*]Assembly (CLR) Scalar Function
[*]Assembly Aggregate (CLR) Aggregate Functions
[*]Replication-filter-procedure
[*]Extended Stored Procedure
[*]SQL Scalar Function
[*]SQL Table-valued Function
[*]SQL Inline-table-valued Function
[*]SQL Stored Procedure
[*]View
[*]Table (User-defined)
[/list]
Additionally, synonyms can be used only to change the data of an object, not the schema of the object. Synonyms work only with the SELECT, UPDATE, INSERT, DELETE, and EXECUTE commands.
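For instance, an UPDATE works through a synonym exactly as it would against the base table. A minimal sketch, assuming the MyLocation synonym from above has not yet been dropped:

[CODE]
USE AdventureWorks;
GO
-- DML through the synonym hits the underlying Production.Location table
UPDATE MyLocation
SET ModifiedDate = GETDATE()
WHERE LocationID = 1;
GO
[/CODE]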

The following image demonstrates the use of synonyms.

[img]http://www.pinaldave.com/bimg/synonyms.gif[/img]

An example of the usefulness of this: suppose you had a stored procedure in a Users database that needed to access a Clients table on another production server. Assuming you created the stored procedure in the Users database, you might want to set up a synonym such as the following:

[CODE]
USE Users;
GO
CREATE SYNONYM Clients
FOR Offsite01.Production.dbo.Clients;
GO
[/CODE]

Now, when writing the stored procedure, instead of having to write out the entire four-part name every time you access the table, you can just use the synonym Clients. Furthermore, if you ever change the location or the name of the production database, all you need to do is modify one synonym instead of modifying all of the stored procedures that reference the old server.
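If the Clients table later moves (say, to a hypothetical server Offsite02), repointing takes one drop-and-recreate, since synonyms cannot be altered:

[CODE]
USE Users;
GO
-- Synonyms cannot be altered; drop and recreate to repoint it
DROP SYNONYM Clients;
GO
-- Offsite02 is a hypothetical new server name
CREATE SYNONYM Clients
FOR Offsite02.Production.dbo.Clients;
GO
[/CODE]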


[b] Export Data From SQL Server 2005 to Microsoft Excel Datasheet[/b]


Enable Ad Hoc Distributed Queries by running the following code in SQL Server Management Studio – Query Editor.

[CODE]
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
GO
RECONFIGURE;
GO
[/CODE]

Create an Excel spreadsheet in the root directory as c:\contact.xls (make sure you name it contact.xls). Open the spreadsheet and, on the first tab, Sheet1, create two columns named FirstName and LastName. Alternatively, you can [url="http://www.pinaldave.com/bimg/contact.xls"]download a sample spreadsheet[/url] from [url="http://www.pinaldave.com/bimg/contact.xls"]here[/url].

Run the following code in SQL Server Management Studio – Query Editor.

[CODE]
USE [AdventureWorks];
GO
INSERT INTO OPENROWSET ('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=c:\contact.xls;',
'SELECT * FROM [Sheet1$]')
SELECT TOP 5 FirstName, LastName
FROM Person.Contact
GO
[/CODE]

Open the contact.xls spreadsheet and you will see the first five records of Person.Contact inserted into the first two columns.

Make sure the spreadsheet is closed during this operation; if it is open, the query may throw an error. You can change the spreadsheet name, as well as the name of Sheet1, to whatever you prefer.
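The same OPENROWSET provider also works in the other direction. A minimal sketch, assuming the same workbook, for reading the spreadsheet contents back into a result set:

[CODE]
USE [AdventureWorks];
GO
-- Read the spreadsheet back through the same Jet provider
SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=c:\contact.xls;',
'SELECT * FROM [Sheet1$]');
GO
[/CODE]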


[b] Execute Same Query and Statement Multiple Times Using Command GO[/b]


The following example demonstrates how GO can be used to execute the same code multiple times.

[CODE]
SELECT GETDATE() AS CurrentTime
GO 5
[/CODE]

The above code returns the current time 5 times, as GO is followed by the count 5. The result below is displayed with “Results to Text” mode on.
Beginning execution loop
CurrentTime
———————–
2008-01-19 11:52:54.200
(1 row(s) affected)
CurrentTime
———————–
2008-01-19 11:52:54.217
(1 row(s) affected)
CurrentTime
———————–
2008-01-19 11:52:54.217
(1 row(s) affected)
CurrentTime
———————–
2008-01-19 11:52:54.217
(1 row(s) affected)
CurrentTime
———————–
2008-01-19 11:52:54.233
(1 row(s) affected)
Batch execution completed 5 times.
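GO with a count is also handy for generating test data. A minimal sketch, assuming a throwaway table in tempdb:

[CODE]
USE tempdb
GO
CREATE TABLE LoadTest (ID INT IDENTITY(1,1), LoggedAt DATETIME DEFAULT GETDATE())
GO
-- The INSERT batch below is repeated 100 times by GO 100
INSERT INTO LoadTest DEFAULT VALUES
GO 100
SELECT COUNT(*) AS RowsInserted FROM LoadTest
GO
DROP TABLE LoadTest
GO
[/CODE]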


[b] Display Fragmentation Information of Data and Indexes of Database Table[/b]


One of my friends, involved with a large medical transcription business, invited me to give a SQL Server improvement talk last weekend. I had a great time talking with a group of DBAs and developers. One of the topics discussed was how to find fragmentation information for any table in a particular database. In SQL Server 2000 this was easy to find using the DBCC SHOWCONTIG command, although that command has its limitations.

SQL Server 2005 has the sys.dm_db_index_physical_stats dynamic management function, which returns size and fragmentation information for the data and indexes of the specified table or view. You can run the following T-SQL against any database to get detailed information about it.

[CODE]
SELECT *
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('test_contig'), NULL, NULL , 'DETAILED')
[/CODE]

The above query returns a lot of information; most of the time we only need to know the table name, the index name, and the percentage of fragmentation. The following query returns only those three most important details. I have added an extra condition so the results are filtered to rows where the average fragmentation is greater than 20%.

[CODE]
SELECT OBJECT_NAME(i.OBJECT_ID) AS TableName,
i.name AS IndexName,
indexstats.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') indexstats
INNER JOIN sys.indexes i ON i.OBJECT_ID = indexstats.OBJECT_ID
AND i.index_id = indexstats.index_id
WHERE indexstats.avg_fragmentation_in_percent > 20
[/CODE]

The results will help the DBA prepare the necessary reports.


[b] Reclaim Space After Dropping Variable-Length Columns Using DBCC CLEANTABLE[/b]


Most DBAs and developers have observed that when a variable-length column is dropped from a table, the size of the table does not shrink. The table size stays the same until the indexes are reorganized or rebuilt. There is also a DBCC command, DBCC CLEANTABLE, which can be used to reclaim the space previously occupied by variable-length columns. Variable-length columns include varchar, nvarchar, varchar(max), nvarchar(max), varbinary, varbinary(max), text, ntext, image, sql_variant, and xml. Space can also be reclaimed when a variable-length column is modified to a shorter length.

The DBCC command for reclaiming space is very simple. The following example is for the AdventureWorks database and the Person.Contact table.

[CODE]
-- The third argument is the batch size: 0 processes the whole table in one transaction
DBCC CLEANTABLE ('AdventureWorks','Person.Contact', 0)
[/CODE]

The result of DBCC CLEANTABLE is displayed below.

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC CLEANTABLE is a fully logged operation. Note that it cannot be used on temp tables or system tables.
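To watch the reclaim happen, here is a minimal sketch in tempdb (the table and column names are made up for the demonstration):

[CODE]
USE tempdb
GO
CREATE TABLE CleanDemo (ID INT, Payload VARCHAR(8000))
GO
-- Fill the variable-length column so the table occupies real space
INSERT INTO CleanDemo VALUES (1, REPLICATE('x', 8000))
GO 100
-- Dropping the column does not release the space by itself
ALTER TABLE CleanDemo DROP COLUMN Payload
GO
EXEC sp_spaceused 'CleanDemo' -- data size unchanged
DBCC CLEANTABLE ('tempdb', 'CleanDemo', 0)
EXEC sp_spaceused 'CleanDemo' -- space reclaimed
GO
DROP TABLE CleanDemo
GO
[/CODE]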


[b] Change Compatibility Level – T-SQL Procedure[/b]

[b]The best practice for changing the compatibility level of a database involves the following three steps.[/b][list]
[*]Set the database to single user access mode by using ALTER DATABASE SET SINGLE_USER.
[*]Change the compatibility level of the database.
[*]Put the database in multiuser access mode by using ALTER DATABASE SET MULTI_USER.
[/list]
The above three steps translate into the following T-SQL script. Run it in the Query Editor.

[CODE]
ALTER DATABASE AdventureWorks
SET SINGLE_USER
GO
EXEC sp_dbcmptlevel AdventureWorks, 90; -- 90 = SQL Server 2005
GO
ALTER DATABASE AdventureWorks
SET MULTI_USER
GO
[/CODE]
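For reference, sp_dbcmptlevel is deprecated from SQL Server 2008 onward; on newer versions the middle step would instead use the ALTER DATABASE syntax:

[CODE]
ALTER DATABASE AdventureWorks
SET COMPATIBILITY_LEVEL = 90
GO
[/CODE]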


[b] Interesting Observation – Using sqlcmd From SSMS Query Editor[/b]

sqlcmd can be used from the Query Editor, but it has to be enabled first.

The following image displays how sqlcmd can be enabled in the Query Editor: go to the SSMS menu >> Query >> (click on) SQLCMD Mode.

[img]http://www.pinaldave.com/bimg/sqlcmd_ex.jpg[/img]

From then on, the Query Editor will support sqlcmd mode. Let us look at the following example, where the script contains operating system commands as well as SQL commands.
[CODE]
SELECT @@Version ServerVersion
!! Dir "C:\Documents and Settings"
!! Date /T
SELECT GETDATE() CurrentTime
[/CODE]
Once the above commands have run, the following output is displayed in the Results window.

[img]http://www.pinaldave.com/bimg/sqlcmd_ex1.jpg[/img]

[b]Interesting Observation:[/b]
[i]What is really interesting is that, out of the complete batch, all the operating system commands run first, followed by the SQL commands. The order in which the SQL and operating system commands appear in the script does not affect the result set.[/i]
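SQLCMD Mode also brings scripting variables into the Query Editor. A minimal sketch, assuming the mode is enabled as above:

[CODE]
:setvar DBName AdventureWorks
USE $(DBName)
GO
SELECT DB_NAME() AS CurrentDatabase
GO
[/CODE]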


[b] Find Currently Running Query – T-SQL[/b]

This is a script I have always kept in my archive. The following script finds out which queries are currently running on your server.

[CODE]
SELECT sqltext.TEXT,
req.session_id,
req.status,
req.command,
req.cpu_time,
req.total_elapsed_time
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext
[/CODE]

While running the above query, if you find any query that has been running for a long time, it can be killed using the following command.

[CODE]
KILL [session_id]
[/CODE]
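If the killed session takes a while rolling back, the standard WITH STATUSONLY option reports its progress (same placeholder session id as above):

[CODE]
KILL [session_id] WITH STATUSONLY
[/CODE]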


[b] sqlcmd – Using a Dedicated Administrator Connection to Kill Currently Running Query[/b]


If SQL Server is not responding due to a long-running or resource-hogging query, sqlcmd can be used to connect to the server from another computer and kill the offending query. SQL Server provides a special diagnostic connection that allows administrators to access SQL Server when a standard connection is not possible. Except in very extreme situations, the dedicated administrator connection (DAC) is always successful.

Let us see how we can use DAC using sqlcmd.

Connect to SQL Server using sqlcmd with the -A option, which establishes the DAC.

[i][b]C:\>sqlcmd -S localhost -U sa -P dev -d master -A[/b][/i]

[img]http://www.pinaldave.com/bimg/sqlcmd_1.jpg[/img]

For our test purposes, run the following query, which will overload the server and probably make it unresponsive. If your configuration is not strong, I strongly suggest not running it.

[i][b]The following query is just for test purposes and is not part of the solution.[/b][/i]

[CODE]
USE AdventureWorks
GO
SELECT *
FROM Person.Address
CROSS JOIN Person.Address a1
CROSS JOIN Person.Address a2
CROSS JOIN Person.Address a3
CROSS JOIN Person.Address a4
CROSS JOIN Person.Address a5
CROSS JOIN Person.Address a6
CROSS JOIN Person.Address a7
CROSS JOIN Person.Address a8
CROSS JOIN Person.Address a9
GO
[/CODE]

[img]http://www.pinaldave.com/bimg/sqlcmd_3.jpg[/img]

Once successfully connected, sqlcmd will show the prompt 1>. Enter the following T-SQL query, which returns the session ID of the currently running query along with its elapsed time.

[CODE]
SELECT
req.session_id,
req.status,
req.total_elapsed_time
FROM sys.dm_exec_requests req
WHERE status = 'running'
AND req.total_elapsed_time > 1
[/CODE]

[img]http://www.pinaldave.com/bimg/sqlcmd_2.jpg[/img]

Our previously started query had session id 52 on my server; the session id may be different on each SQL Server. Once the session id is identified, it can be killed using the KILL [SessionID] command. Always make sure to type the command GO after each complete query.

[CODE]
KILL 52
[/CODE]

[img]http://www.pinaldave.com/bimg/sqlcmd_4.jpg[/img]

Once the above command has run, it will kill our example long-running query, whose connection receives the following error.
[color=#ff0000][i]Msg 233, Level 20, State 0, Line 0[/i][/color]
[i]A transport-level error has occurred when receiving results from the server. (provider: Shared Memory Provider, error: 0 – No process is on the other end of the pipe.)[/i]

[img]http://www.pinaldave.com/bimg/sqlcmd_5.jpg[/img]

sqlcmd is a really useful SQL Server utility, and it comes in handy when the server is not responsive.


[b] Unique Nonclustered Index Creation with IGNORE_DUP_KEY = ON – A Transactional Behavior[/b]


We can see this feature in action in situations that involve INSERT and transactions. Let us consider an example with two tables: one table has all the data, and the second table has partial data. If you want to insert all the nonduplicate values from the first table into the second table, you can use this feature.

Let us first create a table that has a duplicate record based on FirstName.

[img]http://www.pinaldave.com/bimg/duptr1.jpg[/img]


[b]Option 1: With IGNORE_DUP_KEY set OFF[/b]

Once that table is created, create another table and a nonclustered index on it with the option IGNORE_DUP_KEY set OFF. Try to insert data from the first table into this table by using the INSERT… SELECT statement.

[img]http://www.pinaldave.com/bimg/duptr2.jpg[/img]

As there are duplicate values, the insert fails with an error and does not insert a single row into the table.


[b]Option 2: With IGNORE_DUP_KEY set ON[/b]

Now, once again create a new table and create a nonclustered index on it with the option IGNORE_DUP_KEY set ON. Try to insert data from the first table into this table using the INSERT… SELECT statement.

[img]http://www.pinaldave.com/bimg/duptr3.jpg[/img]

Even though there are duplicate values in the source data, the nonduplicate rows will be inserted; the duplicate values are simply ignored. A warning message is generated by the script, but the process is not stopped.
I hope that this example clarifies the use of this feature. Again, as I said earlier, this is a business-logic-related decision, and it should be used based on your need.

The complete script from the example is given here.

[CODE]
USE tempdb
GO
CREATE TABLE DupData (ID INT,
FirstName VARCHAR(100),
LastName VARCHAR(100),
City VARCHAR(100))
GO
INSERT INTO DupData (ID,FirstName,LastName,City)
VALUES(1,'John','Doe','LA');
INSERT INTO DupData (ID,FirstName,LastName,City)
VALUES(2,'Joe','Doe','LA');
INSERT INTO DupData (ID,FirstName,LastName,City)
VALUES(3,'John','Doe','LA');
INSERT INTO DupData (ID,FirstName,LastName,City)
VALUES(4,'Joseph','Doe','LA');
GO
-- Check data from Test Table.
SELECT *
FROM DupData
GO
-- Option 1 - With IGNORE_DUP_KEY set OFF
CREATE TABLE DupIndex (ID INT,
FirstName VARCHAR(100),
LastName VARCHAR(100),
City VARCHAR(100))
GO
--Create Unique Index on Table
CREATE UNIQUE NONCLUSTERED INDEX [IX_DupIndex_FirstName]
ON [dbo].[DupIndex]
(
[FirstName] ASC
) ON [PRIMARY]
GO
-- Insert will throw an error
-- Error will rollback transaction
INSERT INTO DupIndex (ID,FirstName,LastName,City)
SELECT ID,FirstName,LastName,City
FROM DupData
ORDER BY ID
GO
-- Table will contain NO rows
SELECT *
FROM DupIndex
GO
DROP TABLE DupIndex
GO
-- Option 2 - With IGNORE_DUP_KEY set ON
USE tempdb
GO
CREATE TABLE DupIndex (ID INT,
FirstName VARCHAR(100),
LastName VARCHAR(100),
City VARCHAR(100))
GO
--Create Unique Index on Table with IGNORE_DUP_KEY set ON
CREATE UNIQUE NONCLUSTERED INDEX [IX_DupIndex_FirstName]
ON [dbo].[DupIndex]
(
[FirstName] ASC
) WITH (IGNORE_DUP_KEY = ON) ON [PRIMARY]
GO
-- Insert will throw warning
-- Warning will not insert duplicate row
INSERT INTO DupIndex (ID,FirstName,LastName,City)
SELECT ID,FirstName,LastName,City
FROM DupData
ORDER BY ID
GO
-- Table will contain three rows
SELECT *
FROM DupIndex
GO
-- Clean up
DROP TABLE DupIndex
DROP TABLE DupData
GO
[/CODE]


[b] The server network address "TCP://SQLServer:5023" cannot be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational. (Microsoft SQL Server, Error: 1418)[/b]


While doing SQL Mirroring, we receive the following as the most common error:
[color=#ff0000]The server network address "TCP://SQLServer:5023" cannot be reached or does not exist.
Check the network address name and that the ports for the local and remote endpoints are operational.
(Microsoft SQL Server, Error: 1418)[/color]

The solution to the above problem is very simple and is as follows.

[b]Fix/WorkAround/Solution: [/b]Try all the suggestions one by one.

[b]Suggestion 1:[/b] Make sure that on Mirror Server the database is restored with NO RECOVERY option (This is the most common problem).

[b]Suggestion 2:[/b] Make sure that from Principal the latest LOG backup is restored to mirror server. (Attempt this one more time even though the full backup has been restored recently).

[b]Suggestion 3:[/b] Check whether you can telnet to your ports using the command TELNET ServerName Port, for example "telnet SQLServerName 5023".

[b]Suggestion 4:[/b] Make sure your firewall is turned off.

[b]Suggestion 5:[/b] Verify that the endpoints are started on the partners by using the [b]state[/b] or [b]state_desc[/b] column of the [b]sys.database_mirroring_endpoints[/b] catalog view. You can start an endpoint by executing an ALTER ENDPOINT statement.
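A quick check along those lines, as a minimal sketch (Mirroring is the endpoint name also used in Suggestion 6):

[CODE]
SELECT name, state_desc, role_desc
FROM sys.database_mirroring_endpoints;
-- If state_desc shows STOPPED, start the endpoint:
-- ALTER ENDPOINT Mirroring STATE = STARTED;
[/CODE]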

[b]Suggestion 6:[/b] Try the following command as one of the last options.

[CODE]
GRANT CONNECT ON ENDPOINT::Mirroring TO ALL
[/CODE]

[b]Suggestion 7:[/b] Delete the endpoints and recreate them.

Please note that some of the above suggestions can be a security threat to your system. Please use them responsibly and review your system with a security expert in your company.


[b] Fragmentation – Detect Fragmentation and Eliminate Fragmentation[/b]


[b]Q. What is Fragmentation? How to detect fragmentation and how to eliminate it?[/b]
A. Storing data non-contiguously on disk is known as fragmentation. Before learning to eliminate fragmentation, you should have a clear understanding of the types of fragmentation. We can classify fragmentation into two types:[list]
[*][b]Internal Fragmentation[/b]: When records are stored non-contiguously inside the page, then it is called internal fragmentation. In other words, internal fragmentation is said to occur if there is unused space between records in a page. This fragmentation occurs through the process of data modifications (INSERT, UPDATE, and DELETE statements) that are made against the table and therefore, to the indexes defined on the table. As these modifications are not equally distributed among the rows of the table and indexes, the fullness of each page can vary over time. This unused space causes poor cache utilization and more I/O, which ultimately leads to poor query performance.
[*][b]External Fragmentation[/b]: External fragmentation occurs when the physical storage of pages and extents is not contiguous on disk. When the extents of a table are not physically stored contiguously, switching from one extent to another causes more disk rotations; this is called extent fragmentation.
[/list]
Index pages also maintain a logical order of pages inside the extent. Every index page is linked with the previous and next page in the logical order of the column data. However, because of page splits, pages can become out-of-order. An out-of-order page is a page for which the next physical page allocated to the index is not the page pointed to by the next-page pointer in the current leaf page. This is called Logical Fragmentation.
Ideal non-fragmented pages are given below:
[img]http://www.pinaldave.com/bimg/frag1.jpg[/img]
[b]Statistics for table scan are as follows:[/b][list]
[*]Page read requests: 2
[*]Extent switches: 0
[*]Disk space used by table: 16 KB
[*]avg_fragmentation_in_percent: 0
[*]avg_page_space_used_in_percent: 100
[/list]
Following are fragmented pages:
[img]http://www.pinaldave.com/bimg/frag2.jpg[/img]
[b]In this case, the statistics for table scan are as follows:[/b][list]
[*]Page read requests: 6
[*]Extent switches: 5
[*]Disk space used by table: 48 KB
[*]avg_fragmentation_in_percent > 80
[*]avg_page_space_used_in_percent: 33
[/list]
[b]How to detect Fragmentation:[/b] We can get both types of fragmentation using the dynamic management function sys.dm_db_index_physical_stats. For the screenshot given below, the query is as follows:

[CODE]
SELECT OBJECT_NAME(OBJECT_ID), index_id,index_type_desc,index_level,
avg_fragmentation_in_percent,avg_page_space_used_in_percent,page_count
FROM sys.dm_db_index_physical_stats
(DB_ID(N'AdventureWorksLT'), NULL, NULL, NULL , 'SAMPLED')
ORDER BY avg_fragmentation_in_percent DESC
[/CODE]

[img]http://www.pinaldave.com/bimg/frag3.jpg[/img]

Along with other information, there are two columns that are important for detecting fragmentation, which are as follows:[list]
[*][b]avg_fragmentation_in_percent[/b]: This is a percentage value that represents external fragmentation. For a clustered table and the leaf level of index pages, this is logical fragmentation, while for a heap, this is extent fragmentation. The lower this value, the better it is. If this value is higher than 10%, some corrective action should be taken.
[*][b]avg_page_space_used_in_percent[/b]: This is the average percentage of page space used, and it represents internal fragmentation. The higher this value, the better it is. If this value is lower than 75%, some corrective action should be taken.
[/list]
[b]Reducing fragmentation:[/b][list]
[*][b]Reducing Fragmentation in a Heap: [/b]To reduce the fragmentation of a heap, create a clustered index on the table. Creating the clustered index rearranges the records in order and places the pages contiguously on disk.
[*][b]Reducing Fragmentation in an Index: [/b]There are three choices for reducing fragmentation, and we can choose one according to the percentage of fragmentation:
[list]
[*]If [b]avg_fragmentation_in_percent > 5% and < 30%, then use ALTER INDEX REORGANIZE[/b]: This statement is the replacement for DBCC INDEXDEFRAG; it reorders the leaf-level pages of the index into logical order. As this is an online operation, the index remains available while the statement is running (see the sketch after this list).
[*]If [b]avg_fragmentation_in_percent > 30%, then use ALTER INDEX REBUILD:[/b] This is the replacement for DBCC DBREINDEX; it rebuilds the index online or offline. In this case, we can also use the drop-and-re-create-index method.
[*][b]Drop and re-create the clustered index (Update: please note this option is strongly NOT recommended)[/b]: Re-creating a clustered index redistributes the data and results in full data pages. The level of fullness can be configured by using the FILLFACTOR option in CREATE INDEX.
[/list]
[/list]
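Both ALTER INDEX statements from the list above in one minimal sketch, assuming a hypothetical index IX_Contact on Person.Contact in AdventureWorks:

[CODE]
-- Fragmentation between 5% and 30%: reorganize (always an online operation)
ALTER INDEX IX_Contact ON Person.Contact REORGANIZE;
-- Fragmentation above 30%: rebuild; FILLFACTOR controls page fullness
ALTER INDEX IX_Contact ON Person.Contact REBUILD WITH (FILLFACTOR = 90);
[/CODE]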


[b] Get File Statistics Using fn_virtualfilestats[/b]


I want to know which MDF or NDF file is the busiest and doing most of the work. The following query gets those results very quickly.

[CODE]
SELECT DB_NAME(vfs.DbId) DatabaseName, mf.name,
mf.physical_name, vfs.BytesRead, vfs.BytesWritten,
vfs.IoStallMS, vfs.IoStallReadMS, vfs.IoStallWriteMS,
vfs.NumberReads, vfs.NumberWrites,
(Size*8)/1024 Size_MB -- size is in 8 KB pages; convert to MB
FROM ::fn_virtualfilestats(NULL,NULL) vfs
INNER JOIN sys.master_files mf ON mf.database_id = vfs.DbId
AND mf.FILE_ID = vfs.FileId
GO
[/CODE]

When you run the above query, you get a lot of valuable information, such as the size of each file and how many reads and writes have been performed against it. It also displays the read/write volume in bytes. If there has been any I/O stall (delay) in reads or writes, you can see that as well.


[b] Download Whitepaper – SSIS Operational and Tuning Guide – SSIS for Azure and Hybrid Data Movement – Leveraging a Hadoop cluster from SSIS[/b]


Microsoft recently released three interesting whitepapers regarding SSIS. If you are an SSIS enthusiast and work with hybrid data, these three whitepapers are essential references.

[b][url="http://technet.microsoft.com/en-us/library/jj873729.aspx"]SSIS Operational and Tuning Guide[/url][/b]

[b][url="http://technet.microsoft.com/en-us/library/jj901708.aspx"]SSIS for Azure and Hybrid Data Movement[/url][/b]

[b][url="http://technet.microsoft.com/en-us/library/jj720569.aspx"]Leveraging a Hadoop cluster from SQL Server Integration Services (SSIS)[/url][/b]

