ghazni Posted December 6, 2012 [quote name='deals2buy' timestamp='1354754147' post='1302902707'] You said you landed some job in SC, right? Where are you now, Richmond? [/quote] yupp...........
deals2buy Posted December 6, 2012 [quote name='deals2buy' timestamp='1354662563' post='1302897133'] I have the following set options [color=#0000ff]SET[/color] NUMERIC_ROUNDABORT [color=#0000ff]OFF[/color][color=#808080];[/color] [color=#0000ff]SET[/color] ANSI_PADDING[color=#808080],[/color] ANSI_WARNINGS[color=#808080],[/color] CONCAT_NULL_YIELDS_NULL[color=#808080],[/color] ARITHABORT[color=#808080],[/color] QUOTED_IDENTIFIER[color=#808080],[/color] ANSI_NULLS [color=#0000ff]ON[/color][color=#808080];[/color] I then execute the following T-SQL statement: CREATE TABLE "SELECT" ("TABLE" int) GO; Is the table (SELECT) created? [/quote] [b]Answer: [/b]The table is created. [b]Explanation: [/b]From [url="http://www.sqlservercentral.com/links/1427054/284181"]http://msdn.microsoft.com/en-us/library/aa259228(v=sql.80).aspx[/url]: When SET QUOTED_IDENTIFIER is ON, all strings delimited by double quotation marks are interpreted as object identifiers. Therefore, quoted identifiers do not have to follow the Transact-SQL rules for identifiers; they can be reserved keywords and can include characters not usually allowed in Transact-SQL identifiers.
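To make the explanation concrete, here is a small sketch (using only the quiz's own reserved-word identifiers) of the documented quoted-identifier behavior:

```sql
SET QUOTED_IDENTIFIER ON;
GO
-- With QUOTED_IDENTIFIER ON, double-quoted strings are object identifiers,
-- so even reserved keywords like SELECT and TABLE are legal names.
CREATE TABLE "SELECT" ("TABLE" int);
GO
INSERT INTO "SELECT" ("TABLE") VALUES (1);
SELECT "TABLE" FROM "SELECT";
GO
DROP TABLE "SELECT";
GO
```

Square-bracket delimiters ([SELECT]) would work the same way regardless of the QUOTED_IDENTIFIER setting.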
deals2buy Posted December 6, 2012 [quote name='ghazni' timestamp='1354754198' post='1302902709'] yupp........... [/quote] Cool... what are you working on?
deals2buy Posted December 6, 2012 [b]Dynamic PIVOT CLR[/b]

In my quest to find the perfect PIVOT, I have run across a few different methodologies, some of them found right here. All of them, in some shape or form, required executing dynamic SQL, and the dynamic nature of a PIVOT would require you to have a custom query for each different PIVOT. My methodology doesn't differ completely from the methods I have seen, but it tries to take the best of all approaches and put them in one succinct implementation. My solution utilizes a .NET CLR stored procedure, which takes the following parameters:

[b]@query[/b] nvarchar(4000) - This is where you do your select, always selecting into #temp, which will be in scope. I don't like using temp tables, but it is the easiest way to accomplish this task.
[b]@pivotColumn[/b] nvarchar(4000) - In retrospect I should have limited this to the maximum size of a column name, but this is just an educational post and you can change it to your liking. As it states, this is the column on which you want to pivot your data.
[b]@selectCols[/b] nvarchar(4000) - A comma-delimited list of columns to select from. This must include the aggregate field as well as the @pivotColumn. I would also suggest another field for the row analytics, e.g. PayMethod, OrderHour, ProductTotal, where PayMethod or OrderHour could be my pivot column and ProductTotal my aggregate.
[b]@aggCols[/b] nvarchar(4000) - Your aggregate, e.g. 'sum(ProductTotal)'.
[b]@orderBy[/b] nvarchar(4000) - You guessed it: how you want the result ordered, typically by one of your select columns and not the pivot or aggregate column.
Here is a sample call:

DECLARE @query nvarchar(4000)
DECLARE @pivotColumn nvarchar(4000)
DECLARE @selectCols nvarchar(4000)
DECLARE @aggCols nvarchar(4000)
DECLARE @orderBy nvarchar(4000)

set @query = 'select PayMethod, datepart(hh, OrderDate) as OrderHour, ProductTotal into #temp from dbo.Orders (nolock) where OrderDate between getdate()-1 and getdate() and division = ''15'''
set @pivotColumn = 'PayMethod'
set @selectCols = 'PayMethod, OrderHour, ProductTotal'
set @aggCols = 'sum(ProductTotal)'
set @orderBy = ''

EXECUTE dbo.clrDynamicPivot @query, @pivotColumn, @selectCols, @aggCols, @orderBy
GO

So now let's talk about the CLR procedure. If you have never built a SQL CLR procedure, perhaps you can grab one of your .NET developers to help you out; I wrote a blog post on how to create a CLR function: [url="http://throwex.blogspot.com/2010/11/sql-clr-function.html"]http://throwex.blogspot.com/2010/11/sql-clr-function.html[/url]. Once you feel confident enough to build a CLR function or procedure, you can continue. I will try to explain what is going on in the DynamicPivot.cs file that was attached to the post. clrDynamicPivot is the public method that will ultimately be exposed to you as a stored procedure; as you can see, it has the inputs I described above, and the name of the procedure will be "clrDynamicPivot" if you do not change anything. This method takes the query that you wrote and executes it, creating a temporary table #temp with all the fields you specified. Once that query has executed successfully, it calls a method that collects the unique values of your pivot column to use as column names. As it builds the final select from all the data you passed in, it utilizes the same database connection that your underlying stored procedure is using, so there is no need to pass in credentials.
[Microsoft.SqlServer.Server.SqlProcedure(Name="clrDynamicPivot")]
public static void clrDynamicPivot(SqlString query, SqlString pivotColumn, SqlString selectCols, SqlString aggCols, SqlString orderBy)
{
    string stmt = string.Empty;
    try
    {
        CreateTempTable(query);
        string pivot = GetPivotData(pivotColumn.ToString());
        stmt = string.Format("select * from ( select {0} from #temp ) as t pivot ( {1} for {2} in ( {3} )) as p {4}",
            selectCols.ToString(), aggCols.ToString(), pivotColumn.ToString(), pivot, orderBy.ToString());
        using (SqlConnection cn = new SqlConnection("Context Connection=True"))
        {
            SqlCommand cmd = cn.CreateCommand();
            cmd.CommandText = stmt;
            cn.Open();
            SqlDataReader reader = cmd.ExecuteReader();
            SqlContext.Pipe.Send(reader);
        }
    }
    catch (Exception ex)
    {
        throw new Exception(string.Format("clrDynamicPivot Error stmt:{0}", stmt), ex);
    }
}

This method is used for generating the #temp table, utilizing the same database connection that your procedure is running under. It returns void, so it doesn't return anything; if it executes properly, you will have the #temp table populated with the results of your query.

public static void CreateTempTable(SqlString query)
{
    using (SqlConnection sqlconn = new SqlConnection("Context Connection=True"))
    {
        SqlCommand sqlCmd = sqlconn.CreateCommand();
        sqlCmd.CommandText = query.ToString();
        sqlconn.Open();
        sqlCmd.ExecuteNonQuery();
    }
}

This method's job is to select all of the unique values from your #temp table based upon your pivot column. It then loops through a SqlDataReader and generates a comma-delimited string of unique column names that could only be determined at run time. When you design your query, there is no way for you to know the set of unique values that will ultimately be selected; if you did know that information, there would be no need for a dynamic pivot at all. Keep in mind that I am only looking at 3 different types of fields.
This should either be expanded for other types of data, or if someone can find a better way then by all means do it. Once again, this is just an educational post.

public static string GetPivotData(string pivotColumn)
{
    string stmt = string.Format("select distinct {0} from #temp", pivotColumn);
    string pivotCols = string.Empty;
    using (SqlConnection cn = new SqlConnection("Context Connection=True"))
    {
        SqlCommand cmd = cn.CreateCommand();
        cmd.CommandText = stmt;
        cn.Open();
        using (SqlDataReader dr = cmd.ExecuteReader())
        {
            while (dr.Read())
            {
                if (dr.GetFieldType(0) == typeof(System.Int32))
                    pivotCols += "[" + dr.GetInt32(0) + "],";
                if (dr.GetFieldType(0) == typeof(System.Decimal))
                    pivotCols += "[" + dr.GetDecimal(0) + "],";
                if (dr.GetFieldType(0) == typeof(System.String))
                    pivotCols += "[" + dr.GetString(0) + "],";
            }
        }
    }
    return pivotCols.Remove(pivotCols.Length - 1);
}

The results that are returned are shown below. [img]http://www.sqlservercentral.com/Images/16817.jpg[/img]
deals2buy Posted December 6, 2012 [b]SQL Server 2012 Integration Services - Implementing Package Security using Access Control[/b] http://www.databasejournal.com/features/mssql/ssis-2012-implementing-package-security-using-access-control.html
Darling999 Posted December 6, 2012 [quote name='deals2buy' timestamp='1354748379' post='1302902437'] I only put one record as an example... I'll have multiple records... also, does it work if I write an aggregate function in the filter? [/quote] Yeah, it works; just make the query above a bit more dynamic and that will do.
Darling999 Posted December 6, 2012

update T1
SET T1.city = T.City
From T1
INNER JOIN (
    Select Id, max(T2.startdate) Maxstrt, City
    From T2
    group by ID, City
) as T ON T1.ID = T.ID

This works, check it out.
deals2buy Posted December 10, 2012 [b]Download SQL Server Data Type Conversion Chart[/b]

Datatypes are very important concepts in SQL Server, and there is quite often a need to convert a value from one datatype to another. I have seen that developers often get confused when they have to convert datatypes. There are two important concepts when it comes to datatype conversion.

[b]Implicit Conversion:[/b] Implicit conversions are those conversions that occur without specifying either the CAST or CONVERT function.
[b]Explicit Conversion:[/b] Explicit conversions are those conversions that require the CAST or CONVERT function to be specified.

What this means is that if you are trying to convert a value from datetime2 to time or from tinyint to int, SQL Server will convert it for you automatically (implicit conversion). However, if you are attempting to convert timestamp to smalldatetime or datetime to int, you will need to convert explicitly using either the CAST or CONVERT function with the appropriate parameters. Let us see a quick example of each.

[b]Implicit Conversion:[/b] [img]http://www.pinaldave.com/bimg/conversion1.jpg[/img]
[b]Explicit Conversion:[/b] [img]http://www.pinaldave.com/bimg/conversion2.jpg[/img]

You can see from the above examples how we need both types of conversion in different situations. There are so many different datatypes that it is humanly impossible to know which conversions are implicit and which require explicit conversion; additionally, there are cases where the conversion is not possible at all. Microsoft has published a chart where a grid displays the various conversion possibilities as a quick guide.

[b][url="http://www.microsoft.com/en-us/download/details.aspx?id=35834"]Download SQL Server Data Type Conversion Chart[/url][/b]
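In case the screenshots above do not load, here is a minimal sketch of both kinds of conversion (the variable names are invented for illustration):

```sql
-- Implicit conversion: tinyint widens to int with no CAST/CONVERT needed.
DECLARE @tiny  tinyint = 200;
DECLARE @whole int     = @tiny;     -- succeeds implicitly

-- Explicit conversion: datetime to int requires CAST or CONVERT.
DECLARE @now datetime = GETDATE();
SELECT CAST(@now AS int)  AS ViaCast,
       CONVERT(int, @now) AS ViaConvert;
```

Both CAST and CONVERT do the same job here; CAST is ANSI-standard while CONVERT is T-SQL-specific and additionally accepts a style parameter for formatting.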
deals2buy Posted December 10, 2012 [b]Stairway to PowerPivot and DAX - Level 2: The DAX COUNTROWS() and FILTER() Functions[/b] http://www.sqlservercentral.com/articles/Stairway+Series/90386/
deals2buy Posted December 10, 2012 Database Deployment: The Bits - Copying Data Out http://www.simple-talk.com/sql/t-sql-programming/database-deployment-the-bits---copying-data-out/ [b][url="http://www.sqlservercentral.com/blogs/steve_jones/2012/11/29/sql-server-thumbnail-metrics-os-memory/"]SQL Server Thumbnail Metrics – OS Memory[/url][/b]
deals2buy Posted December 10, 2012

Suppose I have two schemas: person and beneficiary. I create multiple tables with the name "malls" in different schemas. Which statements will succeed? Give the answers as success or failure in the order of the statements listed below.

create table person.malls(id int)
go
create table beneficiary.malls(id int)
go
create table malls(id int)
go
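A sketch of why all three statements can succeed, assuming both schemas already exist and the caller's default schema is dbo: the schema is part of the table's two-part name, so the three tables are distinct objects even though the bare names collide.

```sql
-- Assumed setup: both schemas exist in the current database.
CREATE SCHEMA person;
GO
CREATE SCHEMA beneficiary;
GO
CREATE TABLE person.malls(id int);       -- success: person.malls
GO
CREATE TABLE beneficiary.malls(id int);  -- success: beneficiary.malls
GO
CREATE TABLE malls(id int);              -- success: resolves to the caller's
GO                                       -- default schema, e.g. dbo.malls
```

The unqualified CREATE TABLE would fail only if a malls table already existed in the caller's default schema.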
deals2buy Posted December 10, 2012 [b]Data Distribution with SQL Server Replication[/b] [b]Introduction[/b] Ensuring that data is in “the right place at the right time” is increasingly critical, as the database has become the linchpin in corporate technology infrastructure, driving customer interactions, revenues, and decision making. The customer-facing OLTP servers hold real-time data critical to the decision-making process supported by the OLAP servers; the challenge is how to make the OLTP data available where and when it is needed. Database replication is a solution for distributing data that has been widely studied for over 20 years, providing a mechanism to increase performance and availability as well as to distribute data to support business requirements. However, replication also creates “a large and complex distributed system with intricate dependencies and hidden communications channels” ([url="http://www.computer.org/csdl/proceedings/icde/2010/5445/00/05447893-abs.html"]Konstantinos Krikellas, 2010[/url]), requiring experienced DBAs for maintenance and support. The goal of this paper is to provide a foundation for understanding data replication as well as a discussion of the criteria for selecting an appropriate replication technology. [b]Background[/b] “Database replication is the process of maintaining multiple copies of data items in different locations called replicas” ([url="http://dl.acm.org/citation.cfm?id=1920847"]Bettina Kemme, 2010[/url]). Distributing data to disparate servers has been a research topic for more than two decades, during which time multiple approaches to maintaining transactional atomicity, consistency, isolation, and durability (ACID) have been investigated. “Ten years ago, the theoretical basis for database replication revolved around the classic concepts of serializability and locking” (Bettina Kemme, 2010).
In these schemes, ACID was ensured using a 2-phase-commit protocol where locks were obtained for read operations while write operations used distributed locks. The overhead costs of locks led some researchers to suggest quorums as a more efficient approach for ensuring the consistency of distributed data; however, many operational issues were identified with these approaches. The seminal paper, [url="http://research.microsoft.com/apps/pubs/default.aspx?id=68247"][i][u]Dangers of Replication and a Solution[/u][/i][/url] (Jim Gray, 1996), suggested that there was an exponential relationship between the number of replicas and “the transaction response times, the conflict probability, and the deadlock rates” (Bettina Kemme, 2010). Research conducted by Jim Gray led to new proposals for replication that eased the restrictions for consistency in order to increase responsiveness. Early research in replication, as described above, is commonly referred to as “eager” or synchronous replication, “which keep the replicas synchronized within transaction boundaries” ([url="http://dl.acm.org/citation.cfm?id=1045671"]Christian Plattner, 2004[/url]). “Eager systems conform to 1-copy-serialization: the resulting schedules are equivalent to a serial schedule on a single database” (Christian Plattner, 2004). Confirming research by Jim Gray, Plattner and Alonso determined that eager replication results in very high communication overhead and a probability of deadlocks “proportional to the third power of the number of replicas” (Christian Plattner, 2004). These issues led to a revised view in the database community “that one could get either performance by sacrificing consistency (lazy replication approaches) or consistency at the cost of performance and scalability (eager replication approaches)” (Bettina Kemme, 2010).
In contrast to eager replication, lazy or asynchronous replication “propagates the updates of a transaction once it has already committed” ([url="http://www.igi-global.com/chapter/survey-approaches-database-replication/20762"]F. D. Muñoz-Escoí, 2009[/url]). Lazy replication improves performance, allowing faster transactional rates; however, this is achieved by sacrificing replica consistency, a choice that was not well received by some in the research community who found it difficult to accept “the fact that it was possible to implement (and commercialize!) a system with such ill-defined consistency guarantees…” (Bettina Kemme, 2010). It was, in fact, the need to commercialize a solution that led to the decision to implement lazy replication as the preferred solution in the real world, notwithstanding the issues of potentially stale data at the replicas as well as the need to implement mechanisms to resolve update conflicts (Christian Plattner, 2004). [b]SQL Server Replication[/b] Microsoft implements the Replication subsystem in SQL Server using an asynchronous (lazy) replication solution that provides a robust conflict resolution mechanism. As with any lazy replication implementation there is a possibility of stale data at the replicas; however, the consistency tradeoff is well worth the achievable performance of SQL Server Replication. Additionally, the Replication subsystem includes tools to manage data consistency across the replicas, allowing for a high degree of confidence in the data. There are many excellent references available describing the architecture and components of the SQL Server Replication subsystem; this paper will not include redundant discussion of these topics. Instead, the remaining sections of this paper will discuss the criteria for selecting a replication mechanism. [b]Replication Scenarios[/b] Replication is used for “two complementary features: performance improvement and high availability” (F. D. Muñoz-Escoí, 2009).
Performance can be increased with replication because each replicated server can respond to queries as a read-only replica, requiring no coordination among the servers; this approach may also provide server redundancy, where the failure of any one replica does not impact data accessibility. SQL Server supports this configuration with its Transactional Replication mechanism. Additionally, for replication configurations requiring replicas to update their local data, SQL Server provides a Merge Replication mechanism with default and customizable conflict resolvers available to meet any business logic need. SQL Server also provides a Snapshot Replication mechanism, which creates a point-in-time read-only copy of data on the replica. For clarity, the population of decision support systems is encompassed within the performance improvement feature as defined by Muñoz-Escoí. A major consideration when selecting a replication mechanism is the data modifications and data-type limitations required to support the internal replication mechanisms. Transactional and snapshot replication have no intrusive data modification requirements; e.g. the former manages data distribution via the transaction log while the latter creates and applies a point-in-time copy of the replicated data. Transactional replication does require that all tables have a primary key defined; additionally, large object types require changes to replication configuration parameters. Merge replication, in contrast, requires data augmentation in order to synchronize data updates across replicas. The major Merge replication requirements and limitations are listed below:

TEXT, NTEXT, and IMAGE data types are not replicated.
Any foreign keys in the database must be labeled NOT FOR REPLICATION to avoid conflicts.
Any user-defined triggers must be labeled NOT FOR REPLICATION to avoid conflicts.
IDENTITY columns must be labeled NOT FOR REPLICATION.
IDENTITY values must be managed on a site-by-site basis.
There are limits on the amount of data that may be replicated, as measured by row and column size.
TIMESTAMP values are not replicated; they are regenerated.
Every replicated table must have a UNIQUEIDENTIFIER column defined.
Data inserted using BULK INSERT commands will not be replicated.

[i]Table 1 Merge Replication Requirements and Limitations[/i]

In my experience there are several scenarios where data replication is commonly utilized: data synchronization to a reporting server; data synchronization to replica(s) in support of business needs (i.e. off-loading data processing, stand-by server); and data synchronization of remote semi-connected replicas. The Replication subsystem defines specific criteria for the implementation of each replication mechanism, some requiring schema modifications, so many considerations must be weighed in designing a replication topology. [b]Criteria for Selecting a Replication Mechanism[/b] Distributing data across SQL Server instances in any enterprise environment requires careful consideration of a myriad of issues. The Replication subsystem provides a range of options as well as customizable configurations to satisfy most data distribution needs; however, given the available options, the selection of a specific replication mechanism may be daunting for the uninitiated. In an attempt to provide a starting point in the selection process, I have developed the decision flow charts in Figures 1, 2, and 3 below. Please note that these flow charts represent high-level considerations and should not be interpreted as definitive – they are a starting point for a more thorough analysis. [img]http://www.sqlservercentral.com/Images/16839.gif[/img] [img]http://www.sqlservercentral.com/Images/16842.gif[/img] [img]http://www.sqlservercentral.com/Images/16843.gif[/img] [b]Summary[/b] Replication is a powerful mechanism supporting the distribution of data throughout an enterprise.
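As a small illustration of a couple of the merge replication requirements above (the table and constraint names here are invented; the NOT FOR REPLICATION clause and ROWGUIDCOL column are the standard mechanisms):

```sql
-- Hypothetical table prepared for merge replication: the IDENTITY seed and
-- the foreign key are excluded from replication-agent activity, and the
-- required uniqueidentifier column lets merge replication track each row.
CREATE TABLE dbo.SampleOrders
(
    OrderID    int IDENTITY(1,1) NOT FOR REPLICATION PRIMARY KEY,
    CustomerID int NOT NULL,
    rowguid    uniqueidentifier ROWGUIDCOL
               DEFAULT NEWSEQUENTIALID() UNIQUE,
    CONSTRAINT FK_SampleOrders_Customers
        FOREIGN KEY (CustomerID)
        REFERENCES dbo.Customers (CustomerID)
        NOT FOR REPLICATION
);
```

If the ROWGUIDCOL column is omitted, the merge publication wizard adds one automatically when the table is published.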
Significant research has led the industry to adopt lazy or asynchronous replication implementations in order to support the high levels of performance necessary in the real world. The benefits of lazy replication come at a cost - data consistency; however, with replication properly configured and administered, the data consistency issues may be mitigated, thereby allowing high-performance data distribution. Commercial replication systems have evolved from the research community to satisfy the requirements of business by providing abstractions from the complexities of the underlying replication mechanisms. Microsoft, in its mission to provide easy-to-use tools for data management, has incorporated the technologies necessary for distributing data within the SQL Server product line. The Replication subsystem, included with various feature sets in every edition of SQL Server, provides the ability to natively distribute data throughout the enterprise with no additional licensing fees. The Replication subsystem in SQL Server allows for implementations supporting a variety of data distribution needs while minimizing implementation and administrative requirements. Transactional replication supports real-time data synchronization with low overhead for offline processing, stand-by servers, or reporting servers. Merge replication supports data synchronization across intermittently connected replicas, while snapshot replication is most useful for distributing static data representing a point in time from source to replicas during off-peak hours.
deals2buy Posted December 10, 2012 [b][url="http://www.sqlservercentral.com/blogs/sqltact/2012/12/03/management-studio-database-diagram-owners/"]Management Studio Database Diagram Owners[/url][/b]
deals2buy Posted December 10, 2012 [quote name='DARLING...' timestamp='1354814571' post='1302905442'] update T1 SET T1.city = T.City From T1 INNER JOIN ( Select Id, max(T2.startdate) Maxstrt, City From T2 group by ID, City) as T ON T1.ID = T.ID This works, check it out. [/quote] Bhayya, this isn't working for me.
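One likely reason the quoted query misbehaves is that grouping by both ID and City yields one row per (ID, City) pair, so an ID with several cities joins to multiple rows. A sketch of the "latest city per ID" update using ROW_NUMBER, assuming the same hypothetical T1/T2 tables and that startdate picks the winning row:

```sql
;WITH Latest AS
(
    SELECT ID, City,
           ROW_NUMBER() OVER (PARTITION BY ID
                              ORDER BY startdate DESC) AS rn
    FROM T2
)
UPDATE T1
SET    T1.city = L.City
FROM   T1
JOIN   Latest AS L
       ON L.ID = T1.ID
      AND L.rn = 1;   -- only the newest T2 row per ID is used
```

ROW_NUMBER guarantees exactly one qualifying row per ID, which the GROUP BY version cannot.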
mtkr Posted December 10, 2012 ...