Darling999 Posted February 25, 2013
LTT
Darling999 Posted February 25, 2013
Kaarthikeya, where have you gone? Topics aren't being posted anymore.
deals2buy Posted February 25, 2013
[quote name='DARLING...' timestamp='1361819262' post='1303327446'] Kaarthikeya, where have you gone? Topics aren't being posted anymore. [/quote] I haven't even been coming to DB lately, brother.
Darling999 Posted February 25, 2013
[quote name='Kaarthikeya' timestamp='1361822592' post='1303327909'] I haven't even been coming to DB lately, brother. [/quote] Yeah, that's what I noticed. What happened? Is everything okay?
9Pardhu Posted February 26, 2013 Author
To-do steps for a SQL Server DBA when the log file is full

A full log file is usually the result of the log drive filling up. To address this issue, take the following steps:

1. First, check whether the log drive is full.

2. If the log drive is 100% full, identify which databases are causing the log growth, then add an extra log file (on a drive with free space) to each affected database. Never kill the running session, because the rollback can cause a bigger problem. Always add an extra log file and let the transactions finish.

3. When there are no pending/open transactions, you can start shrinking. First check the log file stats:

[CODE]DBCC SQLPERF(LOGSPACE)[/CODE]

This shows each database name, its log size, and the percentage of log space used. Pay attention to databases that have a large log file AND low usage; always start with those.

4. Before shrinking a database, check what caused its log file to grow. For example, to shrink database10:

[CODE]SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'database10'[/CODE]

If log_reuse_wait_desc shows "NOTHING", the log file can be shrunk (although that is not always true in a production environment; see step 6). If it shows "LOG_BACKUP", you first need to take a log backup to clear the active log records. If it shows "ACTIVE_TRANSACTION", there are open transactions on the database; check them with:

[CODE]DBCC OPENTRAN('database10')  -- still using database10 as the example[/CODE]

Once the open transactions have finished, you can issue a manual CHECKPOINT so the inactive portion of the log can be reused (note that a checkpoint does not commit or roll back a transaction that is still open):

[CODE]USE database10
CHECKPOINT
GO[/CODE]

5. Then proceed to shrink the database log:

[CODE]USE database10
DBCC SHRINKFILE('database10_log', 500)  -- shrink database10's log file 'database10_log' from 1000 MB to 500 MB[/CODE]

6. Sometimes, even when log_reuse_wait_desc in step 4 shows "NOTHING", step 5 still fails. In that case, take another log backup and then retry step 5.
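Steps 2 and 4 mention adding an extra log file and taking a log backup without showing the commands. Here is a minimal sketch, still using the hypothetical database10; the file paths, file name, and sizes are assumptions for illustration, not values from the post:

[CODE]
-- Step 2: add a second log file on a drive that still has free space,
-- so in-flight transactions can keep writing while you investigate.
ALTER DATABASE database10
ADD LOG FILE (
    NAME = database10_log2,
    FILENAME = 'E:\SQLLogs\database10_log2.ldf',  -- assumed secondary log drive
    SIZE = 1024MB,
    FILEGROWTH = 256MB
);

-- Steps 4/6: back up the transaction log so the active log records
-- can be cleared (applies to FULL or BULK_LOGGED recovery models).
BACKUP LOG database10
TO DISK = 'E:\SQLBackups\database10_log.trn';  -- assumed backup path
[/CODE]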
deals2buy Posted February 26, 2013
[quote name='DARLING...' timestamp='1361824189' post='1303328099'] Yeah, that's what I noticed. What happened? Is everything okay? [/quote] Yeah, everything's okay; I've just been a bit busy.
deals2buy Posted February 28, 2013
[b][size=5]SSIS Basics: Using the Conditional Split[/size][/b]

[b]Adding a Connection Manager to Our SSIS Package[/b]

Before we do anything else, we need to set up a connection manager to the database that contains our source data. To do this, right-click the [b]Connection Managers[/b] window and then click [b]New OLE DB Connection[/b], as shown in Figure 1.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-Figure1-03862aa7-d057-43ef-9938-8910c3f869d8.jpg[/img]
[b]Figure 1: Adding an OLE DB connection manager[/b]

When the [b]Configure OLE DB Connection Manager[/b] dialog box appears, click [b]New[/b]. This launches the [b]Connection Manager[/b] dialog box. In the [b]Server name[/b] drop-down list, select the name of the SQL Server instance that hosts the database containing the source data you want to use for this exercise. (Note that it should be some version of the [b]AdventureWorks[/b] database.) Then configure the authentication type you use to connect to the database. Finally, in the [b]Connect to a database[/b] drop-down list, select the source database. On my system, I'm connecting to a SQL Server instance named [b]192.168.1.22\Cambridge[/b], using [b]SQL Server Authentication[/b] as my authentication type, and connecting to the [b]AdventureWorks[/b] database, as shown in Figure 2.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-Figure2-8596e7b8-5e3d-4818-8728-f0383e06868e.jpg[/img]
[b]Figure 2: Configuring an OLE DB connection manager[/b]

After you set up your connection manager, click [b]Test Connection[/b] to ensure that you can properly connect to the database. If you receive a message confirming that the test connection succeeded, click [b]OK[/b] to close the [b]Connection Manager[/b] dialog box. Your new connection manager should now be listed in the [b]Connection Managers[/b] window.

[b]Adding an OLE DB Source to Our Data Flow[/b]

Now that we've established a connection to our database, we can set up our data flow to retrieve data from the [b]Employee[/b] table and then split that data into two sets. That means we must first add a [b]Data Flow[/b] task to our control flow. To do so, drag the task from the [b]Control Flow Items[/b] section of the [b]Toolbox[/b] window to the control flow design surface, as shown in Figure 3.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-Figure3-fd1aa7ca-929f-422d-8ea0-1533e3de6710.jpg[/img]
[b]Figure 3: Adding a Data Flow task to the control flow[/b]

After you've added the [b]Data Flow[/b] task, double-click the task to go to the [b]Data Flow[/b] tab. Once there, drag an [b]OLE DB[/b] source from the [b]Toolbox[/b] window to the data flow design surface.

The next step is to configure the [b]OLE DB[/b] source. To do so, double-click the component to open the [b]OLE DB Source Editor[/b]. Because we already created an [b]OLE DB[/b] connection manager, it will be the default value in the [b]OLE DB connection manager[/b] drop-down list. That's the connection manager we want. Next, ensure that the [b]Table or view[/b] option is selected in the [b]Data access mode[/b] drop-down list, and then select the [b]HumanResources.Employee[/b] table from the [b]Name of the table or the view[/b] drop-down list.
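Incidentally, if you prefer the [b]SQL command[/b] data access mode over [b]Table or view[/b], a query like the following sketch would return the same source rows. The column list here is only an illustration; the columns actually kept are the ones selected in Figure 5 below, and they vary by [b]AdventureWorks[/b] version:

[CODE]
SELECT EmployeeID, NationalIDNumber, MaritalStatus, Gender, HireDate
FROM HumanResources.Employee
[/CODE]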
Your [b]OLE DB Source Editor[/b] should now look similar to the one shown in Figure 4.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-figure4.png[/img]
[b]Figure 4: Configuring the OLE DB Source Editor[/b]

Now we need to select the columns we want to include in our data flow. Go to the [b]Columns[/b] page of the [b]OLE DB Source Editor[/b]. You'll see a list of available columns in the [b]Available External Columns[/b] table, as shown in Figure 5. We don't want to include all those columns, however; we want only those shown in the [b]External Column[/b] list in the grid at the bottom of the page. To remove the unwanted columns, deselect them in the [b]Available External Columns[/b] table.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-Figure%205a-9f001bd8-9eec-421f-ac40-86926471484b.jpg[/img]
[b]Figure 5: Selecting which columns to include in our data flow[/b]

Once your selected columns are correct, click [b]OK[/b] to close the [b]OLE DB Source Editor[/b].

[b]Adding a Conditional Split Transformation to the Data Flow[/b]

Now we need to add the [b]Conditional Split[/b] transformation to the data flow. To do so, drag the transformation from the [b]Data Flow Transformations[/b] section of the [b]Toolbox[/b] window to the data flow design surface, beneath the [b]OLE DB[/b] source, as shown in Figure 6.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-Figure7a-5297851d-2545-4fcf-a5c3-7efc10ef6782.jpg[/img]
[b]Figure 6: Adding the Conditional Split transformation to the data flow[/b]

We now need to connect the [b]OLE DB[/b] source to the [b]Conditional Split[/b] transformation. Click the green arrow (the data flow path) at the bottom of the [b]OLE DB[/b] source and drag it to the [b]Conditional Split[/b] transformation, as shown in Figure 7.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-Figure8-0ee26984-e1a0-4869-8693-991647d0e4a5.jpg[/img]
[b]Figure 7: Connecting the source to the Conditional Split transformation[/b]

After we add the [b]Conditional Split[/b] transformation, we need to do something with it. As I mentioned previously, we're going to use the transformation to divide the data flow into two separate paths. To do this, double-click the [b]Conditional Split[/b] transformation to open the [b]Conditional Split Transformation Editor[/b]. The editor is divided into three main windows, as shown in Figure 8.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-Figure9-73b7544d-8dd9-493e-bcaf-cf6efafd2cc7.jpg[/img]
[b]Figure 8: Working with the Conditional Split Transformation Editor[/b]

Notice in Figure 8 that I've labeled the windows [b]1[/b], [b]2[/b] and [b]3[/b] to make them easier to explain. The windows provide the following functionality:[list=1]
[*]Columns and variables we can use in our expressions that define how to split the data flow.
[*]Functions we can use in our expressions that define how to split the data flow.
[*]Conditions that define how to split the data flow. These need to be set in priority order; any rows that evaluate to true for one condition will not be available to the condition that follows.
[/list]
For this example, we'll define our condition based on the [b]HireDate[/b] field in the data flow.
So the first step is to expand the [b]Columns[/b] node in window 1 and drag the [b]HireDate[/b] field to the [b]Condition[/b] column of the grid in window 3. When you drag the field to the [b]Condition[/b] column, SSIS populates the [b]Order[/b] and [b]Output Name[/b] columns, with all three columns written in red text, as shown in Figure 9.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-figure9.png[/img]
[b]Figure 9: Creating a condition that defines how to split the data flow[/b]

As you can see in Figure 9, SSIS populates the columns with default values. For the [b]Order[/b] column, stick with the default value of [b]1[/b]. For the [b]Output Name[/b] column, we'll provide a name for our outbound data path. (We'll be creating two data paths, based on our split condition.) On my system, I used the name [b]PreMar1999[/b], so type that or another name into the [b]Output Name[/b] column.

Now we must define the expression that determines how the condition will evaluate the data. This we do in the [b]Condition[/b] column. As previously stated, we'll split the data flow between those employees who started before 1 March 1999 and those who started on or after that date. So we need to define two conditions.

We've already been working on the first condition. To finish defining it, we need to add a comparison operator to the expression in the [b]Condition[/b] column, in this case the less-than ([b]<[/b]) symbol. There are two ways to add the operator: type it directly into the [b]Condition[/b] column (after the column name), or expand the [b]Operators[/b] node in window 2 and drag the less-than symbol to the [b]Condition[/b] column.

Next, we need to add a date for our comparison, which we add as a string literal enclosed in double quotes. However, because we're comparing the date to the [b]HireDate[/b] column, we need to convert the string to the [b]DT_DBDATE[/b] SSIS data type. Luckily, SSIS makes this very easy: we enclose the data type in parentheses and add it before the string value. Once we've done this, the expression in the [b]Condition[/b] column should look as follows:

[CODE]HireDate < (DT_DBDATE)"01-Mar-1999"[/CODE]

When you include the data type in this way, SSIS automatically converts the string literal to the specified data type (assuming that the string conforms to the data type's specifications). There are two ways you can add a data type to your expressions: either type it directly into the [b]Condition[/b] column, or expand the [b]Type Casts[/b] node in window 2 and drag the data type to the [b]Condition[/b] column.

Once you've defined your expression in the [b]Condition[/b] column, your first data path is ready to go. We now need to define our second data path. To do so, follow the same procedure as described above, only this time use the greater-than-or-equal-to ([b]>=[/b]) operator instead of the less-than one. Your second expression should be as follows:

[CODE]HireDate >= (DT_DBDATE)"01-Mar-1999"[/CODE]

That's all there is to it.
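Before running the package, you can sanity-check the two conditions with equivalent T-SQL against the source table. This is just a quick verification sketch using the article's column and date; the counts you get depend on which version of [b]AdventureWorks[/b] you're using:

[CODE]
-- Rows that should take the PreMar1999 output
SELECT COUNT(*) AS PreMar1999
FROM HumanResources.Employee
WHERE HireDate < '1999-03-01';

-- Rows that should take the PostMar1999 output
SELECT COUNT(*) AS PostMar1999
FROM HumanResources.Employee
WHERE HireDate >= '1999-03-01';
[/CODE]

Because every row satisfies one condition or the other, the two counts should add up to the table's total row count, which is also why the default output should stay empty.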
Your [b]Conditional Split Transformation Editor[/b] should now look similar to the one shown in Figure 10.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-figure10.png[/img]
[b]Figure 10: Setting up the Conditional Split Transformation Editor[/b]

Once you're satisfied that you've configured the [b]Conditional Split[/b] transformation the way you want it, click [b]OK[/b] to close the [b]Conditional Split Transformation Editor[/b].

[b]Adding Data Flow Destinations to the Data Flow[/b]

Now that we've split the data flow into multiple data paths, we need to add a destination for each of those paths. Let's start with the first data path. Drag an [b]OLE DB[/b] destination from the [b]Data Flow Destinations[/b] section of the [b]Toolbox[/b] window to the data flow design surface, somewhere beneath the [b]Conditional Split[/b] transformation. Next, drag the green data path arrow from the [b]Conditional Split[/b] transformation to the [b]OLE DB[/b] destination.

When you connect the data path to the destination, the [b]Input Output Selection[/b] dialog box appears, as shown in Figure 11. The dialog box lets us choose which output we want to direct toward the selected destination.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-Figure11-f3cb7fa8-f771-4d87-968a-ad5f036f4b6b.jpg[/img]
[b]Figure 11: Configuring the Input Output Selection dialog box[/b]

Notice that the dialog box includes the [b]Output[/b] drop-down list. These are the data path outputs available from the [b]Conditional Split[/b] transformation. In this case, the drop-down list will include three options:[list]
[*][b]PreMar1999[/b]
[*][b]PostMar1999[/b]
[*][b]Conditional Split Default Output[/b]
[/list]
We've just set up the first two, so these should be self-explanatory. However, there is a third output data path, [b]Conditional Split Default Output[/b], which captures any records that don't meet the conditions defined in the first two outputs. In our example, there shouldn't be any records in this category, but later we'll configure this output anyway, just to demonstrate how it works.

For the first [b]OLE DB[/b] destination, select the first option, [b]PreMar1999[/b], and then click [b]OK[/b]. SSIS will assign the output name to the data path.

We now need to configure the [b]OLE DB[/b] destination. Double-click the component to open the [b]OLE DB Destination Editor[/b]. Next to the [b]Name of the table or the view[/b] drop-down list, click the [b]New[/b] button. This launches the [b]Create Table[/b] dialog box, which includes a [b]CREATE TABLE[/b] statement that defines a table based on the data flow, as shown in Figure 12. The only change you might want to make to the statement is to modify the table name. I renamed mine to [b]PreMar1999[/b].

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-figure12.jpg[/img]
[b]Figure 12: Creating a table when configuring the OLE DB destination[/b]
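The generated statement will look something like the following sketch. The exact column list and data types depend on which columns you kept back in Figure 5, so treat these as illustrative assumptions rather than the dialog's literal output:

[CODE]
CREATE TABLE [PreMar1999] (
    [EmployeeID] int,          -- assumed columns; yours will match
    [MaritalStatus] nchar(1),  -- the data flow you configured earlier
    [Gender] nchar(1),
    [HireDate] datetime
)
[/CODE]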
Once you're satisfied with the [b]CREATE TABLE[/b] statement, click [b]OK[/b] to close the [b]Create Table[/b] dialog box. You should be returned to the [b]Connection Manager[/b] page of the [b]OLE DB Destination Editor[/b]. For all other settings on this page, stick with the default values, as shown in Figure 13.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-Figure13-231c6b83-809a-4cf9-a25d-f08d97b5f32a.jpg[/img]
[b]Figure 13: Using the default values in the OLE DB Destination Editor[/b]

Now go to the [b]Mappings[/b] page of the [b]OLE DB Destination Editor[/b] to check that the columns have mapped correctly. You can also click the [b]Preview[/b] button on that page to ensure that the data looks as you would expect. At this stage, however, the table is likely to appear empty. Now click [b]OK[/b] to close the [b]OLE DB Destination Editor[/b].

Next, you should consider renaming your [b]OLE DB[/b] destination to avoid confusion down the road. On my system, I renamed the destination [b]PreMar1999[/b]. To rename a component, right-click it, click [b]Rename[/b], and then type in the new name. Renaming components makes it easier to distinguish one from another and helps other developers better understand their purpose.

Once you've renamed your destination, you're ready to set up the other destinations. Repeat the configuration process for the [b]PostMar1999[/b] data path and the default data path. You'll notice that once a data path has been used, it's no longer available as an option. When you're finished configuring your [b]OLE DB[/b] destinations and their data paths, your data flow should look similar to the one shown in Figure 14.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-Figure14-dffb0b4f-a75f-4c6d-b85e-048a33a6e699.jpg[/img]
[b]Figure 14: The data flow split into three data paths[/b]

Now it's time to test how it works. To do this, click the [b]Execute[/b] button (the green arrow) on the menu bar. If the package runs successfully, it should look similar to the data flow shown in Figure 15.

[img]https://www.simple-talk.com/iwritefor/articlefiles/1730-Figure15-fd8fb598-5347-473b-8c67-77619112f1c7.jpg[/img]
[b]Figure 15: Running the SSIS package[/b]

If all your data flow components turn green, your package is probably running correctly. However, another important indicator is the number of rows loaded into each table. As Figure 15 indicates, the package inserted 136 rows into the [b]PreMar1999[/b] table and 154 rows into the [b]PostMar1999[/b] table. (These numbers can vary, depending on which version of the [b]AdventureWorks[/b] database you're using.) As expected, no rows were passed along the default data path. If you were to check the source data (the [b]Employee[/b] table), you should come up with a matching number of records.

[b]Summary[/b]

In this article, we created an SSIS package with a single data flow. We added a [b]Conditional Split[/b] transformation to the data flow in order to split the data into multiple paths, and then directed each of those data paths to a different destination table. Our destinations did not have to be SQL Server tables, however; they could have been flat files, spreadsheets, or any other destination that SSIS supports. In future articles, we'll use the [b]Conditional Split[/b] with other data flow components, such as the [b]Merge Join[/b] transformation, to insert, update, and delete data.
deals2buy Posted March 2, 2013
Microsoft SQL Server Free Event in Virginia
[url="http://www.andhrafriends.com/topic/389007-virginia-lo-unde-sql-server-guys-free-event-on-microsoft-sql-server/"]http://www.andhrafri...oft-sql-server/[/url]
captcha Posted March 2, 2013
I just joined the group, and it was a pleasant surprise to see this topic! There's a lot of good material here. Thanks and appreciation to all the posters & followers of this topic.
deals2buy Posted March 4, 2013
[quote name='captcha' timestamp='1362254050' post='1303356113'] I just joined the group, and it was a pleasant surprise to see this topic! There's a lot of good material here. Thanks and appreciation to all the posters & followers of this topic. [/quote] Welcome to the league
calmandquiet Posted March 5, 2013
Calling all SSAS experts: if you have any useful info, please share it. I have a few doubts in SSAS; please clarify them if you can.
1) In SSAS, do we create only the cubes, or do we also create the facts and dimensions ourselves?
2) How do we build a data warehouse? (Building a DW takes many days, right?)
3) What are typical SSAS tasks like...?
[img]http://www.desigifs.com/sites/default/files/brahmam_style15_1.gif?1290369188[/img]
deals2buy Posted March 5, 2013
Calling all SSAS experts: if you have any useful info, please share it. I have a few doubts in SSAS; please clarify them if you can.
1) In SSAS, do we create only the cubes, or do we also create the facts and dimensions ourselves?
[b]AFAIK, facts and dimensions are base tables, and you build cubes on top of those tables.[/b]
2) How do we build a data warehouse? (Building a DW takes many days, right?)
[b]First tell me what you mean by a DW.[/b]
3) What are typical SSAS tasks like...?
[b]It depends. You may need to write MDX queries sometimes, or you may just build the cubes using the tools; it varies from workplace to workplace.[/b]
[b][color=#ff0000]But whatever it is, you'll have to put in the hard work.[/color][/b]
PS: I'm no SSAS expert; I'm just sharing what I know, that's all.
calmandquiet Posted March 5, 2013
Calling SSAS experts
[b]AFAIK, facts and dimensions are base tables, and you build cubes on top of those tables.[/b]
----So what is the SSAS developer role: just creating cubes and generating reports, or also creating the fact and dimension tables?
[b]First tell me what you mean by a DW.[/b]
--Arranging the database into a schema (star or snowflake) with dimensions and facts, right? Which is OLAP.
3) What are typical SSAS tasks like...?
[b]It depends. You may need to write MDX queries sometimes, or you may just build the cubes using the tools; it varies from workplace to workplace.[/b]
---So you're saying the SSAS job role is mainly MDX queries and creating cubes?
[b][color=#FF0000]But whatever it is, you'll have to put in the hard work.[/color][/b]
----I'm trying. Creating cubes is easy enough, but I can't figure out how to create the dimensions and facts (the schema).
[img]http://www.desigifs.com/sites/default/files/brahmam_style15_1.gif?1290369188[/img]
calmandquiet Posted March 5, 2013
Has anyone built SSIS packages to import data daily from OLTP applications into a data warehouse?
loveindia Posted March 5, 2013
[quote name='calmandquiet' timestamp='1362496130' post='1303368874'] Calling SSAS experts [b]AFAIK, facts and dimensions are base tables, and you build cubes on top of those tables.[/b] ----So what is the SSAS developer role: just creating cubes and generating reports, or also creating the fact and dimension tables? [b]First tell me what you mean by a DW.[/b] --Arranging the database into a schema (star or snowflake) with dimensions and facts, right? Which is OLAP. 3) What are typical SSAS tasks like...? [b]It depends. You may need to write MDX queries sometimes, or you may just build the cubes using the tools; it varies from workplace to workplace.[/b] ---So you're saying the SSAS job role is mainly MDX queries and creating cubes? [b][color=#FF0000]But whatever it is, you'll have to put in the hard work.[/color][/b] ----I'm trying. Creating cubes is easy enough, but I can't figure out how to create the dimensions and facts (the schema). [img]http://www.desigifs.com/sites/default/files/brahmam_style15_1.gif?1290369188[/img] [/quote]
Facts and dimensions are just like tables in an RDBMS. The first thing to consider before creating facts and dimensions: should you follow the Inmon methodology or the Kimball methodology? These are loosely associated with the SNOWFLAKE SCHEMA (normalized) and STAR SCHEMA (dimensional) designs, respectively, and you should know which one to pick and why. Then ask yourself: have I ever done data modelling before? If the answer is yes, go ahead and create the tables (facts and dimensions), primary keys, foreign keys, and indexes, and then build the processes that pull data into your data warehouse. In short, this is all you need to know to define facts and dimensions.
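To make "create tables (facts and dimensions), primary keys, foreign keys, and indexes" concrete, here is a minimal star-schema sketch in T-SQL. The table and column names (DimEmployee, FactSales, and so on) are purely illustrative assumptions, not anything from this thread:

[CODE]
-- One dimension table: a surrogate key plus descriptive attributes.
CREATE TABLE dbo.DimEmployee (
    EmployeeKey int IDENTITY(1,1) PRIMARY KEY,  -- surrogate key
    EmployeeID  int NOT NULL,                   -- business key from the OLTP source
    HireDate    date NOT NULL,
    Gender      nchar(1) NOT NULL
);

-- One fact table: foreign keys to the dimensions plus numeric measures.
CREATE TABLE dbo.FactSales (
    DateKey     int NOT NULL,
    EmployeeKey int NOT NULL
        REFERENCES dbo.DimEmployee (EmployeeKey),
    SalesAmount money NOT NULL
);

-- Index the join column that most queries will filter on.
CREATE INDEX IX_FactSales_EmployeeKey ON dbo.FactSales (EmployeeKey);
[/CODE]

In a star schema, the fact table joins directly to denormalized dimensions like this one; in a snowflake design, DimEmployee itself would be split into further normalized tables.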